Instruction: Is EuroSCORE useful in the prediction of extended intensive care unit stay after cardiac surgery? Abstracts: abstract_id: PUBMED:19307134 Is EuroSCORE useful in the prediction of extended intensive care unit stay after cardiac surgery? Objective: Risk stratification allows preoperative assessment of cardiac surgical risk faced by individual patients and permits retrospective analysis of postoperative complications in the intensive care unit (ICU). The aim of this single-center study was to investigate the prediction of extended ICU stay after cardiac surgery using both the additive and logistic model of the European System for Cardiac Operative Risk Evaluation (EuroSCORE). Methods: A retrospective observational study was conducted. We collected clinical data of 1562 consecutive patients undergoing cardiac surgery over a 2-year period at the Antwerp University Hospital, Belgium. EuroSCORE values of all patients were obtained. The outcome measure was the duration of ICU stay in days. The predictive performance of EuroSCORE was analyzed by the discriminatory power of a receiver operating characteristic (ROC) curve. Each EuroSCORE value was used as a theoretical cut-off point to predict duration of ICU stay. Three subsequent ICU stays were defined as prolonged: more than 2, 5 and 7 days. ROC curves were constructed for both the additive and logistic model. Results: Patients had a median ICU stay of 2 days and a mean ICU stay of 5.5 days. Median additive EuroSCORE was 5 (range, 0-22) and logistic EuroSCORE was 3.94% (range, 0.00-87.00). In the additive EuroSCORE model, a predictive value of 0.76 for an ICU stay of >7 days, 0.72 for >5 days and 0.67 for >2 days was found. The logistic EuroSCORE model yielded an area under the ROC curve of 0.77, 0.75 and 0.68 for each ICU length of stay, respectively. Conclusions: In our patient database, prolonged length of stay in the ICU correlated positively with EuroSCORE. The logistic model was more discriminatory than the additive in tracing extended ICU stay. The overall predictive performance of EuroSCORE is acceptable and most likely based on the presence of variables that are risk factors for both mortality and extended ICU stay. Hence, EuroSCORE is a useful predicting tool and provides both surgeons and intensivists with a good estimate of patient risk in terms of ICU stay. abstract_id: PUBMED:25696483 The EuroSCORE as predictor for prolonged hospital and intensive care stay after cardiac surgery? Objective: Validation of the EuroSCORE as predictor for a prolonged hospital and intensive care stay after CABG vs. institution-specific scoring systems. Methods: For the evaluation of a prolonged hospital stay, 3359 patients were included in the analysis of EuroSCORE vs. the CORRAD morbidity score. For a prolonged intensive care stay, 1638 patients were included in the analysis of the EuroSCORE vs. the PICUS score. Results: There was no significant difference in hospital stay between the three different EuroSCORE risk groups. The difference in hospital stay between the high-risk and low-risk groups, identified by the CORRAD morbidity score, was significant (6.9 vs. 11.2 days). For a prolonged intensive care stay, the patients identified as high risk by the EuroSCORE and by the PICUS score also had a significantly longer intensive care stay; however, the discriminatory power was low. Conclusion: The EuroSCORE is not of value as a predictive system for a prolonged hospital stay. 
There is a relation between the high-risk patients identified by the EuroSCORE and a prolonged intensive care stay. abstract_id: PUBMED:29678435 Prediction of Patient Length of Stay on the Intensive Care Unit Following Cardiac Surgery: A Logistic Regression Analysis Based on the Cardiac Operative Mortality Risk Calculator, EuroSCORE. Objective: The aim of this study was to develop a statistical model based on patient parameters to predict the length of stay (LOS) in the intensive care unit (ICU) following cardiac surgery in a single center. Design: Data were collected from patients admitted to the ICU following cardiac surgery over a 10-year period (2006-2016). Both the additive and logistic EuroSCORE were calculated, and logistic regression analysis was carried out to formulate a model relating the predicted LOS to the EuroSCORE. This model was used to stratify patients into short stay (less than 48 hours) or long stay (more than 48 hours). Setting: ICU at Papworth Hospital, Cambridgeshire. Participants: A total of 18,377 consecutive patients who had been in ICU following cardiac surgery (coronary graft bypass surgery, valve surgery, or a combination of both). Interventions: This was an observational study. Measurements And Main Results: The authors have shown that both the additive and logistic EuroSCORE can be used to stratify cardiac surgical patients in various predicted LOS in ICU. Further adjustments can be made to increase the number of patients correctly identified as either short stay or long stay. Comparison of the model predictions to the data demonstrated a high overall accuracy of 79.77%, and receiver operating characteristic curve analysis showed the area under the curve to be 0.7296. Conclusion: This analysis of an extensive data set shows that patient LOS in ICU after cardiac surgery in a single center can be predicted accurately using the simple cardiac operative risk scoring tool EuroSCORE. Using such predictions has the potential to improve ICU resource management. abstract_id: PUBMED:15511424 EuroSCORE predicts intensive care unit stay and costs of open heart surgery. Background: This study aimed to determine whether the preoperative risk stratification model EuroSCORE predicts the different components of resource utilization in open heart surgery. Methods: Data for all adult patients undergoing heart surgery at the University Hospital of Lund, Sweden, between 1999 and 2002 were prospectively collected. Costs were calculated for the surgery and intensive care and ward stay for each patient (excluding transplant cases and patients who died intraoperatively). Regression analysis was applied to evaluate the correlation between EuroSCORE and costs. The predictive accuracy for prolonged postoperative intensive care unit (ICU) stay was assessed by the Hosmer-Lemeshow goodness-of-fit test. The discriminatory power was evaluated by calculating the areas under receiver operating characteristics curves. Results: The study included 3,404 patients. The mean cost for the surgery was 7,300 dollars, in the ICU 3,746 dollars, and in the ward 3,500 dollars. Total cost was significantly correlated with EuroSCORE, with a correlation coefficient of 0.47 (p < 0.0001); the correlation coefficient was 0.31 for the surgery cost, 0.46 for the ICU cost, and 0.11 for the ward cost. The Hosmer-Lemeshow p value for EuroSCORE prediction of more than 2 days' stay in the ICU was 0.40, indicating good accuracy. The area under the receiver operating characteristics curve was 0.78. 
The probability of an ICU stay exceeding 2 days was more than 50% at a EuroSCORE of 14 or more. Conclusions: In this single-institution study, the additive EuroSCORE algorithm could be used to predict ICU cost and also an ICU stay of more than 2 days after open heart surgery. abstract_id: PUBMED:21545065 Correlation between EuroSCORE and intensive care unit length of stay after coronary surgery. During the last several years many authors have found that the European System for Cardiac Operative Risk Evaluation is useful in the prediction of not only postoperative mortality but also of the length of stay in the intensive care unit, complication rate and overall treatment expenses. This study included 329 patients who had undergone isolated surgical myocardial revascularization at our Department during the period from January 1st to June 6th, 2008. For the operative risk evaluation, the additive European System for Cardiac Operative Risk Evaluation was used. In group I (low risk 0-2%) there were 144 patients (43.7%), whereas group II (medium risk 3-5%) and group III (high risk ≥6%) included 141 (42.8%) and 44 (13.4%) patients, respectively. The length of stay in the intensive care unit was 25.56, 32.43 and 49.59 hours for groups I, II and III, respectively. The difference in the mean length of stay in the intensive care unit between the groups was highly statistically significant (p < 0.001) with a positive correlation (R = 0.193; p < 0.001). There is a positive correlation in patients who had undergone surgical myocardial revascularization in terms of operative risk expressed by the additive European System for Cardiac Operative Risk Evaluation and length of stay in the intensive care unit, total intubation period and development of early postoperative complications. abstract_id: PUBMED:16551817 Determinants of morbidity and intensive care unit stay after coronary surgery. The study evaluated rates and determinants of hospital morbidity, serious morbid events, and prolonged intensive care unit stay associated with isolated coronary artery bypass. The medical records of 391 patients undergoing isolated coronary artery bypass at our center during 2003 were reviewed. The observed crude hospital mortality rate was 2.05%, similar to the EuroSCORE predicted mortality rate of 2.34%. Arrhythmia was the most frequent postoperative complication (17.6%). The serious hospital morbidity rate was 5.9%. The final logistic regression model of serious morbid events identified the following predictors: drug allergy, diabetes, and EuroSCORE. Prolonged intensive care unit stay (≥3 days) was observed in 9.5% of patients. Multivariable logistic regression analysis revealed age, preoperative rhythm disturbances, previous cardiac operation, and hypertension as independent predictors of prolonged intensive care unit stay. The rates of hospital mortality, morbidity, and prolonged intensive care unit stay were comparable to those of other major international cardiac surgery centers. These data can be used as a benchmark for further self- and peer-assessment quality improvement activities. abstract_id: PUBMED:22833511 Length of intensive care unit stay following cardiac surgery: is it impossible to find a universal prediction model? Objectives: Accurate models for prediction of a prolonged intensive care unit (ICU) stay following cardiac surgery may be developed using Cox proportional hazards regression. 
Our aims were to develop a preoperative and intraoperative model to predict the length of the ICU stay and to compare our models with published risk models, including the EuroSCORE II. Methods: Models were developed using data from all patients undergoing cardiac surgery at St. Olavs Hospital, Trondheim, Norway from 2000-2007 (n = 4994). Internal validation and calibration were performed by bootstrapping. Discrimination was assessed by areas under the receiver operating characteristics curves and calibration for the published logistic regression models with the Hosmer-Lemeshow test. Results: Despite a diverse risk profile, 93.7% of the patients had an ICU stay <2 days, in keeping with our fast-track regimen. Our models showed good calibration and excellent discrimination for prediction of a prolonged stay of more than 2, 5 or 7 days. Discrimination by the EuroSCORE II and other published models was good, but calibration was poor (Hosmer-Lemeshow test: P < 0.0001), probably due to the short ICU stays of almost all our patients. None of the models were useful for prediction of ICU stay in individual patients because most patients in all risk categories of all models had short ICU stays (75th percentiles: 1 day). Conclusions: A universal model for prediction of ICU stay may be difficult to develop, as the distribution of length of stay may depend on both medical factors and institutional policies governing ICU discharge. abstract_id: PUBMED:33689923 Prediction of Prolonged Intensive Care Unit Length of Stay Following Cardiac Surgery. Intensive care unit (ICU) costs comprise a significant proportion of the total inpatient charges for cardiac surgery. No reliable method for predicting intensive care unit length of stay following cardiac surgery exists, making appropriate staffing and resource allocation challenging. We sought to develop a predictive model to anticipate prolonged ICU length of stay (LOS). All patients undergoing coronary artery bypass grafting (CABG) and/or valve surgery with a Society of Thoracic Surgeons (STS) predicted risk score were evaluated from an institutional STS database. Models were developed using 2014-2017 data; validation used 2018-2019 data. Prolonged ICU LOS was defined as requiring ICU care for at least three days postoperatively. Predictive models were created using lasso regression and relative utility compared. A total of 3283 patients were included with 1669 (50.8%) undergoing isolated CABG. Overall, 32% of patients had prolonged ICU LOS. Patients with comorbid conditions including severe COPD (53% vs 29%, P < 0.001), recent pneumonia (46% vs 31%, P < 0.001), dialysis-dependent renal failure (57% vs 31%, P < 0.001) or reoperative status (41% vs 31%, P < 0.001) were more likely to experience prolonged ICU stays. A prediction model utilizing preoperative and intraoperative variables correctly predicted prolonged ICU stay 76% of the time. A preoperative variable-only model exhibited 74% prediction accuracy. Excellent prediction of prolonged ICU stay can be achieved using STS data. Moreover, there is limited loss of predictive ability when restricting models to preoperative variables. This novel model can be applied to aid patient counseling, resource allocation, and staff utilization. abstract_id: PUBMED:20679549 Prediction models for prolonged intensive care unit stay after cardiac surgery: systematic review and validation study. Background: Several models have been developed to predict prolonged stay in the intensive care unit (ICU) after cardiac surgery. 
However, no extensive quantitative validation of these models has yet been conducted. This study sought to identify and validate existing prediction models for prolonged ICU length of stay after cardiac surgery. Methods And Results: After a systematic review of the literature, the identified models were applied on a large registry database comprising 11 395 cardiac surgical interventions. The probabilities of prolonged ICU length of stay based on the models were compared with the actual outcome to assess the discrimination and calibration performance of the models. Literature review identified 20 models, of which 14 could be included. Of the 6 models for the general cardiac surgery population, the Parsonnet model showed the best discrimination (area under the receiver operating characteristic curve=0.75 [95% confidence interval, 0.73 to 0.76]), followed by the European system for cardiac operative risk evaluation (EuroSCORE) (0.71 [0.70 to 0.72]) and a model by Huijskes and colleagues (0.71 [0.70 to 0.73]). Most of the models showed good calibration. Conclusions: In this validation of prediction models for prolonged ICU length of stay, 2 widely implemented models (Parsonnet, EuroSCORE), although originally designed for prediction of mortality, were superior in identifying patients with prolonged ICU length of stay. abstract_id: PUBMED:32124735 Incidence and predictors of prolonged intensive care unit stay after coronary artery bypass in Iceland Introduction: To maximize the use of intensive care unit (ICU) resources, it is important to estimate the prevalence and risk factors for prolonged ICU stay after coronary artery bypass grafting (CABG) surgery. Material And Methods: This retrospective cohort study included all patients who underwent primary isolated CABG at Landspitali between 2001 and 2018. Patient information was collected from hospital charts and death registries. Patients who stayed in the ICU for the conventional one night postoperatively were compared with those who needed longer stays in the ICU. Survival rate was estimated with the Kaplan-Meier method. Predictors for prolonged ICU stay were calculated with logistic regression and the outcome used to create a calculator that estimates the probability of prolonged ICU stay. Results: Out of 2177 patients, 20% required prolonged ICU stay. Patients with prolonged stay were more frequently female (23% vs 16%, p=0.001), had a higher rate of cardiovascular risk factors and higher EuroSCORE II (4.7 vs. 1.9, p<0.001). They also had a higher rate of impaired renal function before surgery (14% vs. 4%, p<0.001) and emergent surgery (18% vs. 2%, p<0.001). Furthermore, these patients had higher rates of both short-term and long-term complications, and lower long-term survival (85% vs 68% five-year survival rate, p<0.0001). Independent risk factors for prolonged ICU stay were advanced age, female gender, EuroSCORE II, history of heart diseases, impaired renal function and emergent surgery. Conclusions: Every fifth patient had a prolonged ICU stay after CABG. Several risk factors predicted prolonged ICU stay after CABG, in particular patients' medical condition before surgery, EuroSCORE II and emergent surgery. A better understanding of the risk factors for prolonged ICU stay will hopefully aid in scheduling CABG surgeries at Landspitali. Answer: Yes, EuroSCORE has been found to be useful in predicting extended intensive care unit (ICU) stay after cardiac surgery. 
Several studies have demonstrated a positive correlation between EuroSCORE values and prolonged ICU stay. For instance, a study conducted at the Antwerp University Hospital in Belgium found that both the additive and logistic models of EuroSCORE correlated positively with prolonged ICU stay, with the logistic model being more discriminatory (PUBMED:19307134). Similarly, a study at Papworth Hospital in Cambridgeshire showed that EuroSCORE could accurately predict patient length of stay (LOS) in the ICU following cardiac surgery, with an overall accuracy of 79.77% (PUBMED:29678435). However, the EuroSCORE's predictive value for prolonged ICU stay is not without limitations. A study comparing EuroSCORE with institution-specific scoring systems found that while EuroSCORE could identify high-risk patients for prolonged ICU stay, its discriminatory power was low (PUBMED:25696483). Another study from the University Hospital of Lund, Sweden, indicated that EuroSCORE could predict ICU costs and an ICU stay of more than 2 days after open heart surgery (PUBMED:15511424). Additionally, a study including patients who underwent isolated surgical myocardial revascularization found a positive correlation between EuroSCORE and ICU length of stay, intubation period, and development of early postoperative complications (PUBMED:21545065). Despite these findings, some studies suggest that the predictive ability of EuroSCORE may vary depending on institutional policies and medical factors (PUBMED:22833511). Moreover, a systematic review and validation study found that while EuroSCORE was useful in identifying patients with prolonged ICU length of stay, it was not originally designed for this purpose and other models may also be effective (PUBMED:20679549). In summary, EuroSCORE is a useful tool for predicting extended ICU stay after cardiac surgery, but its effectiveness can be influenced by various factors, and it may not be the only or the best tool for all settings.
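Several of the abstracts above (PUBMED:19307134, PUBMED:15511424, PUBMED:29678435) evaluate EuroSCORE against prolonged ICU stay by treating each score value as a candidate cut-off and summarizing discrimination with the area under the ROC curve. The short sketch below illustrates that procedure; the patient arrays, the use of scikit-learn, and the Youden-index cut-off selection are illustrative assumptions, not data or methods taken from the studies.

```python
# Illustrative sketch only: ROC analysis of EuroSCORE as a predictor of prolonged ICU stay.
# The arrays below are made-up placeholder values, not data from the cited studies.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

euroscore = np.array([2, 5, 7, 11, 3, 9, 14, 6, 4, 12])   # additive EuroSCORE per patient (hypothetical)
icu_days = np.array([1, 2, 3, 9, 1, 6, 15, 2, 1, 8])      # observed ICU length of stay in days (hypothetical)

for threshold_days in (2, 5, 7):                           # the "prolonged" definitions used in PUBMED:19307134
    prolonged = (icu_days > threshold_days).astype(int)    # binary outcome: stay longer than the threshold
    auc = roc_auc_score(prolonged, euroscore)              # discriminatory power of the score
    fpr, tpr, cutoffs = roc_curve(prolonged, euroscore)    # every EuroSCORE value acts as a cut-off point
    best_cutoff = cutoffs[np.argmax(tpr - fpr)]            # Youden index: best balance of sensitivity and specificity
    print(f">{threshold_days} days: AUC = {auc:.2f}, best EuroSCORE cut-off ≈ {best_cutoff}")
```

The point of the sketch is only to make the cut-off-based ROC evaluation concrete, not to reproduce the figures reported in the studies.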
Instruction: Anterior instrumentation for thoracolumbar adolescent idiopathic scoliosis: do structural interbody grafts preserve sagittal alignment better than morselized rib autografts? Abstracts: abstract_id: PUBMED:16985462 Anterior instrumentation for thoracolumbar adolescent idiopathic scoliosis: do structural interbody grafts preserve sagittal alignment better than morselized rib autografts? Study Design: This is a retrospective, sequential cohort study of 34 patients treated by anterior instrumented fusion with single solid rod, single screw constructs with at least 2-year follow-up. Sixteen of the patients received structural grafts as interbody spacers in disc levels below T12, while the other 18 patients received only morselized rib autograft. Objective: To determine if structural interbody grafts preserve sagittal alignment better than morselized rib autograft. Summary Of Background Data: Some studies have shown that structural grafts are more effective in preserving sagittal alignment, while others have found them to be no more effective than morselized rib graft. Methods: Anterior-posterior radiographs were measured for primary, secondary, and fractional Cobb curves, and C7-sacrum plumb lines. Lateral radiographs were measured for: T5-HIV (highest instrumented vertebrae), instrumented levels, LIV (lowest instrumented vertebrae)-S1, T12-LIV, and T12-S1 angles, C7-sacrum plumb lines, and LID-A (lowest instrumented disc-angle). Results: The increase in kyphosis from preoperative to follow-up radiographs of the angle between T12-LIV was significantly more for the patients with morselized rib graft compared with those with structural grafts, 9 degrees and 1 degree, respectively (P < 0.05). Conclusions: The structural grafts placed in disc spaces below T12 were able to maintain sagittal alignment over this region, while the spines that received only morselized rib graft collapsed into kyphosis. abstract_id: PUBMED:14520036 Anterior single rod instrumentation for thoracolumbar adolescent idiopathic scoliosis with and without the use of structural interbody support. Study Design: A radiographic and clinical outcomes analysis of 41 patients treated for thoracolumbar adolescent idiopathic scoliosis utilizing a single anterior rigid rod construct. Objectives: To evaluate the necessity of structural interbody support to improve primary curve correction and preserve or augment lordosis when used in conjunction with a single anterior rigid rod construct, to identify parameters that predict horizontalization of the lowest instrumented vertebra, adjacent disc angulation, and distal uninstrumented vertebrae, and to assess patient satisfaction following surgery. Background Data: Instrumentation-induced kyphosis has been a concern with nonrigid anterior systems used in the past for the treatment of scoliosis. Interbody structural support has been recommended to maintain appropriate sagittal profile when anterior systems are utilized. It has also been suggested that the use of structural interbody support creates a fulcrum to increase curve correction when compression is applied to the convexity of the deformity. However, the necessity of interbody structural support when used in conjunction with a rigid anterior system has not been previously evaluated in patients with adolescent idiopathic scoliosis. 
Materials And Methods: Forty-one patients, mean age 15.9 years (range, 12.1-18.6 years), with thoracolumbar adolescent idiopathic scoliosis underwent anterior spinal fusion using a single 6.0 to 6.5 mm solid rod construct between June 1995 and August 1999 performed by the senior author (T.G.L.). Four additional patients with thoracolumbar curves with similar anterior instrumentation over the same time period were lost to follow-up or had incomplete records and were not included in the study. Structural interbody support was used in 21 patients and packed morselized autograft alone was used in 20 patients. The patients in the group with packed morselized bone alone generally underwent surgery earlier in the series before the author began using structural interbody support on a regular basis. Each patient had a minimum follow-up of 3 years. Preoperative, initial, and most recent (>3 years) follow-up radiographs were reviewed to determine in each group Cobb angle measurements, flexibility of primary, secondary, and fractional curves, apical and end vertebral translation, lowest instrumented vertebral and caudal disc angulation, global coronal and sagittal balance, and sagittal Cobb measurements in both instrumented levels as well as lumbar lordosis (T12-S1). In addition, the SRS outcomes instrument was completed by 38 of 41 patients. Results: The mean preoperative primary curve in patients with structural support was 47 degrees (Group II) and 45 degrees in patients without structural support (Group I). Mean curve correction was to 13 degrees in Groups I and II. One patient in Group II became slightly more unbalanced at final follow-up; otherwise all were improved after surgery. Sagittal measurements over instrumented segments as well as total lumbar lordosis (T12-S1) were maintained between preoperative and final postoperative values in both groups. Similarly, in both groups, when horizontalization of the distal end instrumented vertebra was achieved on the preoperative reverse side-bending radiograph, more normal relationships were achieved between instrumented and distal noninstrumented segments (adjacent disc angulation and fractional lumbar curve) at final follow-up (P ≤ 0.01). Patients in both groups were equally pleased with their clinical outcomes based on the SRS outcomes instrument. Conclusions: The use of interbody structural support does not appear to be necessary to maintain an appropriate sagittal profile or to maximize coronal curve correction when a rigid rod construct with packed morselized bone is used for the treatment of thoracolumbar adolescent idiopathic scoliosis. Parameters predicting horizontalization of the lower instrumented vertebra and uninstrumented segments below the construct were identified, which, if achieved, should predict an optimal long-term outcome. Clinical outcomes were very good in both groups. abstract_id: PUBMED:20084031 Can a bone marrow-based graft replacement result in similar fusion rates as rib autograft in anterior interbody fusion procedures for adolescent thoracolumbar scoliosis? Study Design: Nonrandomized consecutive case series comparing interbody spine fusion with autograft versus bone marrow-based graft replacement (BGR). Objectives: Effectiveness of bone marrow-based graft versus rib autograft in achieving anterior interbody fusion of the thoracolumbar/lumbar spine. Summary Of Background Data: The use of bone marrow (BM) with graft materials was shown in a prior study to aid with bone regeneration. 
Limited clinical data are currently available to demonstrate the effectiveness of BM for spinal applications. Engineered matrices of collagen type I coated with hydroxyapatite and combined with BM have been safely used in both spinal and long bone applications. Methods: Nineteen consecutive patients from 2003 to 2006 underwent anterior interbody fusion through an anterior approach with dual-rod instrumentation and structural interbody support for thoracolumbar scoliosis. Among the 19 patients, there were 42 disc levels treated with graft replacement material combined with BM (BGR+BM) and 25 disc levels with rib autograft. The mean follow-up time was 17 months with a minimum of 6 months. Clinical and radiographic data included Scoliosis Research Society (SRS)-22 questionnaires and pain and fusion assessments of posterior-anterior and lateral radiographs, collected preoperatively and at 6, 12, and 24 months postoperatively. Results: At 6 months, 72% of BGR+BM segments versus 44% of autograft segments were defined as fused. All BGR+BM segments were fused by 12 months, and all autograft segments were fused by 24 months. There was no pseudoarthrosis or instrumentation failure, and the interbody fusion rate was 100%. The average correction was 73.5 ± 13.5%. The overall loss of correction from the immediate alignment to postoperative follow-up was less than 4%. There was no loss of sagittal plane alignment or measured kyphosis. No morbidity was observed at the BM aspiration site. Conclusions: Anterior spinal fusion using bone marrow-based graft substitutes for thoracolumbar adolescent idiopathic scoliosis demonstrated equivalent results to rib autograft when used with dual-rod instrumentation and structural support. In this patient series, the rate of fusion was faster in the bone marrow-treated segments. These results suggest that for patients as described in this cohort, bone marrow-based graft replacements can thus be used as an alternative, or adjunct, to autograft to achieve interbody fusion in scoliosis surgery. abstract_id: PUBMED:29368138 Sagittal balance and idiopathic scoliosis: does final sagittal alignment influence outcomes, degeneration rate or failure rate? Introduction: In the last decade, spine surgeons have been impacted by the "sagittal plane analysis revolution". Significant correlations have been found in adult spinal deformity (ASD) between sagittal lumbo-pelvic parameters and functional outcomes, but most of them do not apply in adolescent idiopathic scoliosis (AIS). Meanwhile, instrumentation and reduction strategies have considerably evolved. This paper aims to describe the preoperative sagittal alignment in AIS, and to report literature evidence regarding the influence of postoperative sagittal balance on complication rates, low back pain incidence and disc degeneration. Methods: A bibliographic search in Medline and Google database from 1984 to May 2017 was performed. The keywords included 'adolescent idiopathic scoliosis', 'adult scoliosis', 'sagittal alignment', 'proximal junctional kyphosis', 'distal junctional kyphosis', 'outcomes', 'low back pain' and 'complication', used individually or in combination. Results: Algorithms of sagittal balance analysis and treatment decision have been reported in ASD, but the clinical situation is very different in children. Sagittal alignment greatly varies in AIS among the various Lenke types. 
Most patients are clinically balanced before surgery, but the spinal harmony is altered, with overgrowth of the anterior column and global sagittal flattening (underestimated in 2D). The exact role of pelvic incidence and whether or not patients also use pelvic compensation to maintain balance still require further clarification. The incidence of radiological junctional failures remains highly variable, depending on definitions, cohort size and follow-up. Preoperative hyperkyphosis seems to be a consistent and relevant risk factor. Current literature does not support the recent trend to save motion segments (selective fusion), and no significant association was found between the distal level of fusion and the incidence of low back pain. Postoperative sagittal alignment seems to be more important than LIV selection to avoid disc degeneration at mid-term follow-up. Conclusion: It is clear now that sagittal alignment plays a major role in clinical outcomes and should not be neglected in AIS. Seven key guidelines that should be considered for each patient before surgery are reported (Table 2). Personalized planning using 3D technology is gaining popularity and might help reduce complications in the future. abstract_id: PUBMED:21666508 A comparison of anterior and posterior instrumentation for restoring and retaining sagittal balance in patients with idiopathic adolescent scoliosis. Study Design: Retrospective, comparative study. Objective: To compare the effects of anterior rod-screw instrumentation and posterior pedicle screw instrumentation on sagittal balance in patients with Lenke type 5 adolescent idiopathic scoliosis (AIS). Summary Of Background Data: Lenke type 5 AIS is treated by anterior or posterior spinal fusion surgery. Most studies comparing anterior and posterior fusion surgery have focused on assessing improvement in coronal balance. Studies comparing the effects of anterior and posterior surgery on sagittal balance are lacking. Methods: The records of 49 patients diagnosed with Lenke type 5 AIS were examined. A total of 21 patients underwent anterior surgery between 2000 and 2003, while 26 underwent posterior surgery between 2004 and 2006. Preoperative, postoperative, and follow-up thoracic kyphosis (T5-T12 and T2-T12), lumbar lordosis, thoracolumbar junction kyphosis, and spinal vertical axis measurements were made by examining radiographs. Quality of life was assessed using the Scoliosis Research Society-22 questionnaire. All patients were followed up for at least 2 years. Results: There were no significant between-group differences in coronal alignment, thoracic kyphosis, or T11-L2 alignment after surgery. Sagittal alignment improvement was significantly more pronounced in the anterior surgery group compared with the posterior surgery group. The fusion segment was also significantly shorter in the anterior surgery group compared with the posterior surgery group. Quality of life scores were significantly higher in the anterior surgery group compared with the posterior surgery group. Conclusion: Anterior solid rod-screw instrumentation results in shorter fusion segments, and better sagittal alignment and quality of life than posterior pedicle screw instrumentation in patients with Lenke type 5 AIS. abstract_id: PUBMED:29093788 Normal Age-Adjusted Sagittal Spinal Alignment Is Achieved with Surgical Correction in Adolescent Idiopathic Scoliosis. Study Design: Retrospective analysis. 
Purpose: Our hypothesis is that the surgical correction of adolescent idiopathic scoliosis (AIS) maintains normal sagittal alignment as compared to an age-matched normative adolescent population. Overview Of Literature: Sagittal spino-pelvic alignment in AIS has been reported; however, whether corrective spinal fusion surgery re-establishes normal alignment remains unverified. Methods: Sagittal profiles and spino-pelvic parameters of thirty-eight postsurgical correction AIS patients ≤21 years old without prior fusion from a single institution database were compared to previously published normative age-matched data. Coronal and sagittal measurements including structural coronal Cobb angle, pelvic incidence, pelvic tilt, thoracic kyphosis, lumbar lordosis, sagittal vertical axis, C2-C7 cervical lordosis, C2-C7 sagittal vertical axis, and T1 pelvic angles were measured on standing full-body stereoradiographs using validated software to compare preoperative and 6 months postoperative changes with previously published adolescent norms. A sub-group analysis of patients with type 1 Lenke curves was performed comparing preoperative to postoperative alignment and also comparing this with previously published normative values. Results: The mean coronal curve of the 38 AIS patients (mean age, 16±2.2 years; 76.3% female) was corrected from 53.6° to 9.6° (80.9%, p<0.01). None of the thoracic and spino-pelvic sagittal parameters changed significantly after surgery in previously hypo- and normo-kyphotic patients. In hyper-kyphotic patients, thoracic kyphosis decreased (p=0.003) with a reciprocal decrease in lumbar lordosis (p=0.01), thus lowering pelvic incidence-lumbar lordosis mismatch (p=0.009). Structural thoracic scoliosis patients had slightly more thoracic kyphosis than age-matched patients at baseline and surgical correction of the coronal plane of their scoliosis preserved normal sagittal alignment postoperatively. A sub-analysis of Lenke curve type 1 patients (n=24) demonstrated no statistically significant changes in the sagittal alignment postoperatively despite adequate coronal correction. Conclusions: Surgical correction of the coronal plane in AIS patients preserves sagittal and spino-pelvic alignment as compared to age-matched asymptomatic adolescents. abstract_id: PUBMED:27927337 Progressive Changes in Sagittal Contour After Anterior Spinal Fusion With Instrumentation of Different Sizes for Thoracic Adolescent Idiopathic Scoliosis: Is Continued Posterior Spinal Growth an Issue in Skeletally Immature Children? Study Design: Retrospective analysis of radiographs for a prospective group of 196 adolescent patients with thoracic idiopathic scoliosis after anterior spinal fusion with instrumentation. Objectives: To analyze progressive changes in the sagittal profile of immature and mature patients during the first 2 postoperative years. Summary Of Background Data: In a previous study of similar patients, a flexible 3.2-mm rod construct was used. An additional 15° (average) of kyphosis was seen in 60% of Risser 0 patients. The current patient group had fusion with solid rod (>4.0-mm) instrumentation. Methods: All included patients had single anterior rod instrumentation, clinical and radiographic evidence of solid fusion, a minimum follow-up of 2 years, and a coronal progression of ≤5° including adequate biplanar standard radiographs at preoperative, immediate postoperative, and 2-year follow-up visits. 
Patients were stratified by skeletal maturity and preoperative thoracic kyphosis. Significant sagittal progression was defined as >10°. Results: Significant sagittal progression that caused the patient to be hyperkyphotic (T5-T12 > 40°) occurred in 18.37% of the 196 study patients. A total of 55 were group I (Risser 0) at the time of surgery and 141 were group II (Risser 1-5). Progression occurred much more frequently in Risser 0 patients who had a preoperative T5-T12 of ≥30° (67.67%) versus Risser 1-5 patients (25.00%). Conclusions: Compared with the authors' previous work, solid rod instrumentation (>4.0 mm) for anterior spinal fusion for thoracic scoliosis is better at preventing progressive thoracic kyphosis than the flexible rod (3.2 mm). However, when performing a thoracic anterior spinal instrumented fusion in skeletally immature patients whose preoperative T5-T12 sagittal curve is >30°, it is recommended to leave a low normal kyphosis (20°) in the instrumented region of T5-T12. abstract_id: PUBMED:29803090 The change of cervical sagittal alignment after surgery for adolescent idiopathic scoliosis. Objective: The postoperative change in cervical sagittal alignment has an impact on health-related quality of life in adolescent idiopathic scoliosis (AIS) patients who have undergone deformity correction. However, the effect of deformity correction on the sagittal cervical profile is still controversial in the literature. The objective of this study was to investigate the postoperative change in the cervical sagittal alignment of patients with AIS. Patients And Methods: A total of 46 AIS patients treated by posterior instrumentation and fusion with pedicle screw constructs were included in the study. Radiographs were collected preoperatively, immediate postoperatively and at the final follow-up. The C2-C7 Cobb angle and C2-C7 sagittal vertical axis (cSVA) were used to assess the cervical sagittal alignment. Spinopelvic alignment parameters, such as thoracic kyphosis (TK), lumbar lordosis (LL), pelvic incidence (PI), sacral slope (SS), pelvic tilt (PT), and sagittal vertical axis (SVA), were also measured. The correlations between the cervical sagittal parameters and spinopelvic parameters were analyzed. Results: The incidence of cervical kyphosis was 67.4% preoperatively but increased to 87% postoperatively and 69.5% at the final follow-up. The C2-C7 Cobb angle significantly increased from pre-operation (-1.5° ± 15°) to post-operation (-5.4° ± 7.3°; P < 0.05) and spontaneously decreased to -2.9° ± 10.5° at the final follow up. The cSVA was 18.1 ± 13 mm preoperatively, 17 ± 12.3 mm after surgery and 18.5 ± 9.5 mm at the last follow-up, but the change was not statistically significant (P > 0.05). TK decreased significantly from pre-operation (17.7° ± 14.4°) to post-operation (14.2° ± 7.6°) and spontaneously improved to 16.9° ± 8.2° at the final follow-up. TK showed a significant correlation with the C2-C7 Cobb angle, but not with cSVA, in the preoperative (r = 0.709, P < 0.01), postoperative (r = 0.472, P < 0.01), and last follow-up measurements (r = 0.505, P < 0.01). Compared with patients with preoperative thoracic hypokyphosis or hyperkyphosis, patients with a normal thoracic spine had more significant postoperative changes in the C2-C7 Cobb angle and TK. Conclusions: Cervical sagittal alignment after deformity correction is altered in AIS patients. An increase in cervical kyphosis after surgery is correlated with a loss of thoracic kyphosis. 
The change in the cervical sagittal profile may be a compensatory mechanism in response to an abnormal thoracic sagittal profile. abstract_id: PUBMED:30182063 A retrospective analysis of health-related quality of life in adolescent idiopathic scoliosis children treated by anterior instrumentation and fusion. Background: Idiopathic scoliosis is the most common type of spinal deformity. Scoliosis is defined as a lateral curvature of the spine greater than 10° accompanied by rotation of the vertebrae. The treatment available for adolescent idiopathic scoliosis is observation, orthosis, and surgery. The surgical options include open anterior release and instrumentation, posterior instrumentation, and thoracoscopic approaches. The Scoliosis Research Society Questionnaire (SRS-30) is a specific instrument to measure health-related quality of life in patients with scoliosis, who had or had not undergone surgery. The purpose was to assess the post-operative functional outcome using SRS-30 in children who underwent anterior release, instrumentation, and fusion using autogenous rib graft for adolescent idiopathic scoliosis (AIS). Methods: In a retrospective cohort study, 25 patients between the ages of 11 and 17 years, who underwent anterior release, instrumentation, and fusion using autogenous rib graft for adolescent idiopathic scoliosis (AIS) between 2008 and 2014, were included in the study. Results: The total average score was 4.26 with an SD of 0.014, with a maximum domain average of 4.5 (for pain) and a minimum of 3.8 (for self-image). Conclusion: Anterior release, instrumentation, and fusion using autogenous rib graft yields a good functional outcome in all domains. abstract_id: PUBMED:33023844 Alterations of sagittal alignment and thoracic cage parameters after long-term bracing in adolescents with idiopathic scoliosis. Background: Sagittal alignment and thoracic cage parameters are correlated with the surgery success rate and life quality of adolescents with idiopathic scoliosis (AIS). However, the effects of long-term bracing on sagittal and thoracic cage parameters have not been clearly recognized. Hypothesis: Long-term brace treatment could compromise sagittal balance and thoracic development in patients with AIS. Patients And Methods: Two hundred and seventy-five patients with AIS were included in this study. The radiographs when AIS was diagnosed and 2 years after Chêneau bracing treatment were collected. Sagittal, cervical, pelvic, and thoracic cage parameters were evaluated. In addition, 32 patients finishing brace treatment with complete radiograph data were selected from the 275 included patients, and data on CL, TK and LL at five different time points were collected. Results: CL (average: from 14.13° to 8.94°, p=0.012), TK (average: from 24.35° to 19.02°, p=0.001) and LL (average: from 38.44° to 32.13°, p=0.004) declined markedly after two-year brace treatment. No statistically significant alteration of pelvic parameters was shown. The vertical parameters of the thoracic cage including T1-12 height, left and right thorax height and thoracic transverse diameter increased significantly. Thoracic anteroposterior diameter at the T7 vertebral level (average: from 11.49 to 10.57cm, p=0.001) and diaphragm level (average: from 11.89 to 10.74cm, p=0.001) decreased significantly after bracing. Discussion: CL, TK and LL decreased after long-term bracing treatment, which leads to the aggravation of "flat back" in AIS patients. 
In addition, the thoracic anteroposterior diameters declined after two-year bracing, which may result from reduced TK and contribute to further pulmonary function impairment. Level Of Evidence: IV. Answer: Yes, structural interbody grafts preserve sagittal alignment better than morselized rib autografts in thoracolumbar adolescent idiopathic scoliosis. A study comparing the two methods found that the increase in kyphosis from preoperative to follow-up radiographs of the angle between T12-LIV was significantly more for the patients with morselized rib graft compared with those with structural grafts, 9 degrees and 1 degree, respectively (P < 0.05). The conclusion was that structural grafts placed in disc spaces below T12 were able to maintain sagittal alignment over this region, while the spines that received only morselized rib graft collapsed into kyphosis (PUBMED:16985462).
Instruction: Are special read alignment strategies necessary and cost-effective when handling sequencing reads from patient-derived tumor xenografts? Abstracts: abstract_id: PUBMED:25539684 Are special read alignment strategies necessary and cost-effective when handling sequencing reads from patient-derived tumor xenografts? Background: Patient-derived tumor xenografts in mice are widely used in cancer research and have become important in developing personalized therapies. When these xenografts are subject to DNA sequencing, the samples could contain various amounts of mouse DNA. It has been unclear how the mouse reads would affect data analyses. We conducted comprehensive simulations to compare three alignment strategies at different mutation rates, read lengths, sequencing error rates, human-mouse mixing ratios and sequenced regions. We also sequenced a nasopharyngeal carcinoma xenograft and a cell line to test how the strategies work on real data. Results: We found the "filtering" and "combined reference" strategies performed better than aligning reads directly to the human reference in terms of alignment and variant calling accuracies. The combined reference strategy was particularly good at reducing false negative variant calls without significantly increasing the false positive rate. In some scenarios the performance gain of these two special handling strategies was too small for special handling to be cost-effective, but it was found to be crucial when false non-synonymous SNVs must be minimized, especially in exome sequencing. Conclusions: Our study systematically analyzes the effects of mouse contamination in the sequencing data of human-in-mouse xenografts. Our findings provide information for designing data analysis pipelines for these data. abstract_id: PUBMED:29304755 Computational approach to discriminate human and mouse sequences in patient-derived tumour xenografts. Background: Patient-Derived Tumour Xenografts (PDTXs) have emerged as the pre-clinical models that best represent clinical tumour diversity and intra-tumour heterogeneity. The molecular characterization of PDTXs using High-Throughput Sequencing (HTS) is essential; however, the presence of mouse stroma is challenging for HTS data analysis. Indeed, the high homology between the two genomes results in a proportion of mouse reads being mapped as human. Results: In this study we generated Whole Exome Sequencing (WES), Reduced Representation Bisulfite Sequencing (RRBS) and RNA sequencing (RNA-seq) data from samples with known mixtures of mouse and human DNA or RNA and from a cohort of human breast cancers and their derived PDTXs. We show that using an In silico Combined human-mouse Reference Genome (ICRG) for alignment discriminates between human and mouse reads with up to 99.9% accuracy and decreases the number of false positive somatic mutations caused by misalignment by >99.9%. We also derived a model to estimate the human DNA content in independent PDTX samples. For RNA-seq and RRBS data analysis, the use of the ICRG allows dissecting computationally the transcriptome and methylome of human tumour cells and mouse stroma. In a direct comparison with previously reported approaches, our method showed similar or higher accuracy while requiring significantly less computing time. Conclusions: The computational pipeline we describe here is a valuable tool for the molecular analysis of PDTXs as well as any other mixture of DNA or RNA species. 
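As a concrete illustration of the "combined reference" strategy discussed in PUBMED:25539684 and PUBMED:29304755, the sketch below splits a BAM file produced by aligning PDX reads against a concatenated human-plus-mouse reference, assigning each read to the species of the contig it maps to best. It is a minimal, hypothetical example: the file names, the convention of prefixing mouse contigs with "mm10_" when building the combined FASTA, and the simple read-level bookkeeping are assumptions; the published ICRG workflow and tools such as Xenome handle paired-end reads, multi-mappers and ambiguous cases more carefully.

```python
# Minimal sketch (not the published ICRG pipeline): after aligning PDX reads to a
# concatenated human+mouse reference, keep reads whose best alignment is on a human contig.
# Assumes mouse contigs were prefixed with "mm10_" when the combined FASTA was built.
import pysam

IN_BAM = "pdx_combined_ref.bam"   # hypothetical alignment against the combined reference

counts = {"human": 0, "mouse": 0}
with pysam.AlignmentFile(IN_BAM, "rb") as bam, \
     pysam.AlignmentFile("human_only.bam", "wb", template=bam) as human_out:
    for read in bam:
        if read.is_unmapped or read.is_secondary or read.is_supplementary:
            continue                                    # primary mapped reads only, to keep bookkeeping simple
        if read.reference_name.startswith("mm10_"):     # best hit is a mouse contig: treat as stromal contamination
            counts["mouse"] += 1
        else:
            counts["human"] += 1
            human_out.write(read)                       # best hit is human: retain for variant calling

total = counts["human"] + counts["mouse"]
if total:
    print(f"Estimated mouse read fraction: {counts['mouse'] / total:.1%}")
```

Downstream variant calling would then run on the retained human reads only; whether this extra step is worth the effort depends on the application, which is exactly the cost-effectiveness question the simulations in PUBMED:25539684 address.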
abstract_id: PUBMED:33880552 Chromatin conformation capture (Hi-C) sequencing of patient-derived xenografts: analysis guidelines. Background: Sequencing of patient-derived xenograft (PDX) mouse models allows investigation of the molecular mechanisms of human tumor samples engrafted in a mouse host. Thus, both human and mouse genetic material is sequenced. Several methods have been developed to remove mouse sequencing reads from RNA-seq or exome sequencing PDX data and improve the downstream signal. However, for more recent chromatin conformation capture technologies (Hi-C), the effect of mouse reads remains undefined. Results: We evaluated the effect of mouse read removal on the quality of Hi-C data using in silico created PDX Hi-C data with 10% and 30% mouse reads. Additionally, we generated 2 experimental PDX Hi-C datasets using different library preparation strategies. We evaluated 3 alignment strategies (Direct, Xenome, Combined) and 3 pipelines (Juicer, HiC-Pro, HiCExplorer) on Hi-C data quality. Conclusions: Removal of mouse reads had little-to-no effect on data quality as compared with the results obtained with the Direct alignment strategy. Juicer extracted more valid chromatin interactions for Hi-C matrices, regardless of the mouse read removal strategy. However, the pipeline effect was minimal, while the library preparation strategy had the largest effect on all quality metrics. Together, our study presents comprehensive guidelines on PDX Hi-C data processing. abstract_id: PUBMED:30286710 XenofilteR: computational deconvolution of mouse and human reads in tumor xenograft sequence data. Background: Mouse xenografts from (patient-derived) tumors (PDX) or tumor cell lines are widely used as models to study various biological and preclinical aspects of cancer. However, analyses of their RNA and DNA profiles are challenging, because they comprise reads not only from the grafted human cancer but also from the murine host. The reads of murine origin result in false positives in mutation analysis of DNA samples and obscure gene expression levels when sequencing RNA. However, currently available algorithms are limited and improvements in accuracy and ease of use are necessary. Results: We developed the R-package XenofilteR, which separates mouse from human sequence reads based on the edit-distance between a sequence read and the reference genome. To assess the accuracy of XenofilteR, we generated sequence data by in silico mixing of mouse and human DNA sequence data. These analyses revealed that XenofilteR removes >99.9% of sequence reads of mouse origin while retaining human sequences. This allowed for mutation analysis of xenograft samples with accurate variant allele frequencies, and retrieved all non-synonymous somatic tumor mutations. Conclusions: XenofilteR accurately dissects RNA and DNA sequences from mouse and human origin, thereby outperforming currently available tools. XenofilteR is open source and available at https://github.com/PeeperLab/XenofilteR . abstract_id: PUBMED:30649719 Bioinformatics Basics for High-Throughput Hybridization-Based Targeted DNA Sequencing from FFPE-Derived Tumor Specimens: From Reads to Variants. The use of next-generation sequencing and hybridization-based capture for target enrichment has enabled the interrogation of coding regions of several clinically significant cancer genes in tumor specimens using targeted panels of a few to hundreds of genes as well as whole-exome panels encompassing coding regions of all genes in the genome. 
Next-generation sequencing (NGS) technologies produce millions of relatively short segments of sequences or reads that require bioinformatics tools to map reads back to a reference genome using various read alignment tools, as well as to determine differences between single bases (single nucleotide variants or SNVs) or multiple bases (insertions and deletions or indels) between the aligned reads and the reference genome to call variants. In addition to single nucleotide changes or small insertions and deletions, high copy gains and losses can also be gleaned from NGS data to call gene amplifications and deletions. Throughout these processes, numerous quality control metrics can be assessed at each step to ensure that the resulting called variants are of high quality and are accurate. In this chapter we review common tools used to generate reads from Illumina-derived sequence data, align reads, and call variants from hybridization-based targeted NGS panel data generated from tumor FFPE-derived DNA specimens as well as basic quality metrics to assess for each assayed specimen. abstract_id: PUBMED:31908598 Patient-derived xenografts of different grade gliomas retain the heterogeneous histological and genetic features of human gliomas. Background: Gliomas account for the major part of primary brain tumors. Based on their histology and molecular alterations, adult gliomas have been classified into four grades, each with distinct biology and outcome. Previous studies have focused on cell-line-based models and patient-derived xenografts (PDXs) from patient-derived glioma cultures for grade IV glioblastoma. However, the PDX of lower grade diffuse gliomas, particularly those harboring the endogenous IDH mutation, are scarce due to the difficulty growing glioma cells in vitro and in vivo. The purpose of this study was to develop a panel of patient-derived subcutaneous xenografts of different grade gliomas that represented the heterogeneous histopathologic and genetic features of human gliomas. Methods: Tumor pieces from surgical specimens were subcutaneously implanted into flanks of NOD-Prkdcscid ll2rgnull mice. Then, we analyzed the association between the success rate of implantation and clinical parameters using the Chi square test and resemblance to the patient's original tumor using immunohistochemistry, immunofluorescence, short tandem repeat analysis, quantitative real-time polymerase chain reaction, and whole-exome sequencing. Results: A total of 11 subcutaneous xenografts were successfully established from 16 surgical specimens. An increased success rate of implantation in gliomas with wild type isocitrate dehydrogenase (IDH) and high Ki67 expression was observed compared to gliomas with mutant IDH and low Ki67 expression. Recurrent and distant aggressive xenografts were present near the primary implanted tumor fragments from WHO grades II to IV. The xenografts histologically represented the corresponding patient tumor and reconstituted the heterogeneity of different grade gliomas. However, increased Ki67 expression was found in propagated xenografts. Endothelial cells from mice in patient-derived xenografts over several generations replaced the corresponding human tumor blood vessels. Short tandem repeat and whole-exome sequencing analyses indicated that the glioma PDX tumors maintained their genomic features during engraftments over several generations. 
Conclusions: The panel of patient-derived glioma xenografts in this study reproduced the diverse heterogeneity of different grade gliomas, thereby allowing the study of the growth characteristics of various glioma types and the identification of tumor-specific molecular markers, which has applications in drug discovery and patient-tailored therapy. abstract_id: PUBMED:35444950 Genomic and Molecular Signatures of Successful Patient-Derived Xenografts for Oral Cavity Squamous Cell Carcinoma. Background: Oral cavity squamous cell carcinoma (OSCC) is an aggressive malignant tumor with high recurrence and poor prognosis in the advanced stage. Patient-derived xenografts (PDXs) serve as powerful preclinical platforms for drug testing and precision medicine for cancer therapy. We assess which molecular signatures affect tumor engraftment ability and tumor growth rate in OSCC PDXs. Methods: Treatment-naïve OSCC primary tumors were collected for PDX model establishment. Comprehensive genomic analysis, including whole-exome sequencing and RNA-seq, was performed on case-matched tumors and PDXs. Regulatory genes/pathways were analyzed to clarify which molecular signatures affect tumor engraftment ability and the tumor growth rate in OSCC PDXs. Results: Perineural invasion was identified as an important pathological feature related to engraftment ability. A tumor microenvironment with enriched hypoxia, PI3K-Akt, and epithelial-mesenchymal transition pathways and decreased inflammatory responses was associated with high engraftment ability and tumor growth rates in OSCC PDXs. High matrix metalloproteinase-1 (MMP1) expression was found to confer a marked engraftment advantage in xenografts and is associated with pooled disease-free survival in cancer patients. Conclusion: This study provides a panel with detailed genomic characteristics of OSCC PDXs, enabling preclinical studies on personalized therapy options for oral cancer. MMP1 could serve as a biomarker for predicting successful xenografts in OSCC patients. abstract_id: PUBMED:36612135 Role of Patient-Derived Models of Cancer in Translational Oncology. Cancer is a heterogeneous disease. Each individual tumor is unique and characterized by structural, cellular, genetic and molecular features. Therefore, patient-derived cancer models are indispensable tools in cancer research and have been actively introduced into the healthcare system. For instance, patient-derived models provide good reproducibility of the susceptibility and resistance of cancer cells to drugs, allowing personalized therapy for patients. In this article, we review the advantages and disadvantages of the following patient-derived models of cancer: (1) PDC-patient-derived cell culture, (2) PDS-patient-derived spheroids and PDO-patient-derived organoids, (3) PDTSC-patient-derived tissue slice cultures, (4) PDX-patient-derived xenografts, humanized PDX, as well as PDXC-PDX-derived cell cultures and PDXO-PDX-derived organoids. We also provide an overview of current clinical investigations and new developments in the area of patient-derived cancer models. Moreover, attention is paid to databases of patient-derived cancer models, which are collected in specialized repositories. We believe that the widespread use of patient-derived cancer models will improve our knowledge of cancer cell biology and contribute to the development of more effective personalized cancer treatment strategies. abstract_id: PUBMED:32181445 Biliary tract cancer patient-derived xenografts: Surgeon impact on individualized medicine. 
Background & Aims: Biliary tract tumors are uncommon but highly aggressive malignancies with poor survival outcomes. Due to their low incidence, research into effective therapeutics has been limited. Novel research platforms for pre-clinical studies are desperately needed. We sought to develop a patient-derived biliary tract cancer xenograft catalog. Methods: With appropriate consent and approval, surplus malignant tissues were obtained from surgical resection or radiographic biopsy and implanted into immunocompromised mice. Mice were monitored for xenograft growth. Established xenografts were verified by a hepatobiliary pathologist. Xenograft characteristics were correlated with original patient/tumor characteristics and oncologic outcomes. A subset of xenografts was then genomically characterized using Mate Pair sequencing (MPseq). Results: Between October 2013 and January 2018, 87 patients with histologically confirmed biliary tract carcinomas were enrolled. From the 87 patients, 47 validated PDX models were successfully generated. The majority of the PDX models were created from surgical resection specimens (n = 44, 94%), which were more likely to successfully engraft when compared to radiologic biopsies (p = 0.03). Histologic recapitulation of original patient tumor morphology was observed in all xenografts. Successful engraftment was an independent predictor for worse recurrence-free survival. MPseq showed genetically diverse tumors with frequent alterations of CDKN2A, SMAD4, NRG1, and TP53. Sequencing also identified worse survival in patients with tumors containing tetraploid genomes. Conclusions: This is the largest series of biliary tract cancer xenografts reported to date. Histologic and genomic analysis of patient-derived xenografts demonstrates accurate recapitulation of original tumor morphology with direct correlations to patient outcomes. Successful development of biliary cancer tumorgrafts is feasible and may be used to direct subsequent therapy in high recurrence risk patients. Lay Summary: Patient biliary tract tumors grown in immunocompromised mice are an invaluable resource in the treatment of biliary tract cancers. They can be used to guide individualized cancer treatment in high-risk patients. abstract_id: PUBMED:27186422 Successful establishment of patient-derived tumor xenografts from gastrointestinal stromal tumor-a single center experience. Patient-derived tumor xenografts (PDTX) generally represent a more reliable model of human disease, in which the preclinical efficacy of potential drugs can be evaluated. To date, no stable gastrointestinal stromal tumor (GIST) PDTX models have been reported. In this study, we aimed to establish stable GIST PDTX models, to evaluate whether these models accurately reflected the histological features of the corresponding patient tumors, and to create reliable GIST PDTX models for our future experiments. By engrafting fresh patient GIST tissues into immune-compromised mice (BALB/c athymic mice), 4 PDTX models were established. Histological features were assessed by a qualified pathologist based on H&E staining, CD117, and DOG-1. We also conducted whole-exome sequencing (WES) for the 4 established GIST PDTX models to test whether the models still harbored the same mutations detected in the corresponding patient tumors, and to gain deeper insight into the genetic profile of the models we have established, which will inform our future experiments. 
To explore the tumorigenesis mechanism of GIST, we also performed a statistical analysis of the genes detected as nonsynonymously mutated in all 4 samples. All 4 GIST PDTX models retained the histological features of the corresponding human tumors, with the original morphology type and positive staining for CD117 and DOG-1. The same mutation site was detected in both the GIST PDTX models and their parental tumors, which confirmed the genetic consistency. The stability of molecular profiles observed within the GIST PDTX models provides confidence in the utility and translational significance of these models for in vivo testing of personalized therapies. To date, this is the first study to successfully establish a GIST PDTX model whose genetic profile was revealed by whole-exome sequencing. Our experience could be of great use. Answer: Special read alignment strategies are indeed necessary when handling sequencing reads from patient-derived tumor xenografts (PDTXs), as these samples can contain a mix of human and mouse DNA. The presence of mouse DNA can affect the accuracy of alignment and variant calling if not properly accounted for. Studies have shown that using special alignment strategies, such as "filtering" and "combined reference" strategies, can perform better than aligning reads directly to the human reference genome. These strategies can reduce false negative variant calls without significantly increasing the false positive rate, which is particularly important in exome sequencing where minimizing false non-synonymous SNVs is crucial (PUBMED:25539684). The use of an In silico Combined human-mouse Reference Genome (ICRG) for alignment has been shown to discriminate between human and mouse reads with up to 99.9% accuracy and significantly decrease the number of false positive somatic mutations caused by misalignment by more than 99.9%. This approach also allows for the computational dissection of the transcriptome and methylome of human tumor cells and mouse stroma in RNA-seq and RRBS data analysis, respectively. Moreover, this method has demonstrated similar or higher accuracy compared to previously reported approaches while requiring significantly less computing time, suggesting a cost-effective solution (PUBMED:29304755). For chromatin conformation capture (Hi-C) sequencing of PDTXs, the removal of mouse reads had little-to-no effect on data quality compared with the direct alignment strategy. However, the library preparation strategy had the largest effect on all quality metrics, indicating that while special read alignment strategies may not be as critical for Hi-C data, other factors such as library preparation are more influential (PUBMED:33880552). The R-package XenofilteR has been developed to separate mouse from human sequence reads based on the edit distance between a sequence read and the reference genome, removing over 99.9% of mouse-origin reads while retaining human sequences. This tool has been shown to outperform currently available tools, suggesting its utility in improving the accuracy of mutation analysis and gene expression levels in xenograft samples (PUBMED:30286710). In conclusion, special read alignment strategies are necessary to accurately analyze sequencing data from PDTXs. These strategies are cost-effective in terms of improving the accuracy of variant calling and other analyses, which is essential for the development of personalized therapies and cancer research using PDTX models.
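To make the read-separation strategies described in the answer above concrete, here is a minimal illustrative sketch of the "combined reference" approach. It is not the published XenofilteR or ICRG code; the BAM file names, the "hs_"/"mm_" contig prefixes, and the decision rule (discard any read whose primary alignment lands on a mouse contig) are assumptions made for the example.

    # Hedged sketch: keep only human-derived reads after aligning PDX data
    # (e.g., with BWA-MEM) against a single combined human+mouse reference
    # whose human contigs are prefixed "hs_" and mouse contigs "mm_".
    # File names and prefixes are illustrative assumptions.
    import pysam

    IN_BAM = "pdx_combined_ref.bam"    # alignment against the combined reference
    OUT_BAM = "pdx_human_only.bam"     # human-only reads kept for variant calling

    with pysam.AlignmentFile(IN_BAM, "rb") as src, \
         pysam.AlignmentFile(OUT_BAM, "wb", template=src) as dst:
        kept = dropped = 0
        for read in src:
            # count each read once: skip secondary/supplementary records
            if read.is_secondary or read.is_supplementary:
                continue
            # unmapped reads and reads whose primary hit is a mouse contig
            # are treated as non-human and excluded from downstream calling
            if read.is_unmapped or read.reference_name.startswith("mm_"):
                dropped += 1
                continue
            dst.write(read)
            kept += 1

    print(f"human reads kept: {kept}; mouse/unmapped reads dropped: {dropped}")

In practice the published tools filter read pairs jointly and compare alignment scores or edit distances against each genome, but the core idea of classifying reads by the genome they align to best is the same.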
Instruction: ROMEO: a rapid rule out strategy for low risk chest pain. Does it work in a UK emergency department? Abstracts: abstract_id: PUBMED:12204983 ROMEO: a rapid rule out strategy for low risk chest pain. Does it work in a UK emergency department? Aims: To examine the feasibility of using the ROMEO (rule out myocardial events on "obs" ward) pathway for low risk patients with chest pain in a UK emergency department. Methods: A prospective study was undertaken to determine outcomes for the first 100 patients entering the pathway (from May to Oct 1999). Serum troponin levels, serial ECG recordings, exercise test result, total length of stay, and final diagnoses were reviewed. Patients were telephoned after discharge to inquire about persisting or recurrent pain, and further investigations after completing the ROMEO pathway. Results: 82 of 100 (82%) had myocardial damage excluded by serum troponin assay. Sixty two of 82 (76%) of these completed exercise tolerance testing (ETT). Fifty seven of 62 (92%) ETTs were negative. Twenty of 82 (26%) did not undergo ETT because of mobility problems, recent ETT, or if considered very low probability of cardiac pain on consultant review. Five of 100 (5%) had an increased initial troponin and five of 100 (5%) had an increased 12 hour troponin. These patients were referred for admission under the general physicians. Seven of 100 (7%) were referred for other reasons (late ECG changes, continuing or worsening pain). One patient self discharged. Length of stay varied because of changes to arrangements for ETT. The median time for all patients over the period studied was 23 hours. All patients were discharged within an hour of a negative ETT. FOLLOW UP RESULTS: 67 of 74 (91%) eligible patients were contacted by telephone. Forty six of 67 (69%) had no further pain, attendances, or GP consultations. Six of 67 (9%) had further cardiological investigation or treatment. Conclusions: A rapid rule out strategy such as the ROMEO pathway is feasible in the UK healthcare setting and provides standardised and consistent evaluation. abstract_id: PUBMED:35543712 Guidelines for Reasonable and Appropriate Care in the Emergency Department 2 (GRACE-2): Low-risk, recurrent abdominal pain in the emergency department. This second Guideline for Reasonable and Appropriate Care in the Emergency Department (GRACE-2) from the Society for Academic Emergency Medicine is on the topic "low-risk, recurrent abdominal pain in the emergency department." The multidisciplinary guideline panel applied the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to assess the certainty of evidence and strength of recommendations regarding four priority questions for adult emergency department patients with low-risk, recurrent, undifferentiated abdominal pain. The intended population includes adults with multiple similar presentations of abdominal signs and symptoms recurring over a period of months or years. 
The panel reached the following recommendations: (1) if a prior negative computed tomography of the abdomen and pelvis (CTAP) has been performed within 12 months, there is insufficient evidence to accurately identify populations in whom repeat CTAP imaging can be safely avoided or routinely recommended; (2) if CTAP with IV contrast is negative, we suggest against ultrasound unless there is concern for pelvic or biliary pathology; (3) we suggest that screening for depression and/or anxiety may be performed during the ED evaluation; and (4) we suggest an opioid-minimizing strategy for pain control. EXECUTIVE SUMMARY: The GRACE-2 writing group developed clinically relevant questions to address the care of adult patients with low-risk, recurrent, previously undifferentiated abdominal pain in the emergency department (ED). Four patient-intervention-comparison-outcome-time (PICOT) questions were developed by consensus of the writing group, who performed a systematic review of the literature and then synthesized direct and indirect evidence to formulate recommendations, following GRADE methodology. The writing group found that despite the commonality and relevance of these questions in emergency care, the quantity and quality of evidence were very limited, and even fundamental definitions of the population and outcomes of interest are lacking. Future research opportunities include developing precise and clinically relevant definitions of low-risk, recurrent, undifferentiated abdominal pain and determining the scope of the existing populations in terms of annual national ED visits for this complaint, costs of care, and patient and provider preferences. abstract_id: PUBMED:37119081 A New Clinical Prediction Rule for Infective Endocarditis in Emergency Department Patients With Fever: Definition and First Validation of the CREED Score. Background Infective endocarditis (IE) could be suspected in any febrile patients admitted to the emergency department (ED). This study was aimed at assessing clinical criteria predictive of IE and identifying and prospectively validating a sensible and easy-to-use clinical prediction score for the diagnosis of IE in the ED. Methods and Results We conducted a retrospective observational study, enrolling consecutive patients with fever admitted to the ED between January 2015 and December 2019 and subsequently hospitalized. Several clinical and anamnestic standardized variables were collected and evaluated for the association with IE diagnosis. We derived a multivariate prediction model by logistic regression analysis. The identified predictors were assigned a score point value to obtain the Clinical Rule for Infective Endocarditis in the Emergency Department (CREED) score. To validate the CREED score we conducted a prospective observational study between January 2020 and December 2021, enrolling consecutive febrile patients hospitalized after the ED visit, and evaluating the association between the CREED score values and the IE diagnosis. A total of 15 689 patients (median age, 71 [56-81] years; 54.1% men) were enrolled in the retrospective cohort, and IE was diagnosed in 267 (1.7%). The CREED score included 12 variables: male sex, anemia, dialysis, pacemaker, recent hospitalization, recent stroke, chest pain, specific infective diagnosis, valvular heart disease, valvular prosthesis, previous endocarditis, and clinical signs of suspect endocarditis. 
The CREED score identified 4 risk groups for IE diagnosis, with an area under the receiver operating characteristic curve of 0.874 (0.849-0.899). The prospective cohort included 13 163 patients, with 130 (1.0%) IE diagnoses. The CREED score had an area under the receiver operating characteristic curve of 0.881 (0.848-0.913) in the validation cohort, not significantly different from the one calculated in the retrospective cohort (P=0.578). Conclusions In this study, we propose and prospectively validate the CREED score, a clinical prediction rule for the diagnosis of IE in patients with fever admitted to the ED. Our data reflect the difficulty of creating a meaningful tool able to identify patients with IE among this general and heterogeneous population because of the complexity of the disease and its low prevalence in the ED setting. abstract_id: PUBMED:34187993 Evaluation of HEAR score to rule-out major adverse cardiac events without troponin test in patients presenting to the emergency department with chest pain. Background And Importance: Current guidelines for patients presenting to the emergency department with chest pain without ST-segment elevation myocardial infarction (non-STEMI) on electrocardiogram are based on troponin measurement. The HEART score is reportedly a reliable work-up strategy that combines clinical evaluation with troponin value. A clinical rule that could select very low-risk patients without the need for a blood test (HEAR score, being the HEART score without the troponin item) would be of great interest. Objectives: To prospectively assess the safety of a HEAR score <2 to rule out non-STEMI without troponin measurement. Secondary objective was to assess the safety of a sequential strategy that combines the HEAR score and HEART (defined as the two-step HEART strategy). Design, Settings And Participants: Prospective observational study in six emergency departments. Patients with nontraumatic chest pain and no alternative diagnosis were included and followed up for 45 days. Patients were considered at low risk if the HEAR score was <2 or, for the two-step HEART strategy, if the HEART score was <4. Outcomes Measure And Analysis: The primary endpoint was the 45-day rate of major adverse cardiac events (MACE) in patients with a HEAR score <2. A HEAR score-based strategy was considered safe if the rate of the primary endpoint was below 1%, with an upper margin of the 95% confidence interval (CI) below 3%. Results: Among 1452 patients included, 1402 were analyzed and 97 (7%) had a MACE during the follow-up period. The HEAR score was <2 in 279 (20%) patients and one presented a MACE [0.4% (95% CI: 0.01-1.98)]. The two-step HEART strategy classified an additional 476 patients (34%) as low risk, and one of these 476 patients had a MACE [0.3% (95% CI: 0.03-0.95)]. The two-step HEART strategy would have theoretically avoided 360 troponin measurements (19%). Conclusions: In our prospective multicenter study, a HEAR-based work-up strategy was safe, with a very low risk of MACE at 45 days. We also report that a two-step HEART-based strategy may safely allow a significant reduction of troponin measurements in patients presenting to the emergency department with chest pain. abstract_id: PUBMED:29922729 The HEART score: A guide to its application in the emergency department. Chest pain is one of the most common, potentially serious presenting complaints for adult emergency department (ED) visits. 
The challenge of acute coronary syndrome (ACS) identification with appropriate disposition is quite significant. Many of these patients are low risk and can be managed non-urgently in the outpatient environment; other patients, however, are intermediate to high risk for ACS and should be managed more aggressively, likely with inpatient admission and cardiology consultation. The HEART score, a recently derived clinical decision rule aimed at the identification of risk in the undifferentiated chest pain patient, is potentially quite useful as an adjunct to physician medical decision-making. The HEART score identifies patients at low, intermediate, and high risk for short-term adverse outcome resulting from ACS. As is true of all such clinical decision rules, the physician should consider the information provided the HEART score yet exercise clinical judgment in the ultimate determination of management strategy in the adult chest pain patient suspected of ACS. abstract_id: PUBMED:26667086 Simplified Predictive Instrument to Rule Out Acute Coronary Syndromes in a High-Risk Population. Background: It is unclear whether diagnostic protocols based on cardiac markers to identify low-risk chest pain patients suitable for early release from the emergency department can be applied to patients older than 65 years or with traditional cardiac risk factors. Methods And Results: In a single-center retrospective study of 231 consecutive patients with high-risk factor burden in which a first cardiac troponin (cTn) level was measured in the emergency department and a second cTn sample was drawn 4 to 14 hours later, we compared the performance of a modified 2-Hour Accelerated Diagnostic Protocol to Assess Patients with Chest Pain Using Contemporary Troponins as the Only Biomarker (ADAPT) rule to a new risk classification scheme that identifies patients as low risk if they have no known coronary artery disease, a nonischemic electrocardiogram, and 2 cTn levels below the assay's limit of detection. Demographic and outcome data were abstracted through chart review. The median age of our population was 64 years, and 75% had Thrombosis In Myocardial Infarction risk score ≥2. Using our risk classification rule, 53 (23%) patients were low risk with a negative predictive value for 30-day cardiac events of 98%. Applying a modified ADAPT rule to our cohort, 18 (8%) patients were identified as low risk with a negative predictive value of 100%. In a sensitivity analysis, the negative predictive value of our risk algorithm did not change when we relied only on undetectable baseline cTn and eliminated the second cTn assessment. Conclusions: If confirmed in prospective studies, this less-restrictive risk classification strategy could be used to safely identify chest pain patients with more traditional cardiac risk factors for early emergency department release. abstract_id: PUBMED:32472247 Chest Pain Evaluation in the Emergency Department: Risk Scores and High-Sensitivity Cardiac Troponin. Purpose Of Review: As many as 10 million patients present annually to the emergency department in the USA with symptoms concerning for acute myocardial infarction. The use of risk scores for patients with chest pain or equivalent without ST-segment elevation on the electrocardiogram. The adaptation in the USA of high sensitivity troponin assays requires rethinking of how to best optimize troponin testing within a risk score. 
Recent Findings: Patients are risk stratified using a combination of validated risk scores, biomarkers, and both noninvasive and invasive testing. The advent of high-sensitivity troponins has served to augment existing risk scores in the identification of low-risk patients for early discharge, as well as led to the introduction of new rapid rule-out protocols by which acute myocardial infarction can be excluded by biomarker evaluation more quickly. The emergence of machine learning algorithms may further enhance provider's ability to quickly diagnose or exclude myocardial infarction in the emergency department. The addition of high sensitive troponin assays to established emergency department risk scores is providing new opportunities to improve the timeliness and accuracy of the evaluation of patients presenting with a possible myocardial infarction. Utilizing the time between troponin measures as a variable combined with clinical risk factors with new algorithms may further serve to improve diagnostic accuracy. abstract_id: PUBMED:27461090 Validation of the new Vancouver Chest Pain Rule in Asian chest pain patients presenting at the emergency department. Objectives: The new Vancouver Chest Pain (VCP) Rule recommends early discharge for chest pain patients who are at low risk of developing acute coronary syndrome (ACS), and thus can be discharged within 2 hours of arrival at the emergency department (ED). This study aimed to assess the performance of the new VCP Rule for Asian patients presenting with chest pain at the ED. Methods: This prospective cohort study involved patients attended to at the ED of a large urban centre. Patients of at least 25 years old, presenting with stable chest pain and a non-diagnostic ECG, and with no history of active coronary artery disease were included in the study. The main outcome measures were cardiac events, angioplasty, or coronary artery bypass within 30 days of enrolment. Results: The study included 1690 patients from 27 August 2000 to 1 May 2002, with 661 patients fulfilling the VCP criteria. Of those for early discharge, 24 had cardiac events and 13 had angioplasty or bypass at 30 days, compared to 91 and 41, respectively, for those unsuitable for discharge. This gave the rule a sensitivity of 78.1% for cardiac events, including angioplasty and bypass. Specificity was 41.0%, and negative predictive value (NPV) was 94.4%. Conclusion: We found the new VCP Rule to have moderate sensitivity and poor specificity for adverse cardiac events in our population. With an NPV of less than 100%, this means that a small proportion of patients sent home with early discharge would still have adverse cardiac events. abstract_id: PUBMED:34228849 Guidelines for reasonable and appropriate care in the emergency department (GRACE): Recurrent, low-risk chest pain in the emergency department. This first Guideline for Reasonable and Appropriate Care in the Emergency Department (GRACE-1) from the Society for Academic Emergency Medicine is on the topic: Recurrent, Low-risk Chest Pain in the Emergency Department. 
The multidisciplinary guideline panel used The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to assess the certainty of evidence and strength of recommendations regarding eight priority questions for adult patients with recurrent, low-risk chest pain and have derived the following evidence based recommendations: (1) for those &gt;3 h chest pain duration we suggest a single, high-sensitivity troponin below a validated threshold to reasonably exclude acute coronary syndrome (ACS) within 30 days; (2) for those with a normal stress test within the previous 12 months, we do not recommend repeat routine stress testing as a means to decrease rates of major adverse cardiac events at 30 days; (3) insufficient evidence to recommend hospitalization (either standard inpatient admission or observation stay) versus discharge as a strategy to mitigate major adverse cardiac events within 30 days; (4) for those with non-obstructive (&lt;50% stenosis) coronary artery disease (CAD) on prior angiography within 5 years, we suggest referral for expedited outpatient testing as warranted rather than admission for inpatient evaluation; (5) for those with no occlusive CAD (0% stenosis) on prior angiography within 5 years, we recommend referral for expedited outpatient testing as warranted rather than admission for inpatient evaluation; (6) for those with a prior coronary computed tomographic angiography within the past 2 years with no coronary stenosis, we suggest no further diagnostic testing other than a single, normal high-sensitivity troponin below a validated threshold to exclude ACS within that 2 year time frame; (7) we suggest the use of depression and anxiety screening tools as these might have an effect on healthcare use and return emergency department (ED) visits; and (8) we suggest referral for anxiety or depression management, as this might have an impact on healthcare use and return ED visits. abstract_id: PUBMED:8046525 Physician response to a prediction rule for the triage of emergency department patients with chest pain. Objective: To determine the response of physicians to a noncoercive prediction rule for the triage of emergency department patients with chest pain. Design: Prospective time-series intervention study. Setting: A university hospital emergency department. Participants/patients: 68 physicians, all of whom were responsible for the triage of at least one of 252 patients presenting to the emergency department with a chief complaint of acute chest pain. Intervention: A previously validated algorithmic prediction rule that was attached to the back of patient data forms in the emergency department. Measurements: Patients' clinical data were recorded by the examining physician in the emergency department or by a research nurse blinded to patient outcome. The physicians recorded their own estimates of the risk of acute myocardial infarction and their reactions to the prediction rule in a self-administered questionnaire completed at the time of triage. Main Results And Conclusions: The physicians reported that they looked at the prediction rule during the triage of 115 (46%) of the 252 patients. The likelihood of using the prediction rule decreased significantly with increasing level of physician training. The most common reasons given for disregarding the prediction rule were confidence in unaided decision making and lack of time. 
The physicians reported that of the 115 cases for which the prediction rule was used, only one triage decision (1%) was changed by it. Future research should explore how prediction rules can be designed and implemented to surmount the barriers highlighted by these data. Answer: The ROMEO (rule out myocardial events on "obs" ward) pathway was examined for its feasibility in a UK emergency department, and the results suggest that it is indeed feasible. The study involved a prospective analysis of outcomes for the first 100 patients entering the pathway from May to October 1999. The outcomes reviewed included serum troponin levels, serial ECG recordings, exercise test results, total length of stay, and final diagnoses. Follow-up results were obtained by telephoning patients after discharge to inquire about persisting or recurrent pain and further investigations after completing the ROMEO pathway. The findings showed that 82% of patients had myocardial damage excluded by serum troponin assay, and 76% of these completed exercise tolerance testing (ETT), with 92% of ETTs being negative. Some patients did not undergo ETT due to mobility problems, recent ETT, or if considered very low probability of cardiac pain on consultant review. A small percentage of patients (5%) had an increased initial troponin, and another 5% had an increased 12-hour troponin, leading to their referral for admission under general physicians. Seven percent were referred for other reasons such as late ECG changes or continuing or worsening pain. The median time for all patients over the period studied was 23 hours, with all patients being discharged within an hour of a negative ETT. Follow-up results indicated that 69% of eligible patients contacted by telephone had no further pain, attendances, or GP consultations, and 9% had further cardiological investigation or treatment. The study concluded that a rapid rule-out strategy like the ROMEO pathway is feasible in the UK healthcare setting and provides a standardized and consistent evaluation for low-risk patients with chest pain (PUBMED:12204983).
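The rule-out studies summarized above report their performance as sensitivity, specificity, and negative predictive value (for example, the Vancouver Chest Pain Rule figures quoted earlier). As a reminder of how such figures are derived from a 2x2 table, here is a small illustrative sketch; the counts are hypothetical and are not taken from the ROMEO, HEAR, or Vancouver cohorts.

    # Hedged sketch: sensitivity, specificity, NPV and PPV from a 2x2 table.
    # "Positive" here means the rule flags the patient as NOT suitable for
    # early discharge; the counts below are made up for illustration only.
    def diagnostic_metrics(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)   # patients with events who were flagged
        specificity = tn / (tn + fp)   # event-free patients who were cleared
        npv = tn / (tn + fn)           # cleared patients who stayed event-free
        ppv = tp / (tp + fp)           # flagged patients who had an event
        return sensitivity, specificity, npv, ppv

    # hypothetical cohort: 1,000 patients, 80 of whom have a 30-day event
    sens, spec, npv, ppv = diagnostic_metrics(tp=65, fp=320, fn=15, tn=600)
    print(f"sensitivity={sens:.1%}  specificity={spec:.1%}  NPV={npv:.1%}  PPV={ppv:.1%}")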
Instruction: Does wearing unstable shoes reduce low back pain and disability in nurses? Abstracts: abstract_id: PUBMED:25780261 Does wearing unstable shoes reduce low back pain and disability in nurses? A randomized controlled pilot study. Objective: To evaluate if wearing unstable shoes reduces low back pain and disability in nurses. Design: A randomized controlled trial. Setting: Hospitals and homecare. Subjects: A total of 20 matched female registered nurses with low back pain. The mean (standard deviation) age was 31 years (5) for the control and 34 years (6) for the intervention group; height was 161 cm (5) and 165 cm (7), respectively. Interventions: The intervention group received unstable shoes at Week 2 to wear for at least 36 h/week for a month. Main Measures: The Oswestry Low Back Pain Disability Questionnaire and a visual analogue pain scale. Results: The mean (standard deviation) pain level was 6 (1) at baseline vs. 6 (2) at Week 6 for the control group, and 5 (1) vs. 1 (1) for the intervention group. The mean (standard deviation) disability level was 31% (9) at baseline vs. 28% (7) at Week 6 for the control, and 27% (12) vs. 13% (5) for the intervention group. There were no significant changes over time on pain or disability levels for the control group. The intervention group reported lower levels of pain on Weeks 4 (mean difference ⩾-1.4, p ⩽ 0.009) and 6 (mean difference ⩾-3.1, p &lt; 0.001). Disability levels were also lower on Weeks 4 (mean difference = -4.5%, p NS) and 6 (mean difference = -14.1%, p = 0.020). Conclusions: Wearing unstable shoes reduced low back pain and disability in nurses and might be helpful as part of the back pain rehabilitation process. abstract_id: PUBMED:29333872 Effects and underlying mechanisms of unstable shoes on chronic low back pain: a randomized controlled trial. Objective: To investigate the effects that wearing unstable shoes has on disability, trunk muscle activity, and lumbar spine range of motion (ROM) in patients with chronic lower back pain (CLBP). Design: Randomized controlled trial. Setting: Orthopedic Surgery Service. Participants: We randomized 40 adults with nonspecific CLBP either to an unstable shoes group ( n = 20) or to the control group ( n = 20). Intervention: The participants in the unstable shoes group were advised to wear these shoes for a minimum of six hours a day for four weeks. Control group participants were asked to continue wearing their regular shoes. Outcome Measures: Our primary outcome was measurement of back-related dysfunction, assessed using the Roland-Morris Disability Questionnaire. Secondary outcomes included changes in electromyographic (EMG) activity of erector spinae (ES), rectus abdominis (RA), internus obliquus (IO), and externus obliquus (EO) muscles, and changes in lumbar spine ROM. Results: Between-group analysis highlighted a significant decrease in disability in the unstable shoes group compared to the control (-5, 95% confidence interval (CI) = -8.4 to -1.6). Our results revealed a significant increase in the percentage of RA, ES, IO, and EO EMG activity and in lumbar spine ROM in the unstable shoes group compared to the control group. Moreover, our results showed a significant negative correlation between disability and the percentage of ES, RA, and IO muscle activity at the end of the intervention. Conclusion: This study shows that the use of unstable shoes contributes to improvements in disability, which are likely related to increased trunk muscle activity and lumbar spine ROM. 
abstract_id: PUBMED:31328545 Unstable shoes for the treatment of lower back pain: a meta-analysis of randomized controlled trials. Objective: We aimed to perform a systematic review and meta-analysis to compare the treatment effects of unstable shoes and flat shoes in lower back pain patients. Data Sources: Literature databases, including PubMed, Web of Science, and EMBASE (up to June 2019), were searched systematically. Review Methods: Two authors independently screened the retrieved records and identified randomized controlled trials in which patients with lower back pain wore unstable shoes as the intervention and flat shoes as the control. Relevant data were extracted for meta-analysis using Review Manager 5.3 software. The Grading of Recommendations Assessment, Development and Evaluation approach was used to assess the pooled outcome evidence levels. Results: Five randomized controlled trials and 251 patients were included in the analysis. The meta-analysis results showed that there was a tendency toward a reduction in the Roland-Morris disability questionnaire score (mean difference (MD) -2.16, 95% confidence interval (CI) -4.28 to -0.03, I² = 53%) and pain score (MD -0.84, 95% CI -1.66 to -0.02, I² = 84%) in patients wearing unstable shoes compared to those wearing flat shoes. There was no significant difference in quality-of-life scores between the unstable shoe and flat shoe groups (MD -0.59, 95% CI -6.18 to 5.01, I² = 0%). Functional disability and pain scores were determined to have very low-quality evidence, and quality-of-life scores were determined to have low-quality evidence according to the Grading of Recommendations Assessment, Development and Evaluation analysis. Conclusion: Unstable shoes may be effective in treating lower back pain in the clinic, but the conclusion is limited by the current low-quality studies. abstract_id: PUBMED:29909231 Effects of unstable shoes on trunk muscle activity in patients with chronic low back pain. Introduction: Unstable shoes were developed as a walking device to strengthen the lower extremity muscles and reduce joint loading. Many studies have reported increased muscle activity throughout the gait cycle in most of the lower limb muscles in healthy adults using these shoes. However, no previous studies have explored the effects of wearing unstable shoes on trunk muscle activity in patients with chronic low back pain (CLBP). Therefore, the aim of the present study was to compare the activity of selected trunk muscles in patients with CLBP during a gait test while walking in unstable shoes or conventional flat shoes (control). Methods: Thirty-five CLBP patients (51.1 ± 12.4 y; 26 ± 3.8 kg/m2; 9.3 ± 5.2 Roland Morris Disability Questionnaire score) were recruited from the Orthopedic Surgery Service at the Hospital to participate in this cross-sectional study. All the participants underwent gait analysis by simultaneously collecting surface electromyography (EMG) data from the erector spinae (ES), rectus abdominis (RA), obliquus internus (OI), and obliquus externus (OE) muscles, while walking on a treadmill with flat control shoes or experimental unstable shoes. Results: The results showed significantly higher %EMG activity in the ES (mean difference: 1.8%; 95% CI: 1.3-2.2), RA (mean difference: 1.5%; 95% CI: 0.3-2.7), and OI (mean difference: 1.5%; 95% CI: 0.2-2.8) in the unstable versus the flat-shoe condition, with a large effect size for the ES (Cohen's d = 1.27). 
Conclusions: Based on these findings, the use of unstable shoes may be implicated in promoting spine stability, particularly in improving neuromuscular control of the trunk muscles in CLBP treatment. abstract_id: PUBMED:24985691 Effects of unstable shoes on chronic low back pain in health professionals: a randomized controlled trial. Objective: The aim of this study was to evaluate the effectiveness of unstable shoes in reducing low back pain in health professionals. Methods: Of a volunteer sample of 144 participants, 40 with nonspecific chronic low back pain were eligible and enrolled in this study. Participants were randomized to an intervention group, who wore unstable shoes (model MBT Fora), or a control group, who wore conventional sports shoes (model Adidas Bigroar). The participants had to wear the study shoes during their work hours, and at least 6 hours per workday, over a period of 6 weeks. The primary outcome was low back pain assessed on a Visual Analog Scale. The secondary outcomes were patient satisfaction, disability evaluated using the Roland-Morris questionnaire, and quality of life evaluated using the EQ-VAS. Results: The intervention group showed a significant decrease in pain scores compared to the control group. The rate of satisfaction was higher in the intervention group (79%) compared to the control group (25%). There was no significant difference in the Roland-Morris disability questionnaire score or the EQ-VAS scale. Conclusions: The results of this clinical trial suggest that wearing unstable shoes for 6 weeks significantly decreased low back pain in patients suffering from chronic low back pain but had no significant effect on quality of life and disability scores. abstract_id: PUBMED:25854301 Effects of unstable shoes on trunk muscle activity and lumbar spine kinematics. Background: An unstable shoe was developed as a walking device to strengthen the lower extremity muscles and reduce joint loading. A large number of studies have reported increased electromyographic (EMG) activity throughout the gait cycle in most of the lower limb muscles, and significant kinematic changes in the lower extremity. However, no studies have investigated the effects of wearing unstable shoes on spine kinematics and trunk muscle activity during gait. Aim: To compare trunk muscle activity and lumbar spine range of motion (ROM) during gait using an unstable shoe and a conventional stable control shoe. Design: Cross-sectional study. Setting: A biomechanics laboratory. Population: Forty-eight healthy voluntary participants (24.5±5.6 years and 22.7±6.8 kg/m2). Methods: Subjects underwent gait analysis with simultaneous collection of surface EMG data from the erector spinae (ES) and rectus abdominis (RA) and of lumbar spine sagittal plane ROM while treadmill walking in regular shoes and unstable shoes. Results: The results showed that the unstable shoes resulted in significantly higher ES and RA EMG muscle activity levels in all gait phases compared to control shoes (P<0.001). In addition, the unstable shoe condition showed significantly higher mean (mean difference: 3.1°; 95% CI 2.2° to 4°) and maximum (mean difference: 4.5°; 95% CI 2.6° to 6.5°) lumbar spine extension values (P<0.001). Conclusions: Unstable shoes increase trunk muscle activity (ES, RA) and lumbar lordosis during gait compared to control shoes. 
Clinical Rehabilitation Impact: Based on these findings, the use of unstable shoes may have potential implications for promoting spine tissue health, particularly in strengthening trunk muscles in healthy populations or in low back pain treatment. abstract_id: PUBMED:23928715 Effectiveness of rocker sole shoes in the management of chronic low back pain: a randomized clinical trial. Study Design: Multicenter, assessor-blind, randomized, clinical trial. Objective: To compare the effectiveness of rocker sole footwear to traditional flat sole footwear as part of the management of people with low back pain (LBP). Summary Of Background Data: During the past decade, persistent advertising has claimed that footwear constructed with a rocker sole will reduce LBP. However, there is no robust evidence to support these claims. Methods: One hundred fifteen people with chronic LBP were randomized to wear rocker sole shoes or flat sole shoes for a minimum of 2 hours each day while standing and walking. Primary outcome was the Roland Morris Disability Questionnaire (RMDQ). In addition, participants attended an exercise and education program once a week for 4 weeks and wore their assigned shoes during these sessions. Participants were assessed, without their knowledge of group allocation, prerandomization and at 6 weeks, 6 months, and 1 year (main outcome point). Analysis was by the intention-to-treat method. Results: At 12 months, data from 44 of 58 (77.2%) of the rocker sole group and 49 of 57 (84.5%) of the flat sole group were available for analysis. In the rocker sole group, mean reduction in RMDQ was -3.1 (95% CI [confidence interval], -4.5 to -1.6), and in the flat sole group, it was -4.4 (95% CI, -5.8 to -3.1) (a greater negative value represents a greater reduction in disability). At 6 months, more people wearing flat shoes compared with those wearing rocker shoes demonstrated a minimal clinically important improvement in disability (53.2% and 31.1%, respectively; P = 0.03). Between-group differences were not significant for RMDQ or any secondary outcomes (e.g., pain) at any time. People reporting pain when standing and walking at baseline (n = 59) reported a greater reduction in RMDQ at 12 months in the flat sole group (-4.4 [95% CI, -6.0 to -2.8], n = 29) than the rocker sole group (-2.0 [95% CI, -3.6 to -0.4], n = 30) (P < 0.05). Conclusion: Rocker sole shoes seem to be no more beneficial than flat sole shoes in affecting disability and pain outcomes in people with chronic LBP. Flat shoes are more beneficial for LBP aggravated by standing or walking. Level Of Evidence: N/A. abstract_id: PUBMED:32954802 The effects of shoes and insoles for low back pain: a systematic review and meta-analysis of randomized controlled trials. The aim of this review was to examine the effects of shoes and insoles on low back pain (LBP). Seven electronic databases were searched from their inception to May 2020. The methodological quality of the 14 included studies was assessed with the PEDro scale. Quality of evidence was assessed using GRADE. Moderate evidence was found for the disability questionnaire score (SMD, 0.52; 95% CI, 0.28 to 0.77; P < 0.001) and pain score (SMD, 0.61; 95% CI, 0.36 to 0.85; P < 0.001) of custom-made orthotics for chronic LBP compared with no orthotics/insoles intervention. Meta-analysis results also showed moderate evidence for the disability questionnaire score (SMD, 0.44; 95% CI, 0.05 to 0.82; P = 0.03) in patients who wore unstable shoes compared with regular shoes. 
Pain and life quality scores showed low-quality evidence of unstable shoes for chronic LBP. Custom-made orthotics and unstable shoes can be recommended to patients as a management option of chronic LBP. abstract_id: PUBMED:9794057 Components of initial and residual disability after back injury in nurses. Study Design: A pre- versus postintervention with concurrent control group design was used to investigate the effect of a workplace-based early intervention program on perception of disability in nurses with low back injury. Objectives: This report examines changes over time in the components of the Oswestry Low Back Pain and Disability Questionnaire in two groups of back-injured nurses-those who received the early intervention program (study) and those who were not offered the program (control). Summary Of Background Data: Early intervention programs can decrease morbidity, time lost from work, and compensation costs. Although perception of disability decreases, some residual disability remains, the nature of which is not clear. Methods: The Oswestry Low Back Pain and Disability Questionnaire scores of 40 study nurses and 118 control nurses at time of injury and at 6 months after injury were examined. Analysis of variance was used to compare changes in mean overall scores over time. The proportion of nurses reporting disability on individual components of the Oswestry Low Back Pain and Disability Questionnaire at each time period was compared with the results of a chi-square test. Results: Overall Oswestry Low Back Pain and Disability Questionnaire scores were similar between study and control nurses at time of injury, but were significantly lower in study nurses at 6 months after injury. However, scores of individual Oswestry Low Back Pain and Disability Questionnaire components that related to job demands increased over time; this was most apparent in lifting, particularly in study nurses. Conclusions: Although overall perception of disability decreased 6 months after injury, particularly in study nurses, disability in job-related activities persisted. Residual disability after back injury should be addressed in workplace-based prevention programs. abstract_id: PUBMED:23449540 Measurement of lumbar lordosis in static standing posture with and without high-heeled shoes. Objective: Some doctors and therapists believe that wearing high-heeled shoes causes increased lumbar lordosis and that this may be a cause of low back pain. The purpose of this study was to evaluate whether high-heeled shoes increase lumbar lordosis and to do so with more reliable methods and a larger sample size than used in previous studies. Methods: Fifty participants from a chiropractic university were included in a test group (32 female and 18 male) and 9 in a control group (3 female and 6 male). A Spinal Mouse was used to measure lumbar lordosis in test participants barefoot and then again with 3- or 4-in high-heeled shoes after a 10-minute adaptation period of walking and sitting and standing while wearing the shoes. Reliability of the testing conditions was evaluated with 9 barefoot control participants before and after an identical adaptation period, and intra- and interexaminer reliability of Spinal Mouse measurements was tested by use of a wooden model built to mimic the proportions of a human spine. Results: Both groups showed non-significant decreases in lordosis between the first and second scans (high heels: 23.4° to 22.8°, P = .17; control: 18.8° to 17.6°, P = .16). 
Scans of the wooden spine model were highly reliable (intra- and interexaminer intraclass correlation coefficients &gt; .999). Conclusions: Consistent with most previous studies, high-heeled shoes did not affect lumbar lordosis in most people while standing. Future research could investigate the effect of shoes during dynamic conditions or identify affected subgroups. Answer: Yes, wearing unstable shoes appears to reduce low back pain and disability in nurses. A randomized controlled pilot study found that nurses in the intervention group who wore unstable shoes for at least 36 hours per week for a month reported lower levels of pain and disability compared to the control group. The intervention group experienced a significant reduction in pain levels from a mean of 5 (1) at baseline to 1 (1) and in disability levels from 27% (12) to 13% (5) by the end of the study period (PUBMED:25780261). Additionally, a randomized controlled trial investigating the effects of unstable shoes on chronic low back pain (CLBP) showed that wearing these shoes for a minimum of six hours a day for four weeks led to a significant decrease in disability and an increase in trunk muscle activity and lumbar spine range of motion (ROM) in the unstable shoes group compared to the control group (PUBMED:29333872). A meta-analysis of randomized controlled trials also indicated a tendency toward a reduction in disability and pain scores in patients wearing unstable shoes compared to those wearing flat shoes, although the evidence was determined to be of very low to low quality (PUBMED:31328545). Furthermore, a study that evaluated the effectiveness of unstable shoes in reducing low back pain in health professionals found that wearing unstable shoes for 6 weeks significantly decreased low back pain in patients suffering from chronic low back pain (PUBMED:24985691). In summary, the evidence from these studies suggests that wearing unstable shoes can be beneficial in reducing low back pain and disability in nurses and potentially other populations with chronic low back pain.
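The meta-analyses cited above pool mean differences across trials and report 95% confidence intervals and I² heterogeneity. The sketch below shows the standard inverse-variance (fixed-effect) pooling and the Q/I² calculation on made-up study values; it does not reproduce the Review Manager 5.3 analyses of the cited reviews.

    # Hedged sketch: inverse-variance (fixed-effect) pooling of mean
    # differences plus Cochran's Q and I^2. The per-study values are
    # hypothetical and do not correspond to the trials cited above.
    import math

    studies = [
        (-2.4, 1.1),   # (mean difference, standard error) for trial 1
        (-1.0, 0.8),   # trial 2
        (-3.2, 1.5),   # trial 3
    ]

    weights = [1.0 / se ** 2 for _, se in studies]   # inverse-variance weights
    pooled = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

    # heterogeneity: Cochran's Q, then I^2 = max(0, (Q - df) / Q)
    q = sum(w * (md - pooled) ** 2 for (md, _), w in zip(studies, weights))
    df = len(studies) - 1
    i_squared = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0

    print(f"pooled MD = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}), I^2 = {i_squared:.0f}%")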
Instruction: Outcomes after metal-on-metal hip resurfacing: could we achieve better function? Abstracts: abstract_id: PUBMED:25293565 Metal-on-metal hip arthroplasty. In Denmark, 4,456 metal-on-metal (MoM) hip prostheses have been implanted. Evidence demonstrates that some patients develop adverse biological reactions causing failures of MoM hip arthroplasty. Some reactions might be systemic. Failure rates are associated with the type and the design of the MoM hip implant. A Danish surveillance programme has been initiated addressing these problems. abstract_id: PUBMED:23480886 Pseudotumour in metal-on-metal hip arthroplasty. Pseudotumours are sterile inflammatory lesions that can be found in soft tissues surrounding metal-on-metal and, more rarely, metal-on-polyethylene hip arthroplasties. They may cause local tissue and bone destruction and necessitate revision surgery. The pathogenesis of these lesions remains unclear; however, cups in an adverse position lead to high metal ion levels, which are associated with pseudotumours. The prodromal symptom of pseudotumour is persistent inguinal pain, but lesions may also be silent. All patients with MoM hip arthroplasties should be monitored according to the national recommendations. abstract_id: PUBMED:28707830 Patients with metal-on-metal hip arthroplasty: evaluation and management of complications. The potentially severe complications related to metal-on-metal (MoM) hip replacements have led to a dramatic decrease in their use. Large diameter heads are more likely to fail than smaller diameters, but complications have been described even with "small" diameters. Therefore, monitoring of MoM arthroplasties is mandatory. This includes physical examination, X-rays, metal ion levels, and potentially cross-sectional imaging. Although the pathophysiology of adverse reactions to metal debris (ARMD) is better understood, their evolution and the potential systemic complications remain unclear. Symptomatic hip arthroplasties, elevated ion levels, and ARMD may lead to revision of the components. In such a situation, an adequate strategy must be adopted given the high potential for complications. abstract_id: PUBMED:25708400 The future role of metal-on-metal hip resurfacing. Purpose: The purpose of this review was to assess the ten- to 15-year outcomes of metal-on-metal hip resurfacing (MoM HR) when performed at designing and independent centres, and to make recommendations for the future use of MoM HR. Methods: Studies reporting ten- to 15-year outcomes for modern MoM HR devices from both designing and independent centres were reviewed. Outcomes from these studies were assessed to allow the formulation of recommendations for the future use of MoM HR. Results: Two MoM HR designs, the Birmingham Hip Resurfacing (BHR) and Conserve Plus, have outcomes reported at a minimum of ten years. The BHR was the only device with outcomes reported at a minimum of ten years by both designing (overall survival of up to 95.8% at 15 years) and independent surgeons (overall survival of 87.1-94.5% at ten years). Implant survival in these seven BHR studies was influenced by the pre-operative diagnosis (primary osteoarthritis had better outcomes), gender (male patients had better outcomes), and femoral component head size (larger sizes had better outcomes). In contrast to independent centres, designing surgeons reported acceptable outcomes in female patients undergoing BHR. 
Conclusions: There remains a role for MoM HR in young active male patients with primary osteoarthritis, provided the surgeon has sufficient experience in the procedure, the implant has an established record, and the patient is aware of the potential risks associated with MoM bearings and HR. Very experienced HR surgeons may also consider this procedure in females provided they meet the refined inclusion criteria described (including femoral head sizes of 46 mm and above). abstract_id: PUBMED:26381801 Survey on the use and behaviour of metal-metal hip replacements in Spain. Background: Following medical device alerts published in different countries about problems with metal-on-metal total hip replacements, the Spanish Agency of Medicines and Medical Devices (AEMPS), in collaboration with the Spanish Hip Surgery Society, designed a national survey to gather information on the use and behaviour of these hip implants. Methods: The survey consisted of a questionnaire sent by e-mail to 283 clinical centre recipients of metal-on-metal hips, to be filled in by surgeons with expertise in the field. Results: A total of 257 questionnaires were completed. The response rate of the clinical centres was 36.7%. A total of 97.7% of the responses reported that clinical and radiological follow-ups are carried out, and 79.6% undertook metal ion analyses (chromium and cobalt). A large majority (83.6%) of the responders who used surface implants, and 70% of those with large-head implants, reported peri-operative complications. The most common complication was pain (25% with surface implants and 30.8% with large-head implants). Currently, 80.8% of those responding were considering abandoning implantation of these hip replacements. Conclusions: Despite the many limitations of this study, the survey has allowed us to obtain a quick first view of the implant scenario of metal-on-metal hip implants in Spain, and to determine the type of patient implanted, the time of implantation, the experience/expertise of the surgeons, and the type of follow-up carried out. abstract_id: PUBMED:24636447 Detection of metal ions in hair after metal-metal hip arthroplasty. Objective: There is an increase in the levels of metals in the serum and urine after the implantation of some models of metal-metal hip prostheses. It has recently been demonstrated that there is an association between these levels and the levels found in hair. The aim of this study is to determine the presence of metals in hair, and to find out whether these change over time or with the removal of the implant. Material And Method: The levels of chromium, cobalt and molybdenum were determined in the hair of 45 patients at 3, 4, 5, and 6 years after a hip surface replacement. The mean age was 57.5 years, and two were female. Further surgery was required to remove the replacement and implant a new model with metal-polyethylene friction in 11 patients, 5 of them due to metallosis and a periarticular cyst. Results: The mean levels of metals in hair were chromium 163.27 ppm, cobalt 61.98 ppm, and molybdenum 31.36 ppm, much higher than the levels found in the general population. A decrease in the levels of chromium (43.8%), molybdenum (51.1%), and cobalt (91.1%) was observed at one year in the patients who had further surgery to remove the prosthesis. Conclusions: High concentrations of metals in the hair are observed in hip replacements with metal-metal friction, which decrease when the implant is removed. 
The determination of metal ions in hair could be a good marker of the metal poisoning that occurs in these arthroplasty models. abstract_id: PUBMED:28755114 Metal-on-metal surface hip arthroplasty in patients with abnormal coxa anatomy: preliminary results. The purpose of this study was to evaluate early to intermediate results of metal-on-metal (MoM) hip resurfacing arthroplasty in patients with abnormal hip anatomy. We evaluated nine MoM hip resurfacing arthroplasties in eight patients with abnormal coxa anatomy performed at a district general hospital in the UK between March 1999 and November 2002. One patient had undergone a bilateral sequential hip resurfacing procedure. These patients were defined as having abnormal coxa anatomy by virtue of previous dysplastic disease of the hip in three cases, and previous Legg-Calve-Perthes disease, multiple epiphyseal dysplasia, T-cell acute lymphoblastic leukaemia, trauma, and sepsis in one case each. The mean follow-up was 40.8 months. The mean age at primary operation was 35 years (range: minimum 21 years; maximum 44 years). There were six male and two female patients. There were six right-sided and three left-sided procedures. All patients had satisfactory outcomes. There were no deep infections, dislocations, or femoral neck fractures. Although this is a short series, MoM resurfaced hips with appropriate case selection can yield satisfactory short-term to intermediate-term results in young and active patients with abnormal hip anatomy. abstract_id: PUBMED:18373996 Outcomes after metal-on-metal hip resurfacing: could we achieve better function? Objective: To report functional outcomes after metal-on-metal (MOM) hip resurfacing. Design: A cohort of 126 MOM hip resurfacing operations was reviewed 1 year after surgery. Setting: Hospital trust specializing in orthopedic surgery. Participants: Sixty-seven right and 59 left hips were reviewed in patients (N=120; 71 men, 49 women; mean age, 56±9y; range, 24-76y). Interventions: Not applicable. Main Outcome Measures: Administered once at follow-up. Function was measured using the Oxford Hip Score (OHS), Hip disability and Osteoarthritis Outcome Score, and UCLA Activity Score. Complications, pain, range of motion, Trendelenburg test, strength, walking, single-leg stand, stair climbing, and 10-m walk time were assessed. Results: Overall examination was satisfactory with few complications. High functional levels were reported. The median OHS was 15 and the median UCLA Activity Score 7 (active). For 25%, outcome was poor with persistent pain, reduced hip flexion (mean, 94.46°±12.7°), decreased strength (P<.001), restricted walking, and functional limitations. Conclusions: Information about outcomes is important for patients undergoing surgery. Hip resurfacing remains an emergent technology, with further follow-up and investigation warranted. One explanation for suboptimal recovery may be current rehabilitation, originally developed after total hip arthroplasty. Rehabilitation tailored to hip resurfacing, paced for this active population and progressed to higher demand activities, may improve outcomes. abstract_id: PUBMED:27817993 Patient-Reported Outcomes After Revision of Metal-on-Metal Total Bearings in Total Hip Arthroplasty. Background: Failure of metal-on-metal (MOM) total hip arthroplasty (THA) bearings is often accompanied by an aggressive local reaction associated with destruction of bone, muscle, and other soft tissues around the hip. 
Little is known about whether patient-reported physical and mental function following revision THA in MOM patients is compromised by this soft tissue damage, and whether revision of MOM THA is comparable with revision of hard-on-soft bearings such as metal-on-polyethylene (MOP). Methods: We identified 75 first-time MOM THA revisions and compared them with 104 first-time MOP revisions. Using prospective patient-reported measures via the Veterans RAND-12, we compared Physical Component Score and Mental Component Score function at preoperative baseline and postoperative follow-up between revision MOM THA and revision MOP THA. Results: Physical Component Score did not vary between the groups preoperatively and at 1 month, 3 months, and 1 year postoperatively. Mental Component Score preoperatively and 1 and 3 months postoperatively were lower in patients in the MOM cohort compared with patients with MOP revisions (baseline: 43.7 vs 51.3, P < .001; 1 month: 44.9 vs 53.3, P < .001; 3 months: 46.0 vs 52.3, P = .016). However, by 1 year, MCS scores were not significantly different between the revision cohorts. Conclusion: Postrevision physical function in revised MOM THA patients does not differ significantly from the outcomes of revised MOP THA. Mental function is markedly lower in MOM patients at baseline and early in the postoperative period, but does not differ from MOP patients at 1 year after revision. This information should be useful to surgeons and physicians facing MOM THA revision. abstract_id: PUBMED:36362734 Minimum 10-Year Results of Modular Metal-On-Metal Total Hip Arthroplasty. Background: This study aimed to assess the long-term outcomes (a minimum of 10 years) of total hip arthroplasty with a metal-on-metal acetabular prosthesis. Methods: Eighty-nine primary total hip arthroplasties (82 patients) were performed using a Pinnacle modular metal-on-metal acetabular prosthesis. Clinical hip function outcomes were evaluated using the Japanese Orthopaedic Association hip score preoperatively and at the final follow-up. Radiological analysis was performed at the final follow-up and magnetic resonance imaging in all hips postoperatively. Results: Out of 82 patients, 17 were excluded who were followed up for <10 years. Of the remaining 65 patients (70 hips), 19 (20 hips) developed pseudotumors during 2-10 years postoperatively. After 10 and 13 years, the survival rates of revision endpoint were 93.6% and 90.4%, respectively. Clinical hip function outcomes had improved significantly at the final follow-up. In the radiological analysis, the mean cup angle of inclination and mean ratio of femoral offset on the operated hip to the contralateral hip was highest in patients with revision surgery for adverse reactions to metal debris. Conclusions: This study showed a 29.0% prevalence of pseudotumors. Some cases required revisions even after 10 years following surgery. Regular clinical surveillance is recommended for the early detection of adverse reactions to metal debris. Answer: The outcomes after metal-on-metal (MoM) hip resurfacing have been a topic of interest due to the potential for severe complications associated with these implants. A cohort of 126 MoM hip resurfacing operations reviewed one year after surgery showed that overall examination was satisfactory with few complications and high functional levels were reported. The median Oxford Hip Score (OHS) was 15 and median UCLA Activity Score was 7 (active).
However, for 25% of the patients, the outcome was poor with persistent pain, reduced hip flexion, decreased strength, restricted walking, and functional limitations. One explanation for suboptimal recovery may be the current rehabilitation, which was originally developed after total hip arthroplasty. It is suggested that rehabilitation tailored to hip resurfacing, paced for this active population and progressed to higher demand activities, may improve outcomes (PUBMED:18373996). Furthermore, the long-term outcomes of MoM hip resurfacing have been evaluated, with studies reporting that two MoM hip resurfacing designs, the Birmingham Hip Resurfacing (BHR) and Conserve Plus, have outcomes reported at a minimum of ten years. The BHR was the only device with outcomes reported at a minimum of ten years by both designing and independent surgeons, with overall survival of up to 95.8% at 15 years and 87.1-94.5% at ten years. Implant survival was influenced by pre-operative diagnosis, gender, and femoral component head size. It is concluded that there remains a role for MoM hip resurfacing in young active male patients with primary osteoarthritis, provided the surgeon has sufficient experience, the implant has an established record, and the patient is aware of the potential risks associated with MoM bearings and hip resurfacing (PUBMED:25708400). In summary, while MoM hip resurfacing can lead to satisfactory outcomes, there is a subset of patients who experience poor function post-surgery. Tailored rehabilitation programs and careful patient and implant selection may improve functional outcomes. Regular clinical surveillance is also recommended for the early detection of adverse reactions to metal debris (PUBMED:36362734).
Instruction: Cytoreductive surgery and HIPEC in treatment of colorectal peritoneal carcinomatosis: experiment or standard care? Abstracts: abstract_id: PUBMED:30948870 Cytoreductive Surgery in Combination with HIPEC in the Treatment of Peritoneal Sarcomatosis. Cytoreductive surgery (CRS) with hyperthermic intraperitoneal chemotherapy (HIPEC) is an effective treatment for peritoneal carcinomatosis, but it has been debated for peritoneal sarcomatosis. The purpose of the study is to present the perioperative and long-term results of CRS and hyperthermic intraoperative intraperitoneal chemotherapy in patients with peritoneal sarcomatosis. Retrospective study in a prospectively maintained database of 20 patients who underwent 29 CRS + HIPEC for peritoneal sarcomatosis. Clinical and histopathologic variables were correlated to survival. Complete cytoreduction was possible in 86.2% of the cases. The hospital mortality and morbidity rates were 0 and 20.7%, respectively. The median follow-up was 26 months, and recurrence was recorded in 20 cases (69%). The median and 5-year survival was 55 ± 13 (34-58) months and 43%, respectively. Prior surgical score (PSS) was the single variable related to survival (p = 0.018). The histologic subtype of the tumor was related to recurrence (p < 0.001). CRS and HIPEC in peritoneal sarcomatosis may offer a survival benefit in selected patients with low hospital mortality. The variety of histologic types of sarcomatosis has not made possible the identification of subgroups of patients that may be offered significant benefit by CRS and HIPEC. Further studies are required. abstract_id: PUBMED:27065711 Preoperative Preparation and Patient Selection for Cytoreductive Surgery and HIPEC. Peritoneal dissemination is a significant variable affecting long term survival of abdominal cancer patients. A generally accepted clinical point of view is that peritoneal dissemination is tantamount to distant organ metastases. This implies it to be a terminal condition. Current practice dictates that if peritoneal dissemination is observed intraoperatively, the curative therapeutic options are deferred and comprehensive systemic chemotherapy remains the only option with a dismal prognosis. The past few years have generated a lot of interest in management of peritoneal carcinomatosis. Prof Paul Sugarbaker has researched, validated and fine-tuned the concept of cytoreductive surgery with peritonectomy procedure (Sugarbaker technique) and perioperative chemotherapy as HIPEC and EPIC. Recognition of a HIPEC centre is based on an infrastructure equipped with basic knowledge of the tumor biology, oncosurgical techniques, technical knowhow for HIPEC administration, intensive care unit etc. There are some aspects which need to be accorded special consideration. Comprehensive therapy combining cytoreductive surgery (CRS) and hyperthermic intraperitoneal chemotherapy (HIPEC) is initiated with exploration and cytoreductive surgery, including visceral resections and the peritonectomy procedure, which, when achieved optimally, result in complete, visible resection of all cancer within the abdomen and pelvis. Subsequent to CRS, HIPEC forms an integral part of the surgical procedure. This approach involves conceptual changes in both the route and timing of chemotherapy administration. Patient selection is of utmost importance. The greatest impediment to lasting benefits from intraperitoneal chemotherapy remains improper patient selection.
Currently, there are four important clinical assessments of peritoneal metastasis that need to be used to select patients, i.e., histopathological type of tumour, radiological distribution of disease, peritoneal cancer index and completeness of cytoreduction. Patients undergoing HIPEC surgery face the usual physiological insults of a major surgery in addition to the thermal stress secondary to intraperitoneal administration of a heated chemotherapy agent. A team approach of everyone involved in care of these patients is known to improve patient outcomes. It has also been observed that with the necessary preoperative and perioperative steps, the morbidity and mortality for this treatment can be brought down to levels comparable with other major abdominal surgeries. abstract_id: PUBMED:35021837 Peritoneal surface malignancy spread reoperations after cytoreductive surgery + HIPEC. Introduction: Peritoneal malignancies (PM) are observed in about 10-30% of patients suffering from gastrointestinal malignant diseases, either in connection with the primary surgical management or as metachronous metastases due to cancer recurrence. Methods: In the 1980s a new method of cytoreductive surgery (CRS) + HIPEC (hyperthermic intraperitoneal chemotherapy) was introduced. Today, we consider this method to be the gold standard for treatment of pseudomyxoma peritonei and peritoneal mesothelioma. The method increases overall survival (OS) of patients diagnosed with colorectal cancer, primary peritoneal and ovarian cancers. However, the disease recurs after this demanding treatment in a certain group of patients: approximately 25-44% of patients treated for pseudomyxoma peritonei, and 40% and up to 82% of those treated for mesothelioma and colorectal cancer, respectively. Based on literary data (PubMed-Medline, last 5 years) and our own experience we present the basic factors associated with tumor recurrence, possibility of treatment using repeated CRS + HIPEC, data regarding second-look operations, and as applicable, prophylactic HIPEC. Conclusion: The method CRS + HIPEC provides an effective treatment of peritoneal carcinomatosis even in cases of recurrence. The second-look operations and prophylactic HIPEC may favorably affect the prognosis after primary R0 resections. abstract_id: PUBMED:37892663 Current Evidence for the Use of HIPEC and Cytoreductive Surgery in Gastric Cancer Metastatic to the Peritoneum. Gastric cancer (GCa) is an aggressive malignancy, representing the third leading cause of cancer mortality worldwide. The poor prognosis of GCa can be associated with the prevalence of peritoneal metastasis (PM). Current international and national GCa treatment guidelines only recommend palliative treatment options for patients with PM. Since the 1980s there have been multiple single arm trials, randomized controlled trials, and meta-analyses investigating the use of cytoreductive surgery (CRS) and hyperthermic intraperitoneal chemotherapy (HIPEC) in patients with advanced GCa, with or without PM. Results from these studies have been encouraging, with some large-volume centers even incorporating HIPEC into their treatment algorithms for patients with advanced GCa. Additionally, there are several ongoing trials that, when completed, will increase our understanding of the efficacy of CRS and HIPEC in patients with GCa metastatic to the peritoneum.
Herein we review the current evidence, ongoing trials, consensus guidelines, and future considerations regarding the use of CRS and HIPEC in patients suffering from GCa with PM. abstract_id: PUBMED:26504417 Cytoreductive surgery (SRC) and hyperthermic intraperitoneal chemotherapy (HIPEC) for treatment of peritoneal carcinomatosis: Our initial experience and technical details. Objective: The aim of this study is to present our initial experience in peritoneal carcinomatosis treatment and the technical details of cytoreductive surgery (CRS) and hyperthermic intraperitoneal chemotherapy (HIPEC) in the light of current literature. Material And Methods: Data of 27 consecutive patients who were treated with CRS and HIPEC for peritoneal carcinomatosis in Medical Park Samsun Hospital, between November 2012 and September 2014 were retrospectively reviewed. Treatment indication and management were evaluated at the multidisciplinary oncology council. All patients underwent CRS and HIPEC with the aim of complete cytoreduction. Patients with unresectable disease and/or palliative surgery were excluded from analysis. Perioperative complications were classified according to the Clavien-Dindo classification, and HIPEC-related side effects were identified using National Cancer Institute Common Terminology Criteria for Adverse Events (CTCAE) criteria. Demographic, clinical and histopathological data of the patients were analyzed. Results: The mean age was 54 (32-72). Nineteen patients were female. The origin of peritoneal carcinomatosis was colorectal cancer in 12 patients, ovarian cancer in 12 patients, gastric cancer in 2 patients and pseudomyxoma peritonei in 1 patient. The mean Peritoneal Carcinomatosis Index was 12 (3-32), with a mean operative time of 420 (300-660) minutes. Perioperative morbidity, HIPEC-related toxicity and perioperative mortality were observed in eight (30%), one (3.7%) and four patients (14.8%), respectively. During a mean follow up of 13 (1-22) months, overall and disease-free survival rates were 95.8% and 82.6%, respectively. Two patients with colorectal cancer (after 9 and 12 months) and one patient with ovarian cancer (after 11 months) had intra-abdominal recurrence. One patient with ovarian cancer had liver metastases 13 months after surgery, and underwent resection of segments 6-7. The remaining patients are being followed-up without any recurrence. Conclusion: Cytoreductive surgery and HIPEC have favorable results in the treatment of patients with peritoneal carcinomatosis. Compatible with the literature, surgical outcomes of the presented series are encouraging for this treatment modality that has been recently popularized in our country. Careful perioperative evaluation, proper patient selection and a multidisciplinary approach are essential for success in curative treatment of peritoneal carcinomatosis. abstract_id: PUBMED:29335392 Cytoreductive surgery and HIPEC in the treatment of peritoneal metastases of sarcomas and other rare malignancies. abstract_id: PUBMED:27065706 The Initial Indian Experience with Cytoreductive Surgery and HIPEC in the Treatment of Peritoneal Metastases. Worldwide, cytoreductive surgery (CRS) and hyperthermic intraperitoneal chemotherapy (HIPEC) has been used for nearly 3 decades to treat peritoneal metastases (PM), improve quality of life, and prolong survival substantially in selected patients.
In India, the use of the combined modality of treatment dates back a decade with majority of the efforts taking place within the last 5 years. The first PSOGI workshop (India) held in April 2015, at Bangalore, India offered an opportunity for Indian surgeons performing CRS and HIPEC to share their experience. To study the methodologies of CRS and HIPEC (hospital set up, equipment, training and surgical background) as well as the outcomes in terms of perioperative morbidity and mortality and short and long term survival of patients treated in India, Indian surgeons who had treated at least 10 patients with this combined modality were invited to present their experience. Data collection was retrospective. Analysis of the pooled data was carried out. Eight surgeons treated 384 patients with CRS and HIPEC over a period of 10 years. The commonest primary sites were ovary (as first line therapy n = 124), followed by appendix, including pseudomyxoma peritonei (n = 99), colorectum (n = 77), recurrent ovary (as second line therapy, n = 33), stomach (n = 15), primary peritoneal cancer (n = 10), peritoneal mesothelioma (n = 9) and rare tumors in 17 patients. The weighted mean PCI for all 384 patients was 18.25. 349/384 patients (90.88 %) had a complete cytoreduction (completeness of cytoreduction score of CC-0/1). Grade 3-5 complications developed in 108 patients (27.34 %) and 30 day mortality occurred in 28 (7.29 %) patients. This study showed that CRS and HIPEC can be performed with an acceptable morbidity and mortality in Indian patients. Most of the surgeons are on the learning curve and further improvement in these outcomes is expected over a period of time. Pooling of data related to both common and rare peritoneal cancers would be useful in knowing the disease behavior, response to treatment and outcomes in Indian patients. The 2015 PSOGI meeting provided a unique platform for data presentation with feedback from international experts in the field of peritoneal surface oncology. Future meetings are planned to expand the evaluation of Indian data and progress. abstract_id: PUBMED:27065709 The role of Cytoreductive Surgery and Hyperthermic Intraperitoneal Chemotherapy (HIPEC) in Ovarian Cancer: A Review. Ovarian cancer is one of the leading causes of cancer related deaths in women worldwide. It is usually diagnosed in an advanced stage (Stages III and IV) when peritoneal cancer spread has already occurred. The standard treatment comprises of surgery to remove all macroscopic disease followed by systemic chemotherapy. Despite all efforts, it recurs in over 75 % of the cases, most of these recurrences being confined to the peritoneal cavity. Recurrent ovarian cancer has a poor long term outcome and is generally treated with multiple lines of systemic chemotherapy and targeted therapy. The propensity of ovarian cancer to remain confined to the peritoneal cavity warrants an aggressive locoregional approach. The combined treatment comprising of cytoreductive surgery (CRS) that removes all macroscopic disease and HIPEC (Hyperthermic Intraperitoneal Chemotherapy) has been effective in providing long term survival in selected patients with peritoneal metastases of gastrointestinal origin. Intraperitoneal chemotherapy used as adjuvant therapy has shown a survival benefit in ovarian cancer. This has prompted the use of CRS and HIPEC in the management of ovarian cancer as a part of first line therapy and second line therapy for recurrent disease. 
This article reviews the current literature and evidence for the use of HIPEC in ovarian cancer. abstract_id: PUBMED:27347669 Cytoreductive surgery and hyperthermic intraperitoneal chemotherapy in the management of gastrointestinal cancers with peritoneal metastases: Progress toward a new standard of care. Peritoneal metastases from gastrointestinal cancer was, in the past, accepted as an inevitable component of the natural history of these diseases. It is a major cause of intestinal obstruction, fistula formation, and bowel perforation as the recurrent malignancy progresses to a terminal condition. Peritoneal metastases may be caused by full thickness penetration of the bowel wall by the primary cancer or by spilled cancer cells released into the peritoneal space by surgical trauma. Two new surgical technologies that have evolved to manage peritoneal metastases are cytoreductive surgery (CRS) and hyperthermic intraperitoneal chemotherapy (HIPEC). This combined treatment strategy uses peritonectomy procedures and visceral resections to reduce the disease in the abdomen and pelvis to a macroscopic volume. Then, HIPEC is used to preserve the complete cytoreduction by controlling the minimal residual disease. Since the extent of peritoneal metastases, as measured by the peritoneal cancer index (PCI), is crucial to a favorable outcome, prognostic indicators are used to select patients for treatment. The combined treatment may be used to prevent peritoneal metastases in gastrointestinal cancer patients having a resection of the primary malignancy. This is especially important in gastric cancer patients with serosal invasion. The combined treatment may be used synchronously with the primary cancer resection if peritoneal metastases are already apparent. The treatment is most frequently used with metachronous peritoneal metastases diagnosed in follow-up. Cure of peritoneal metastases is an option in selected patients and its knowledgeable use is progressing towards a new standard of care. abstract_id: PUBMED:27065713 Complications of Cytoreductive Surgery and HIPEC in the Treatment of Peritoneal Metastases. The combined treatment concept of cytoreductive surgery (CRS) and Hyperthermic intraperitoneal chemotherapy (HIPEC) has shown to be an efficient therapeutic option for selected patients with primary and secondary peritoneal carcinomatosis (PC). This strategy represents the standard of care for diseases like pseudomyxoma peritonei and peritoneal mesothelioma, and offers the best long-term results for PC from colorectal cancer. Despite these results, skepticism exists regarding this therapeutic approach partly because of its perceived high toxicity. In this article, we review the current evidence on complications that can occur after CRS and HIPEC and the risk factors associated with increased incidence of morbidity and mortality. Answer: Cytoreductive surgery (CRS) combined with hyperthermic intraperitoneal chemotherapy (HIPEC) has evolved as a treatment strategy for peritoneal carcinomatosis, including that originating from colorectal cancer. The evidence suggests that this combined treatment approach is moving towards a new standard of care rather than being considered experimental. CRS and HIPEC have been shown to be effective in providing long-term survival in selected patients with peritoneal metastases of gastrointestinal origin, including colorectal cancer (PUBMED:27347669). 
The treatment involves peritonectomy procedures and visceral resections to reduce the disease in the abdomen and pelvis to a macroscopic volume, followed by HIPEC to control minimal residual disease (PUBMED:27347669). This approach has become the standard of care for diseases like pseudomyxoma peritonei and peritoneal mesothelioma and offers the best long-term results for peritoneal carcinomatosis from colorectal cancer (PUBMED:27065713). The role of CRS and HIPEC in the management of peritoneal metastases from colorectal cancer is supported by the fact that it can lead to a survival benefit in selected patients, with acceptable morbidity and mortality rates comparable to other major abdominal surgeries (PUBMED:27065711). Moreover, the method has been shown to increase overall survival in patients diagnosed with colorectal cancer, and even in cases of recurrence, CRS and HIPEC provide an effective treatment option (PUBMED:35021837). Despite the positive outcomes, skepticism exists due to the perceived high toxicity of the treatment. However, with careful perioperative evaluation, proper patient selection, and a multidisciplinary approach, the surgical outcomes are encouraging (PUBMED:26504417). The greatest impediment to lasting benefits from intraperitoneal chemotherapy remains improper patient selection, emphasizing the importance of selecting appropriate candidates for this treatment (PUBMED:27065711). In conclusion, CRS and HIPEC are increasingly being recognized as a standard care option for the treatment of colorectal peritoneal carcinomatosis, provided patients are carefully selected and managed by experienced multidisciplinary teams. The approach is no longer considered merely experimental but is being integrated into treatment algorithms for peritoneal metastases from colorectal cancer, with ongoing research likely to further define its role (PUBMED:27347669; PUBMED:27065713).
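A recurring selection criterion in the abstracts above is the peritoneal cancer index (PCI), reported for example as a mean of 12 (range 3-32) in PUBMED:26504417 and a weighted mean of 18.25 in PUBMED:27065706. As a purely illustrative aside (not code from any of the cited studies), the sketch below shows how Sugarbaker's PCI is tallied; the example scores are hypothetical.

```python
# Illustrative sketch of Sugarbaker's peritoneal cancer index (PCI):
# 13 abdominopelvic regions are each assigned a lesion-size score
# (0 = no tumour ... 3 = tumour > 5 cm or confluent) and the scores are
# summed, giving a PCI between 0 and 39. Example values are hypothetical.

REGIONS = [
    "central", "right upper", "epigastrium", "left upper", "left flank",
    "left lower", "pelvis", "right lower", "right flank",
    "upper jejunum", "lower jejunum", "upper ileum", "lower ileum",
]

def peritoneal_cancer_index(lesion_scores):
    """Sum lesion-size scores (0-3) over the 13 regions."""
    total = 0
    for region in REGIONS:
        score = lesion_scores.get(region, 0)
        if not 0 <= score <= 3:
            raise ValueError(f"lesion-size score for '{region}' must be 0-3")
        total += score
    return total

# Hypothetical patient with disease concentrated in the pelvis and right lower quadrant.
example = {"pelvis": 3, "right lower": 2, "central": 1}
print(peritoneal_cancer_index(example))  # -> 6, on a possible range of 0-39
```

Higher totals indicate more extensive peritoneal disease, which is why several of the abstracts treat the PCI as a key element of patient selection for CRS and HIPEC.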
Instruction: Is referral of postsurgical colorectal cancer survivors to cardiac rehabilitation feasible and acceptable? Abstracts: abstract_id: PUBMED:26729381 Is referral of postsurgical colorectal cancer survivors to cardiac rehabilitation feasible and acceptable? A pragmatic pilot randomised controlled trial with embedded qualitative study. Objectives: (1) Assess whether cardiac rehabilitation (CR) is a feasible and acceptable model of rehabilitation for postsurgical colorectal cancer (CRC) survivors, (2) evaluate trial procedures. This article reports the results of the first objective. Design And Setting: A pragmatic pilot randomised controlled trial with embedded qualitative study was conducted in 3 UK hospitals with CR facilities. Descriptive statistics were used to summarise trial parameters indicative of intervention feasibility and acceptability. Interviews and focus groups were conducted and data analysed thematically. Participants: People with CRC were considered for inclusion in the trial if they were ≥ 18 years old, diagnosed with primary CRC and in the recovery period postsurgery (they could still be receiving adjuvant therapy). 31% (n=41) of all eligible CRC survivors consented to participate in the trial. 22 of these CRC survivors, and 8 people with cardiovascular disease (CVD), 5 CRC nurses and 6 CR clinicians participated in the qualitative study. Intervention: Referral of postsurgical CRC survivors to weekly CR exercise classes and information sessions. Classes included CRC survivors and people with CVD. CR nurses and physiotherapists were given training about cancer and exercise. Results: Barriers to CR were protracted recoveries from surgery, ongoing treatments and poor mobility. No adverse events were reported during the trial, suggesting that CR is safe. 62% of participants completed the intervention as per protocol and had high levels of attendance. 20 health professionals attended the cancer and exercise training course, rating it as excellent. Participants perceived that CR increased CRC survivors' confidence and motivation to exercise, and offered peer support. CR professionals were concerned about CR capacity to accommodate cancer survivors and their ability to provide psychosocial support to this group of patients. Conclusions: CR is feasible and acceptable for postsurgical CRC survivors. A large-scale effectiveness trial of the intervention should be conducted. Trial Registration Number: ISRCTN63510637. abstract_id: PUBMED:34333716 Needs for information about lifestyle and rehabilitation in long-term young adult cancer survivors. Background: Healthy lifestyle and rehabilitation may mitigate late effects after cancer treatment, but knowledge about lifestyle and rehabilitation information needs among long-term young adult cancer survivors (YACSs) (≥ 5 years from diagnosis) is limited. The present study aimed to examine such information needs among long-term YACSs, and identify characteristics of those with needs. Material And Methods: The Cancer Registry of Norway identified long-term YACSs diagnosed with breast cancer, colorectal cancer, non-Hodgkin lymphoma, leukemia, or malignant melanoma at the age of 19-39 years, between 1985 and 2009. Survivors were mailed a questionnaire, in which respondents reported their information needs on physical activity, diet, and rehabilitation services 5-30 years post-diagnosis. Descriptive statistics and logistic regression analyses were used to examine the prevalence of information needs and associated factors. 
Results: Of 1488 respondents (a response rate of 42%), 947 were included. Median age at diagnosis was 35 years (range 19-39) and median observation time since diagnosis was 14 years (range 5-30). In total, 41% reported a need for information about physical activity, 45% about diet, and 47% about rehabilitation services. Information needs were associated with higher treatment intensity, increasing number of late effects, and an unhealthy lifestyle. Conclusion: A large proportion of long-term YACSs report information needs regarding lifestyle and/or rehabilitation more than a decade beyond treatment. Assessments of such information needs should become a part of long-term care of these cancer survivors. abstract_id: PUBMED:38268281 The unmet rehabilitation needs of colorectal cancer survivors after surgery: A qualitative meta-synthesis. Aim: To systematically review and synthesize the findings of qualitative research on the unmet rehabilitation needs of colorectal cancer (CRC) survivors after surgery. Design: A qualitative meta-synthesis registered with PROSPERO (CRD42022368837). Methods: CNKI, Wanfang Data, PubMed, Scopus, Embase, Cochrane, Medline, PsychINFO and CINAHL were systematically searched for qualitative studies on the rehabilitation needs of CRC survivors after surgery from the inception of each database to September 2022. Results: A total of 917 relevant reports were initially collected and 14 studies were finally included. A total of 49 needs were extracted and divided into 15 categories in 6 integrated findings: (1) the need to adopt healthy eating habits; (2) the need for exercise motivation and exercise guidance; (3) the conflicting needs to return to work; (4) unaddressed physiological needs; (5) spiritual needs; (6) the need for multi-dimensional social support. Patient Or Public Contribution: Not applicable. abstract_id: PUBMED:27965868 The feasibility and acceptability of trial procedures for a pragmatic randomised controlled trial of a structured physical activity intervention for people diagnosed with colorectal cancer: findings from a pilot trial of cardiac rehabilitation versus usual care (no rehabilitation) with an embedded qualitative study. Background: Pilot and feasibility work is conducted to evaluate the operational feasibility and acceptability of the intervention itself and the feasibility and acceptability of a trial's protocol design. The Cardiac Rehabilitation In Bowel cancer (CRIB) study was a pilot randomised controlled trial (RCT) of cardiac rehabilitation versus usual care (no rehabilitation) for post-surgical colorectal cancer patients. A key aim of the pilot trial was to test the feasibility and acceptability of the protocol design. Methods: A pilot RCT with embedded qualitative work was conducted in three sites. Participants were randomly allocated to cardiac rehabilitation or usual care groups. Outcomes used to assess the feasibility and acceptability of key trial parameters were screening, eligibility, consent, randomisation, adverse events, retention, completion, missing data, and intervention adherence rates. Colorectal patients' and clinicians' perceptions and experiences of the main trial procedures were explored by interview. Results: Quantitative study. Three sites were involved. Screening, eligibility, consent, and retention rates were 79 % (156/198), 67 % (133/198), 31 % (41/133), and 93 % (38/41), respectively.
Questionnaire completion rates were 97.5 % (40/41), 75 % (31/41), and 61 % (25/41) at baseline, follow-up 1, and follow-up 2, respectively. Sixty-nine percent (40) of accelerometer datasets were collected from participants; 31 % (20) were removed for not meeting wear-time validation. Qualitative study: Thirty-eight patients and eight clinicians participated. Key themes were benefits for people with colorectal cancer attending cardiac rehabilitation, barriers for people with colorectal cancer attending cardiac rehabilitation, generic versus disease-specific rehabilitation, key concerns about including people with cancer in cardiac rehabilitation, and barriers to involvement in a study about cardiac rehabilitation. Conclusions: The study highlights where threats to internal and external validity are likely to arise in any future studies of similar structured physical activity interventions for colorectal cancer patients using similar methods being conducted in similar contexts. This study shows that there is likely to be potential recruitment bias and potential imprecision due to sub-optimal completion of outcome measures, missing data, and sub-optimal intervention adherence. Hence, strategies to manage these risks should be developed to stack the odds in favour of conducting successful future trials. Trial Registration: ISRCTN63510637. abstract_id: PUBMED:17177905 How acceptable is a referral and telephone-based outcall programme for men diagnosed with cancer? A feasibility study. The objective of this study was to determine the feasibility and acceptability of a referral and outcall programme from a telephone-based information and support service, for men newly diagnosed with colorectal or prostate cancer. A block randomized controlled trial was performed involving 100 newly diagnosed colorectal and prostate cancer patients. Patients were referred to the Cancer Information Support Service (CISS) through clinicians at diagnosis. Clinicians were randomized into one of three conditions. Active referral 1: specialist referral with four CISS outcalls: (1) ≤ 1 week of diagnosis; (2) at 6 weeks; (3) 3 months; and (4) 6 months post diagnosis. Active referral 2: specialist referral with one CISS outcall ≤ 1 week of diagnosis. Passive referral: specialist recommended patient contacts CISS, but contact at the patient's initiative. Patients completed research questionnaires at study entry (before CISS contact), then 4 and 7 months post diagnosis. Overall, 96% of participants reported a positive experience with the referral process; 87% reported they were not concerned about receiving the calls; and 84% indicated the timing of the calls was helpful. In conclusion, the referral and outcall programme was achievable and acceptable for men newly diagnosed with colorectal or prostate cancer.
Bladder dysfunction can occur due to surgical injury or radiation therapy. Many patients also experience sexual dysfunction. Standard therapies can be used to manage many of these symptoms and conditions. Patients with colostomy typically experience decreased quality of life. Referral to an ostomy therapist or wound, ostomy, and continence nurse may be beneficial. Pelvic radiation therapy can reduce bone mineral density (BMD) and increase fracture risk, so patients with rectal cancer who have received such therapy should undergo BMD monitoring. CRC survivors should undergo surveillance for recurrent CRC with interval colonoscopy, measurement of carcinoembryonic antigen levels, and computed tomography scan of the chest, abdomen, and/or pelvis. The intervals for and duration of surveillance depend on the cancer stage. Family physicians can help support CRC survivors through survivorship programs, shared care models, multidisciplinary interventions, and community partnerships. abstract_id: PUBMED:26842528 Utilization of supportive care by survivors of colorectal cancer: results from the PROFILES registry. Purpose: In an equitable healthcare system, healthcare utilization should be predominantly explained by patient-perceived need and clinical need factors. This study aims to analyze whether predisposing, enabling, and need factors are associated with the utilization of supportive care (i.e., dietary care, oncological nursing care, physical therapy, psychological care, or participation in a rehabilitation program consisting of an exercise component and a psycho-educational component) among survivors of colorectal cancer in the Netherlands. Methods: Cross-sectional data of 3957 survivors of colorectal cancer (1-11 years after diagnosis) were used. Clinical data from the Eindhoven Cancer Registry were linked to questionnaire data from the PROFILES registry. Regression analyses were used to examine which predisposing, enabling, and need factors were associated with self-reported utilization of supportive care. Results: Utilization of supportive care was primarily associated with younger age, patient-perceived need (i.e., lower physical health, anxious mood, depressive mood, and fatigue), and clinical need (i.e., tumor stage, radiotherapy, chemotherapy, comorbidity, having a stoma and lower BMI) factors. Conclusions: In the Netherlands, utilization of supportive care by survivors of colorectal cancer is primarily associated with younger age, patient-perceived need, and clinical need factors. Apart from the association with younger age, the utilization of supportive care services seems to be quite equitable. Further research is needed to determine whether there is indeed inequity in the provision of supportive care to older survivors, or whether older survivors are less in need of supportive care. abstract_id: PUBMED:37702764 A quality improvement project to optimize access to psychosocial care for cancer survivors who experience fear of recurrence. Background: The prevalence of moderate to high levels of fear of cancer recurrence (FCR) in cancer survivors may vary from 22% to 87%, although most are not usually referred to psychosocial support. The After Cancer Treatment Transition (ACTT) clinic in Women's College Hospital (Toronto) provides follow-up care to cancer survivors but in a sample of 2893 patients seen April 2019 to March 2022, only 1.5% were referred to a social worker for psychosocial needs. A single-question screening tool is currently available to screen for FCR. 
Objective: To evaluate the use of the single-question screening tool for FCR among cancer survivors and its impact on social work referrals. Results: Between July and October 2022, 788 patients were seen in the ACTT clinic. Generally, most patients in ACTT are breast cancer survivors (75%), and the remaining survivors are a mix of other cancer types (colorectal cancer, ovarian cancer, thyroid cancer, melanoma). Three hundred thirty (41.9%) ACTT patients completed the single-question screening tool for FCR. Most screened patients were female (96%), the average age was 60 years, and most were diagnosed with breast cancer (90%). Among screened patients, 37 (11%) indicated a moderately severe to high level of FCR and efforts were made to refer these 37 patients to a social worker. In the end, 22 (59.5%) patients with moderately severe/high levels of FCR were offered and accepted referral to a social worker. In comparison to the 1.5% referred to social work (among 2893 patients) prior to FCR screening, referrals increased to 6.7% (among 330 screened). Conclusion: Use of a single-question FCR screening tool improved identifying cancer survivors in need of psychosocial support and improved access to a social worker. abstract_id: PUBMED:24004730 The return to work of cancer survivors: the experience in Spain. Background: Because of improvements in diagnosis and treatment, cancer survival has increased in recent decades. Cancer survivors may experience problems when returning to their normal lives, particularly with returning to work. In the past few decades, a number of studies examining the work-return process of cancer survivors have been conducted in a select group of countries, but comparable studies are lacking in other countries, including Spain. Objective: To review the research literature on cancer and work in Spain and the design and methodology of the interventions studied. Methods: A systematic literature review was performed on return to work and employment in Spanish cancer survivors with the databases PubMed, Medline and Spanish database IME. Results: Eight studies were reviewed and analyzed. The studies had a mean sample size of 115 participants. Two of the studies predominantly focused on mixed cancer populations; 3 of the studies focused on breast cancer patients, 1 study focused on head and neck cancer patients, 1 study focused on colorectal cancer patients and 1 study focused on patients with lymphoma. Conclusions: Further research in Spain and other countries is necessary, and efforts should be made to support the re-employment of cancer patients. abstract_id: PUBMED:34787775 The prevalence and risk of symptom and function clusters in colorectal cancer survivors. Purpose: Our purpose was to describe the prevalence and predictors of symptom and function clusters in a diverse cohort of colorectal cancer survivors. Methods: We used data from a cohort of 909 adult colorectal cancer survivors. Participants were surveyed at a median of 9 months after diagnosis to ascertain the co-occurrence of eight distinct symptom and functional domains. We used factor analysis to identify co-occurring domains and latent profile analysis (LPA) to identify subgroups of survivors with different symptom and function clusters. Multinomial logistic regression models were used to identify risk/protective factors. Results: Factor analysis demonstrated a single underlying factor structure that included all eight health domains with depression and anxiety highly correlated (r = 0.87). 
The LPA identified three symptom and function clusters, with 30% of survivors in the low health-related quality of life (HRQOL) profile having the highest symptom burden and lowest functioning. In multivariable models, survivors more likely to be in the low HRQOL profile included being non-White, female, those with a history of cardiac or mental health conditions, and chemotherapy recipients. Survivors less likely to be in the low HRQOL profile included those with older age, greater financial well-being, and more spirituality. Conclusion: Nearly one-third of colorectal cancer survivors experienced a cluster of physical and psychosocial symptoms that co-occur with clinically relevant deficits in function. Implications For Cancer Survivors: Improving the identification of risk factors for having the highest symptom and lowest function profile can inform the development of clinical interventions to mitigate their adverse impact on cancer survivors' HRQOL. Answer: Yes, referral of postsurgical colorectal cancer survivors to cardiac rehabilitation (CR) is both feasible and acceptable. A pragmatic pilot randomised controlled trial with embedded qualitative study conducted in 3 UK hospitals with CR facilities found that no adverse events were reported during the trial, suggesting that CR is safe for this population. The trial showed that 62% of participants completed the intervention as per protocol and had high levels of attendance. Health professionals who attended the cancer and exercise training course rated it as excellent. Participants perceived that CR increased colorectal cancer survivors' confidence and motivation to exercise, and offered peer support. However, CR professionals expressed concerns about CR capacity to accommodate cancer survivors and their ability to provide psychosocial support to this group of patients. The study concluded that a large-scale effectiveness trial of the intervention should be conducted (PUBMED:26729381). Additionally, another study, the Cardiac Rehabilitation In Bowel cancer (CRIB) study, which was a pilot randomised controlled trial of cardiac rehabilitation versus usual care for post-surgical colorectal cancer patients, aimed to test the feasibility and acceptability of the protocol design. The study found that while there were potential recruitment biases and issues with completion of outcome measures, missing data, and sub-optimal intervention adherence, strategies could be developed to manage these risks in future trials. This suggests that with proper planning, CR can be a feasible and acceptable intervention for colorectal cancer survivors (PUBMED:27965868).
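Before moving to the next question, one methodological note on the symptom-cluster study above (PUBMED:34787775): its pipeline of factor analysis, latent profile analysis and multinomial logistic regression can be sketched with standard tooling. The snippet below is a minimal illustration on synthetic data only; it is not the authors' code, and a Gaussian mixture model is used here as a stand-in for latent profile analysis.

```python
# Minimal sketch of the analysis pipeline described above, run on synthetic data:
# (1) factor analysis over eight symptom/function domains, (2) clustering into
# three latent profiles (Gaussian mixture as a stand-in for LPA), and
# (3) multinomial logistic regression of profile membership on covariates.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, domains = 900, 8                       # synthetic "survivors" and health domains
latent = rng.normal(size=(n, 1))          # one shared underlying factor
X = latent + 0.5 * rng.normal(size=(n, domains))

fa = FactorAnalysis(n_components=1).fit(X)            # single-factor structure
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
profile = gmm.predict(X)                              # three symptom/function profiles

covariates = rng.normal(size=(n, 4))      # stand-ins for age, sex, comorbidity, treatment
clf = LogisticRegression(max_iter=1000).fit(covariates, profile)  # multinomial for 3 classes

print("profile sizes:", np.bincount(profile))
print("loadings shape:", fa.components_.shape, "| coefficients shape:", clf.coef_.shape)
```

The real study would of course use observed domain scores and clinical covariates rather than simulated arrays; the sketch only shows how the three reported modelling steps fit together.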
Instruction: Can the vascular specialist improve patient awareness about advanced directives? Abstracts: abstract_id: PUBMED:27102851 Can the vascular specialist improve patient awareness about advanced directives? Introduction: In France, the Leonetti law, adopted on April 22, 2005, stipulates the regulations concerning advanced directives. This is a patient's right that is not well known and rarely applied. In 2015, a new law project was thus presented in which the French National Authority for Health recommended that doctors, including all specialists, bring up the subject, especially during consultation. Objectives: To evaluate the vascular specialist's ability to mention the topic of advanced directives during consultations. Method: A single-centre, non-interventional prospective study conducted with the help of patients who consulted a private practitioner vascular specialist: recurrent patients regularly consulting a private practitioner vascular specialist were included. First-time consultants, minors and patients to whom it was not adapted to speak about the subject were not included. Results: Between July 27 and September 23, 2015, 159 consecutive patients were examined. Fifty-five first-time consultants and four patients for whom the interview was unsuitable were excluded. In all, 100 patients were questioned. None of them refused to talk about the subject. Women made up a majority of the population (63 %) with an average age of 67 years (23-97). The principal diagnostics were common to vascular medicine consultations: deep vein thrombosis (20 %), peripheral arterial disease (15 %), varicose veins (11 %), lymphedema (11 %) and leg ulcers (9 %). Thirteen percent of the people had a history of cancer. Half of the patients had had follow-ups for over 10 years. The average time devoted to discussing the topic was 12 minutes (5-40). Only 22 % of the patients declared having been familiar with advance directives. Once informed, however, 78 % chose to write up an adapted form: 36 % with the help of their doctor and 42 % with a doctor and a relative. Seventy-three percent of the consultants thought that talking about the advance directives would reinforce the confidence link between the doctor and the patient. Conclusion: In private practice vascular medicine, it seems possible to mention the subject of advance directives, as recommended by the French authorities. The procedure is well perceived by the patients. It nevertheless implies allotting a non-negligible amount of additional consultation time. The reinforcement of the doctor-patient relationship suggested by these results should be confirmed by a qualitative study made up of meetings. abstract_id: PUBMED:32189370 Experiences of Post Anaesthetic Unit Recovery Nurse facilitating Advanced Directives in the immediate postanaesthetic period: A phenomenological study. Aims: The aims of this study were to develop an understanding of the lived experience of the Post Anaesthetic Unit Recovery Nurse facilitating Advanced Directives and implications for patient-centred care. Design: Interpretive phenomenological analysis. Methods: Homogenized purposive sampling of six Registered Nurses using in-depth semi-structured interviews. Interviews were conducted between June-July 2018. Analysis was performed using interpretive phenomenology analysis. Results: Post Anaesthetic Recovery Nurses experienced a 'Grey Zone' when facilitating Advanced Directives postanaesthetic.
The 'Grey Zone' is defined through four themes: the 'Trigger' of the anaesthetic, characterized by physiological instability; 'Confusion and Frustration', featuring balancing of roles as a clinician and advocate during patient decline; 'Consistent Paternalism' by medical staff in the consideration of Advanced Directives; and 'Disempowerment', where nurses faced issues of advocacy, personal distress, a lack of literature or protocols, and handover of information. Conclusion: The lived experience of nurses facilitating Advanced Directives postanaesthetic may be distressing. Further research is required to understand the implications of Advanced Directives following an anaesthetic. Education and development of protocols are recommended to optimize patient-centred care. Impact: Post Anaesthetic Unit Recovery Nurses experienced a 'Grey Zone' when facilitating Advanced Directives, defined through four themes. Advanced Directives may appear to be clear; however, the anaesthetic may trigger physiological instability leading to confusion and frustration in interpretation and application of Advanced Directives. Confusion and Frustration were experienced while the attitudes of Consistent Paternalism were encountered when advocating for patient wishes, resulting in Disempowerment. Post Anaesthetic Unit Recovery Nurses may become empowered through acknowledging and describing the 'Grey Zone'. abstract_id: PUBMED:17628262 Patient awareness of risk factors for peripheral vascular disease. Our aim was to document patient awareness of the risk factors that predispose to peripheral vascular disease (PVD) before and after consultation with a vascular specialist. Two cohorts of patients attending vascular outpatient clinics were interviewed before or after consultation with a vascular surgeon. They were interviewed according to an agreed protocol to determine if they knew that they had PVD and if they knew what the risk factors for vascular disease were. They were specifically asked about smoking, diabetes, hypertension, and hypercholesterolemia. Of 102 patients recruited, 52 were interviewed prior to specialist vascular assessment and 50 after such an assessment. Seventy-nine percent of patients knew that they had PVD before assessment and 96% knew that they had PVD after specialist assessment (P = 0.009). Overall, 60% of patients acknowledged that they received advice about vascular risk factors and 33% recalled receiving such advice from their general practitioner. There was a statistically significant improvement in patient awareness of smoking (73-90%, P = 0.028) and diabetes (23-66%, P = 0.001) as vascular risk factors after specialist consultation. There was no improvement with regard to hypertension and hypercholesterolemia. Identifying and modifying risk factors is an essential part of the treatment of patients with PVD. This study demonstrates that patient awareness of vascular risk factors is generally low and further work is required to establish means for vascular surgical units to improve education for patients with PVD.
The objective of this study was to investigate nurses' and physicians' representations of AD. A questionnaire was sent to hospitals, public health facilities and private practitioners during February 2012. We collected responses from 42/251 physicians (17 %) and 80/198 nurses (40 %). Sixty percent of participants reported that they were not familiar with the legislative framework for AD. For physicians, the main barriers were patient cognitive impairment (P = 0.004) and lack of information on the clinical situation (P = 0.004). For nurses, difficulties concerned end-of-life and prognosis discussions (P = 0.002), evolution of the clinical situation since AD documentation (P = 0.008), the time frame for AD application (P < 0.001) and the fact that the final decision is made by the physician alone (P = 0.015). AD should be part of good medical practice, and the literature has highlighted the benefit of AD on patients' quality of life. End-of-life discussion therefore requires dedicated time and specific training for physicians and nurses to improve the rate of patients with AD. abstract_id: PUBMED:22864298 Awareness of advance directives among Korean nurses. Awareness of advance directives (AD) among 293 nurses working in acute hospitals was evaluated through a structured questionnaire. Nurses were poorly acquainted with AD. Education about AD and related concepts is required in college and field experience to improve practice and communication with patients at the end of life. abstract_id: PUBMED:19069743 At the heart of advance directives: integrity Beyond recognition of patient autonomy, isn't the main stake of advance directives the patient's and doctor's integrity? This article tests this hypothesis by analysing the multiple dimensions of integrity: physical integrity, existential coherence, and moral integrity, enlightened by the philosophical approaches of Canguilhem, Bruaire, Ey, Gauchet, Taylor and Ricoeur. abstract_id: PUBMED:19069731 Advance directives in France: legal aspects France has had advance directives since the Leonetti law of 2005. Such advance directives help physicians decide on any possible withdrawing or withholding of treatment. This law authorizes neither euthanasia nor assisted suicide. abstract_id: PUBMED:18376523 Advance directives in palliative care units Patients with advanced illness and their caregivers request information at all stages of the disease process. They experience fear of pain, of indignity, of abandonment and of the unknown. Open and direct discussions can ease many of these fears. Advance directives may be useful tools to improve communication and satisfaction with decision-making at the end of life. Each patient hospitalized in a palliative care unit should be informed about advance directives and be encouraged to complete them. However, it is of importance to respect patients' pace and to accept that some may not want to be involved in such a process. abstract_id: PUBMED:30141525 Awareness, approval and completion of advance directives in older adults in Switzerland. Background: Advance directives enable people to describe their preferences for medical treatment (living will) and/or to appoint a healthcare proxy who may decide on their behalf should they lose decision-making capacity. Advance directives are potentially important in determining the course of end-of-life care, as deaths are frequently preceded by end-of-life treatment decisions, which often require someone to make decisions on the patient's behalf.
Switzerland introduced legally binding advance directives through its new child and adult protection law of 2013. But there is still no comprehensive evidence on older persons' awareness, attitudes and behaviours with regard to advance directives in Switzerland. Aim And Method: Our study aimed to assess levels of awareness, approval and completion of advance directives, as well as their respective associations with sociodemographic characteristics in the Swiss population aged 55 and older. Our study was cross-sectional and used data from the Survey of Health, Ageing and Retirement in Europe (SHARE), which included a special module on end-of-life issues in wave 6 (2015) in Switzerland (n = 2085). Results: Two years after the introduction of advance directives in Switzerland, 78.7% of adults aged 55 years and older had heard of them prior to the survey and 24% reported that they had completed one. Awareness of advance directives was higher in the German-speaking part of Switzerland (91%) than in the Italian- (57.1%) and French-speaking (43.3%) regions (p < 0.001). Advance directive completion also differed significantly between the German- (28.7%), French- (10.3%) and Italian-speaking (17.9%) regions of Switzerland (p < 0.001). Overall, 76.7% of Swiss adults aged 55 and older generally approved of advance directives, i.e., they either reported having already completed one or were planning to do so in the future. Of those who had not yet completed an advance directive, 32.9% believed that it was still "too early" for them to do so and 30.1% believed that they would not need one. Levels of awareness, approval and completion of advance directives also varied significantly by sex, age, education level and household composition. Discussion: Our results show some potential for improvement in levels of advance directive awareness and, especially, completion among older adults, notably in the French- and Italian-speaking Switzerland. In view of the generally high levels of approval of advance directives, our findings point to important barriers to their completion by older persons that should be addressed by policy makers in order to ensure an effective translation of individual intentions to complete an advance directive sometime in the future into concrete and timely actions toward this end.
Moreover, patient awareness of risk factors for peripheral vascular disease (PVD) improved after consultation with a vascular specialist, demonstrating that specialists can effectively educate patients on important health-related topics (PUBMED:17628262). While this study focused on PVD risk factors, the improvement in patient awareness post-consultation suggests that vascular specialists could similarly enhance awareness of advance directives. However, it is important to note that facilitating discussions about advance directives can be complex and may involve overcoming barriers such as lack of familiarity with the legislative framework, patient cognitive impairment, and difficulties in discussing end-of-life and prognosis (PUBMED:24077035). Additionally, nurses' experiences in facilitating advance directives post-anesthesia revealed challenges such as confusion, frustration, and disempowerment, indicating the need for education and development of protocols to optimize patient-centered care (PUBMED:32189370). In conclusion, vascular specialists have the potential to improve patient awareness about advance directives, but doing so may require dedicated time, specific training, and the development of supportive protocols and educational materials.
Instruction: Are changes in blood-ethanol concentration during storage analytically significant? Abstracts: abstract_id: PUBMED:17727317 Are changes in blood-ethanol concentration during storage analytically significant? Importance of method imprecision. Background: Knowledge about the stability of drugs and metabolites in biological fluids is important information when the analytical results are evaluated and interpreted. This study examines changes in blood-ethanol concentration (BEC) during the storage of specimens for up to 12 months at 4 degrees C. Methods: Venous blood samples were taken from drunk drivers in evacuated glass tubes containing sodium fluoride and potassium oxalate as chemical preservatives. The concentrations of ethanol in blood were determined in duplicate by headspace gas chromatography on arrival at the laboratory and again after storage in a refrigerator at 4 degrees C for up to 12 months. Results: The relationship between the standard deviation (SD) of the ethanol analysis and BEC, evaluated at concentration intervals of 0.2 mg/g, was defined by the linear regression equation SD = 0.00243 + 0.0104 BEC (r = 0.99). At a mean BEC of 1.64 mg/g, the SD was 0.019 mg/g, which corresponds to a coefficient of variation of 1.1%. The mean decrease in BEC (+/-SD) between first and second analysis was 0.105 +/- 0.0686 mg/g (t=19.3, d.f.=158, p<0.001) and the loss of alcohol was positively correlated with the duration (days) of storage (r=0.44, p<0.001), although with large inter-tube variations. A correlation also existed (r=0.23, p<0.01) between the loss of ethanol and the starting BEC. When blood samples (n=49) were opened 17 times to remove aliquots for analysis over 6.5 months, the BEC decreased by 0.217+/-0.054 mg/g compared to a fall of 0.101+/-0.076 mg/g in tubes kept unopened. None of the blood samples showed a significant increase in BEC after storage. Conclusions: To be considered analytically significant, the BEC had to decrease by 0.013 (2.6%), 0.028 (1.9%) and 0.045 mg/g (1.8%), respectively, at starting concentrations of 0.5, 1.5 and 2.5 mg/g. abstract_id: PUBMED:28856195 Decreases in blood ethanol concentrations during storage at 4 °C for 12 months were the same for specimens kept in glass or plastic tubes. Background: The stability of ethanol was investigated in blood specimens in glass or plastic evacuated tubes after storage in a refrigerator at 4 °C for up to 12 months. Methods: Sterile blood, from a local hospital, was divided into 50 mL portions and spiked with aqueous ethanol (10% w/v) to give target concentrations of 0.20, 1.00, 2.00 and 3.00 g/L. Ethanol was determined in blood by headspace gas chromatography (HS-GC) with an analytical imprecision of <3% (coefficient of variation, CV%). Aliquots of blood were re-analysed after 2, 7, 14, 28, 91, 182 and 364 days of storage at 4 °C. Results: The standard deviation (SD) of analysis by HS-GC was 0.0059 g/L at 0.20 g/L and 0.0342 g/L at 3.00 g/L, corresponding to CVs of 2.9% and 1.1%, respectively. The decreases in blood ethanol content were analytically significant after 14-28 days of storage for both glass and plastic tubes. The mean (lowest and highest) loss of ethanol after 12 months storage was 0.111 g/L (0.084-0.129 g/L) for glass tubes and 0.112 g/L (0.088-0.140 g/L) for plastic tubes. The corresponding percentage losses of ethanol were 43-45% at a starting concentration of 0.20 g/L and 3.9-4.1% at 3.00 g/L. Conclusion: The concentration of ethanol in blood gradually decreases during storage at 4 °C.
After 12 months storage the absolute decrease in concentration was ~0.11 g/L when the starting concentration ranged from 0.20 to 3.0 g/L. Decreases in ethanol content were the same for specimens kept in glass or plastic evacuated tubes. abstract_id: PUBMED:25672467 Comparison of blood ethanol stabilities in different storage periods. Introduction: Measurements of blood ethanol concentrations must be accurate and reliable. The most important factors affecting blood ethanol stability are temperature and storage time. In this study, we aimed to compare ethanol stability in plasma samples at -20 °C for the different storage periods. Materials And Methods: Blood samples were collected from intoxicated drivers (N=80) and initial plasma ethanol concentrations were measured immediately. Plasma samples were then stored at -20 °C and re-assessed after 2, 3, 4, or 5 months of storage. Differences between the initial and stored ethanol concentrations in each group (N=20) were analyzed using the Wilcoxon matched-pairs test. The deviation from the initial concentration was calculated and compared with Clinical Laboratory Improvement Amendments (CLIA'88) Proficiency Testing Limits. Relationships between the initial concentrations and deviations from initial concentrations were analyzed by Spearman's correlation analysis. For all statistical tests, differences with P values of less than 0.05 were considered statistically significant. Results: Statistically significant differences were observed between the initial and poststorage ethanol concentrations in the overall sample group (P < 0.001). However, for the individual storage duration groups, analytically significant decreases were observed only for samples stored for 5 months, for which deviations from the initial concentrations exceeded the allowable total error (TEa). Ethanol decreases in the other groups did not exceed the TEa. Conclusion: According to our results, plasma ethanol samples can be kept at -20 °C for up to 3-4 months until re-analysis. However, each laboratory should also establish its own work-flow rules and criterion for reliable ethanol measurement in forensic cases. abstract_id: PUBMED:36823469 Preanalytical Factors Influencing the Stability of Ethanol in Antemortem Blood and Urine Samples. The quantitative analysis of ethanol in blood and other biological specimens is a commonly requested service from forensic science and toxicology laboratories worldwide. The measured blood alcohol concentration (BAC) constitutes important evidence when alcohol-related crimes are investigated, such as drunken driving or drug-related sexual assault. This review article considers the importance of various preanalytical factors that might influence changes in the ethanol concentration in blood after collection and before analysis or reanalysis after various periods of storage. When blood samples were collected by venipuncture from living subjects in evacuated tubes containing sodium fluoride (NaF) preservative, there was no evidence that the BAC increased after collection. Most studies found that the BAC decreased after collection depending on storage conditions, such as time and temperature, and the amount of NaF preservative. After the storage of blood specimens in a refrigerator (4 °C) for up to 1-4 weeks, the changes in the BAC were not analytically significant. After storage for up to 12 months at 4 °C, under the same conditions, the BAC decreased on average by 0.01-0.02 g%.
The loss of ethanol does not appear to depend on the type of evacuated tubes used (glass or plastic), nominal volume (5 mL or 10 mL) or the amount of NaF preservative. Urine alcohol concentrations were also stable after various periods of storage, although in cases of glycosuria and urinary tract and/or Candida infections, the addition of NaF (1% w/v) was essential to prevent post-sampling synthesis of ethanol. abstract_id: PUBMED:36199211 Lack of fermentation in antemortem blood samples stored unstoppered in various locations. A common defense challenge when antemortem blood ethanol results are presented at trial is the assertion that ethanol was formed in the blood tube after the blood draw through fermentation of the blood glucose by Candida albicans (C. Albicans). In contrast, decades of research into the stability of ethanol in antemortem blood collected for forensic purposes have consistently shown that any analytically significant change in ethanol concentration is a decrease and initially, ethanol-negative blood remains ethanol-negative with storage. For there to be any possibility of fermentation to occur by C. Albicans in an antemortem blood sample there must be a plausible mechanism for introduction of C. Albicans into the blood. One mechanism proffered at trial is environmental contamination resulting from ambient air drawn into the evacuated blood collection tube. Blood was drawn from ethanol-free individuals into 6 and 10-ml gray-top Vacutainer® tubes containing sodium fluoride and 6-ml Vacutainer® tubes without a preservative. Following the blood draws, the tubes were stored unstoppered at room temperature for 24 or 48 h in various locations. Following unstoppered storage, the tubes were stoppered and stored refrigerated (~4°C), left at room temperature (~22°C), or placed in an oven (37°C). The refrigerated blood was analyzed for ethanol using headspace gas chromatography after both 5 days and 32 months. Unrefrigerated blood samples were analyzed after being stored at room temperature or in an oven for up to 30 days. Ethanol was not detected in any of the blood tubes after storage regardless of storage time, storage temperature, or preservative concentration. abstract_id: PUBMED:35088902 Testing antemortem blood samples for ethanol after four to seven years of refrigerated storage. The previous studies on ethanol stability in antemortem blood samples stored under various conditions have shown that ethanol concentration decreases with storage. The feasibility of measuring a forensically meaningful blood ethanol concentration in antemortem blood samples stored refrigerated (~4°C) from 4-7 years after the blood draw was evaluated in this research. All blood samples were collected into two 10-ml gray top Vacutainer® tubes as part of police driving under the influence investigations. In 29 cases, blood in the tube originally analyzed was retested after 5-7 years of refrigerated storage. Blood in 41 cases was analyzed in a previously unopened blood tube from the case after 4-7 years of refrigerated storage. The first analysis of blood in each case occurred within 35 days of the blood draw. Initial blood ethanol concentrations ranged from 0.094 g/dl to 0.301 g/dl. No samples showed an increase in ethanol concentration with storage that exceeded the uncertainty of the initial measurement. All decreases in ethanol concentration were less than 0.020 g/dl. 
The mean differences in ethanol concentration in previously opened and unopened tubes were -0.014 g/dl and -0.010 g/dl, respectively. The results of this research support that antemortem blood in previously opened and unopened refrigerated blood tubes can be analyzed for ethanol content more than 4 years and as much as 7 years after the blood draw and provide a result consistent with the amount of ethanol loss expected from a test done within 1-3 years of the blood draw. abstract_id: PUBMED:27981556 Ethanol concentration changes in blood samples during medium-term refrigerated storage. Objective: Stability of blood alcohol concentration (BAC) in laboratory samples is of great importance when it is necessary to perform repeated analyses. Materials And Methods: We have analyzed the stability of BAC in 50 samples, which were taken from apprehended drivers, kept at -18 °C, without preserving agents. Quantitative analyses were performed using headspace sampling gas chromatography (HS-GC) with flame ionization detection (FID). Samples were analyzed immediately after collection (C1), and after 60 (C60), 120 (C120) and 180 (C180) days. A group of 50 samples, which were kept closed for 180 days at -18 °C, was utilized as a control. Results: We found a significant decrease in BAC between C1 and C180 (mean difference = 0.224; SD = 0.144; t = 10.98; p < 0.001), and between C1 and C60, C60 and C120, C120 and C180. There was a significant positive correlation (r=0.8) between the starting concentration C1 and the value of BAC changes (ΔC). Linear regression analysis (R2 = 0.64) supports the validity of the proposed model of ΔC change with respect to the initial BAC. There were significant changes in ΔC between the two groups. Conclusions: These data underline the significance of air chamber percent (CA%) and ethanol evaporation due to ventilation between liquid and gas phase as a mechanism of ethanol decay. abstract_id: PUBMED:32692407 Testing Antemortem Blood for Ethanol Concentration from a Blood Kit in a Refrigerator Fire. The stability of ethanol in antemortem blood stored under various conditions has been widely studied. Antemortem blood samples stored at refrigerated temperature, at room temperature, and at elevated temperatures tend to decrease in ethanol concentration with storage. It appears that the stability of ethanol in blood exposed to temperatures greater than 38°C has not been evaluated. The case presented here involves comparison of breath test results with subsequent analysis of blood drawn at the time of breath testing. However, the blood tubes were in a refrigerator fire followed by refrigerated storage for 5 months prior to analysis by headspace gas chromatography. The subject's breath was tested twice using an Intoxilyzer 8000. The subject's blood was tested in duplicate using an Agilent headspace gas chromatograph. The measured breath ethanol concentration was 0.103 g/210 L and 0.092 g/210 L. The measured blood ethanol concentration was 0.0932 g/dL for both samples analyzed. Although the mean blood test result was slightly lower than the mean breath test result, the mean breath test result was within the estimated uncertainty of the mean blood test result. Even under the extreme conditions of the blood kit being in a refrigerator fire, the measured blood ethanol content agreed well with the paired breath ethanol test. abstract_id: PUBMED:36604777 Ethanol stability in unpreserved refrigerated antemortem blood.
Ethanol stability in preserved antemortem blood has been widely studied since it is a common practice in cases involving suspected impaired driving to collect antemortem blood in evacuated blood tubes containing sodium fluoride. In some situations, antemortem blood is submitted to a forensic laboratory for ethanol analysis in evacuated blood tubes that contain only an anticoagulant. There has been limited research on ethanol stability in antemortem blood stored without a preservative. On two occasions, antemortem blood was collected from five ethanol-free individuals into 6-ml Vacutainer® tubes containing only 10.8 mg potassium EDTA. The blood tubes were spiked with ethanol to approximately either 0.08 or 0.15 g/dl. Dual-FID headspace gas chromatography was used to analyze 58 blood tubes, 29 from each session, for ethanol 1 day after sample collection and again after 1 year of refrigerated storage (~4°C). Statistically significant decreases in ethanol were detected at the 0.05 level of significance. Mean decreases in ethanol after 1 year of storage for the 0.08 and 0.15 g/dl samples were 0.013 and 0.010 g/dl, respectively. The mean ethanol decrease across all tubes was 0.012 g/dl. The range of decreases for the 58 blood tubes was 0.003-0.018 g/dl. The mean ethanol decreases measured in this unpreserved antemortem blood are comparable in magnitude to those previously observed in antemortem blood containing sodium fluoride after 1 year of refrigerated storage. Ethanol did not increase in the antemortem blood samples despite the absence of sodium fluoride. abstract_id: PUBMED:36109200 Storage with ethanol attenuates the red blood cell storage lesion. Background: Current management of hemorrhagic shock relies on control of surgical bleeding along with resuscitation with packed red blood cells and plasma in a 1-to-1 ratio. Transfusion, however, is not without consequence as red blood cells develop a series of biochemical and physical changes during storage termed "the red blood cell storage lesion." Previous data has suggested that ethanol may stabilize the red blood cell membrane, resulting in improved deformability. We hypothesized that storage of packed red blood cells with ethanol would alter the red blood cell storage lesion. Methods: Mice underwent donation and storage of red blood cells with standard storage conditions in AS-3 alone or ethanol at concentrations of 0.07%, 0.14%, and 0.28%. The red blood cell storage lesion parameters of microvesicles, Band-3, free hemoglobin, annexin V, and erythrocyte osmotic fragility were measured and compared. In additional experiments, the mice underwent hemorrhage and resuscitation with stored packed red blood cells to further evaluate the in vivo inflammatory impact. Results: Red blood cells stored with ethanol demonstrated decreased microvesicle accumulation and Band-3 levels. There were no differences in phosphatidylserine or cell-free hemoglobin levels. After hemorrhage and resuscitation with packed red blood cells stored with 0.07% ethanol, mice demonstrated decreased serum levels of interleukin-6, macrophage inflammatory protein-1α, keratinocyte chemokine, and tumor necrosis factor α compared to those mice receiving packed red blood cells stored with additive solution-3. Conclusion: Storage of murine red blood cells with low-dose ethanol results in decreased red blood cell storage lesion severity. 
Resuscitation with packed red blood cells stored with 0.07% ethanol also resulted in a decreased systemic inflammatory response in a murine model of hemorrhage. Answer: Changes in blood ethanol concentration (BEC) during storage are indeed analytically significant, as demonstrated by several studies. The stability of ethanol in blood samples is a critical factor when interpreting analytical results, especially in forensic and toxicological investigations. One study found that the mean decrease in BEC after storage at 4°C for up to 12 months was 0.105±0.0686 mg/g, with the loss of alcohol positively correlated with the duration of storage. The study concluded that for the BEC decrease to be considered analytically significant, it had to decrease by 0.013 mg/g (2.6%) at a starting concentration of 0.5 mg/g, by 0.028 mg/g (1.9%) at 1.5 mg/g, and by 0.045 mg/g (1.8%) at 2.5 mg/g (PUBMED:17727317). Another study reported that after 12 months of storage at 4°C, the absolute decrease in ethanol concentration was approximately 0.11 g/L, regardless of whether the blood was stored in glass or plastic tubes. The percentage loss of ethanol ranged from 43-45% at a starting concentration of 0.20 g/L to 3.9-4.1% at 3.00 g/L (PUBMED:28856195). A third study indicated that plasma ethanol samples could be stored at -20°C for up to 3-4 months without analytically significant decreases, but samples stored for 5 months showed deviations from the initial concentrations that exceeded the allowable total error (PUBMED:25672467). Further research supports that antemortem blood samples can be analyzed for ethanol content more than 4 years and up to 7 years after the blood draw when stored refrigerated, with decreases in ethanol concentration being less than 0.020 g/dl (PUBMED:35088902). In summary, the concentration of ethanol in blood gradually decreases during storage, and these changes are analytically significant, particularly when considering the implications for forensic and legal cases. The extent of the decrease depends on factors such as the duration and conditions of storage, the initial ethanol concentration, and whether the blood samples contain preservatives like sodium fluoride (PUBMED:17727317, PUBMED:28856195, PUBMED:25672467, PUBMED:35088902).
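As a reading aid for the threshold figures cited in the answer above (this sketch is not part of any of the cited abstracts), the following minimal Python example evaluates the analytical-SD regression reported in PUBMED:17727317 (SD = 0.00243 + 0.0104 × BEC) and expresses the reported "analytically significant" decreases as a percentage of the starting concentration and as a multiple of the assay SD. The regression coefficients and threshold values are taken from the abstract; everything else (function names, the idea of reporting the threshold as a multiple of SD) is illustrative only, since the abstract does not state the exact statistical criterion used.

```python
# Illustrative sketch only; inputs are the values quoted in PUBMED:17727317.

def analytical_sd(bec_mg_per_g: float) -> float:
    """Assay SD (mg/g) as a function of BEC, per SD = 0.00243 + 0.0104 * BEC."""
    return 0.00243 + 0.0104 * bec_mg_per_g

# Reported thresholds (mg/g) for an analytically significant decrease,
# keyed by the starting BEC (mg/g).
reported_thresholds = {0.5: 0.013, 1.5: 0.028, 2.5: 0.045}

for bec, threshold in reported_thresholds.items():
    sd = analytical_sd(bec)
    print(f"BEC {bec:.1f} mg/g: assay SD = {sd:.4f} mg/g, "
          f"threshold = {threshold:.3f} mg/g "
          f"({100 * threshold / bec:.1f}% of start, {threshold / sd:.1f} x SD)")
```

Running this reproduces the percentages quoted in the answer (2.6%, 1.9% and 1.8%) and shows that each reported threshold corresponds to roughly 1.6-1.7 times the assay SD at that concentration, consistent with a one-sided significance criterion, although the abstract itself does not spell the convention out.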
Instruction: Is day-1 postoperative review necessary after pars plana vitrectomy? Abstracts: abstract_id: PUBMED:34666493 ENDOTHELIAL CELL LOSS AFTER PARS PLANA VITRECTOMY. Aims: To analyse the changes in endothelial cell density (ECD) after pars plana vitrectomy (PPV) and to identify the factors implicated. Patients And Methods: This was a prospective, consecutive, and non-randomised, case-control study. All 23-gauge vitrectomies were performed by a single surgeon at a tertiary centre. ECD was measured at baseline before surgery and on postoperative Days 30, 90, and 180. The fellow eye was used as the control eye. The primary outcome was a change in ECD after PPV. Results: Seventeen patients were included in this study. The mean age of the patients was 65 years. The mean ECD count at baseline was 2340 cells/mm2. The median ECD loss in the vitrectomised eye was 3.6 %, 4.0 %, and 4.7 % at Days 30, 90, and 180, respectively, compared to +1.94 %, +0.75 %, +1.01 %, respectively, in the control eye. The relative risk of ECD loss after PPV was 2.48 (C.I. 1.05-5.85, p = 0.0247). The pseudophakic eyes lost more ECD than the phakic eyes, but this was not statistically significant. There were no significant differences in diagnosis, age, surgical time, or tamponade used after surgery. Conclusions: Routine pars plana vitrectomy had an impact on the corneal endothelial cells until Day 180 post-op. The phakic status was slightly protective against ECD loss after PPV, although it was not statistically significant. The pathophysiology of corneal cell damage after routine PPV remains unclear. Further studies are required to confirm these findings. abstract_id: PUBMED:30776842 Pars plana vitrectomy Pars plana vitrectomy (PPV) allows the treatment of a multitude of vitreoretinal disorders involving the vitreous, the retina or the choroid. There is a multitude of possible surgical sequences: peeling maneuvers, liquid perfluorocarbone, removal of traction and media opacities, retinopexy or destructive photocoagulation, and as ultima ratio, retinotomies and retinectomies. The intravitreal tamponade serves as a substitute in the vitreous cavity, allowing photoreceptors and retinal pigment epithelium (RPE) to reconnect. Due to the many potential complications, close monitoring is required after pars plana vitrectomy during the early postoperative period. Late-onset complications are usually associated to the dynamics of the underlying disease. abstract_id: PUBMED:29215958 Postoperative Complications of Pars Plana Vitrectomy for Diabetic Retinal Disease. Despite recent advances in the medical management of diabetic retinal disease, there remain established indications for vitreoretinal surgery in the treatment of severe proliferative diabetic retinopathy. These include non-clearing vitreous hemorrhage and tractional retinal detachment. Advances in surgical instrumentation, technique, and experience have led to improved visual outcomes, as well as a corresponding decrease in the incidence of postoperative complications. However, the presence of systemic and ocular factors in diabetic patients increases the risk of adverse events compared to non-diabetic individuals. This review will focus on the most important postoperative complications following pars plana vitrectomy, with specific considerations for the diabetic patient. abstract_id: PUBMED:26056429 Aqueous misdirection following pars plana vitrectomy and silicone oil injection. 
Purpose: To report a retrospective series of seven phakic eyes of seven patients suffering from a malignant glaucoma-like syndrome following pars plana vitrectomy and silicone oil (SO) injection. Materials And Methods: Seven eyes with retinal detachment were treated with pars plana vitrectomy, with or without scleral buckling, with SO tamponade. This was followed by cataract extraction to manage the elevated intraocular pressure (IOP). Results: This was a retrospective review of seven cases that received pars plana vitrectomy and SO with or without scleral buckling for different causes of retinal detachment (three were rhegmatogenous and four were tractional). After a period ranging from 1 week to 1 month, they presented with malignant glaucoma-like manifestations: high IOP, a shallow axial anterior chamber, and a remarkable decrease in visual acuity. Atropine eye drops and anti-glaucoma medical treatment (topical and systemic) had been tried but failed to improve the condition. A dramatic decrease in IOP and deepening of the axial anterior chamber were observed in all cases on the first postoperative day after phacoemulsification and posterior chamber foldable intraocular lens implantation with posterior capsulotomy. Conclusion: Aqueous misdirection syndrome may be observed following pars plana vitrectomy and SO tamponade. This must be differentiated from other causes of post-vitrectomy glaucoma. Cataract extraction with posterior capsulotomy controls the condition. abstract_id: PUBMED:23467378 Use of 25% sulfur hexafluoride gas mixture may minimize short-term postoperative hypotony in sutureless 25-gauge pars plana vitrectomy surgery. Background: The purpose of this study was to compare postoperative intraocular pressures and percentage of vitreous cavity gas fill one day following 25-gauge pars plana vitrectomy with 20% versus 25% sulfur hexafluoride (SF6) gas fill. Methods: This was a retrospective review of 187 consecutive cases of 25-gauge pars plana vitrectomy with complete fluid/gas exchange. The main outcome measures included percentage of gas fill of the vitreous cavity and intraocular pressure on postoperative day one. Results: Fifty eyes underwent 25-gauge pars plana vitrectomy with 20% SF6 tamponade and 137 with 25% SF6 tamponade. On postoperative day one in the 20% SF6 group, there were five (10%) patients with hypotony (intraocular pressure ≤ 5 mmHg) and none in the 25% SF6 group. Mean intraocular pressure was 9 ± 2.5 mmHg and 16.8 ± 2.4 mmHg for the 20% SF6 and 25% SF6 groups, respectively (P < 0.01). None of the patients had postoperative intraocular pressure > 23 mmHg. Mean vitreous cavity gas fill on postoperative day one was 70.7% ± 10% in the 20% SF6 group and 89.5% ± 2.2% in the 25% SF6 group (P < 0.01). There was no difference in the number of phakic patients needing cataract surgery between the groups. Conclusion: A slightly expansile concentration of 25% SF6 gas can be safely and beneficially used in 25-gauge vitrectomy surgery to increase the amount of gas fill in the vitreous cavity and prevent postoperative hypotony. abstract_id: PUBMED:26315702 Is day-1 postoperative review necessary after pars plana vitrectomy? Purpose: This study aimed to determine the proportion of patients requiring alteration in management based on the findings of the day-1 postoperative visit after pars plana vitrectomy, and to identify clinical characteristics that predict the need for unexpected intervention.
Patients And Methods: A retrospective case note review was conducted of all patients who underwent pars plana vitrectomy and who then attended for review on the first postoperative day. All patients received routine prophylactic anti-glaucoma medication. Results: Two hundred and seventy-three patients examined on day 1 following vitrectomy were studied. Indications for surgery included retinal detachment, epiretinal membrane, macular hole, vitreous haemorrhage, diabetic eye disease, and floaters. Twenty-gauge (20G) vitrectomy was performed in 124 eyes (45%); 23-gauge (23G) vitrectomy was performed in 149 eyes (55%). Phacoemulsification was performed concurrently in 51/273 (19%) eyes. Ten patients (3.7%) required unexpected intervention on day 1 owing to intraocular pressure (IOP) >30 mmHg (2/273), IOP <6 mmHg (5/273), or unexpected return to theatre for anterior chamber washout (3/273). There was no difference in intervention rate or day-1 IOP between 20G and 23G cases. Hypotony was less common if gas tamponade was used (χ²-test, P < 0.001). Patients undergoing combined phacoemulsification and 20G vitrectomy were significantly more likely to require intervention on day 1 than patients undergoing 20G vitrectomy alone (15.0 vs 1.9%, P=0.029, Fisher's exact test) but this was not the case for patients undergoing 23G vitrectomy (0 vs 4.2%, Fisher's exact test, P=0.58). Conclusions: The intervention rate on the first day after vitrectomy is low and day-1 postoperative review can be safely omitted in the majority of patients undergoing vitrectomy. abstract_id: PUBMED:27595884 Orbital emphysema with exophthalmos following transconjunctival pars plana vitrectomy The case example presented shows that an orbital emphysema with exophthalmos can occur as a rare complication of a transconjunctival pars plana vitrectomy. Close monitoring of the patient's symptoms, ocular motility, intraocular pressure and the fundus showed no evidence of compressive optic neuropathy or perfusion abnormalities through orbital vessels. The exophthalmos resolved spontaneously within a few days without any consequences. abstract_id: PUBMED:28101048 Five-Port Combined Limbal and Pars Plana Vitrectomy for Infectious Endophthalmitis. Pars plana vitrectomy for acute infectious endophthalmitis can be challenging due to severe inflammation in the anterior chamber creating significant media opacity. We describe a surgical technique combining limbal based vitrectomy and pars plana vitrectomy to manage acute infectious endophthalmitis. Limbal based vitrectomy facilitates removal of anterior chamber fibrin and inflammatory membranes for safe and optimal posterior pars plana vitrectomy. abstract_id: PUBMED:21532996 Pars plana vitrectomy in advanced Coats' disease. Purpose: To report 5 cases of advanced Coats' disease managed with pars plana vitrectomy and silicone oil tamponade. Methods: Five patients with advanced Coats' disease and serous or tractional retinal detachment underwent pars plana vitrectomy with internal drainage, endolaser photocoagulation and silicone oil tamponade. One patient had combined phacoemulsification-vitrectomy surgery. Of the 5 patients, 1 had intravitreal hemorrhage and a retinal macrocyst and 1 had a retinal cyst. Follow-up period was 1-6 years. Results: All patients had improved visual acuity after surgery. No intraoperative or postoperative complications were observed in any of the patients. The retina was attached and the disease was stable in all patients during follow-up.
Two patients had cataract formation, and in one of them the cataract was successfully managed with phacoemulsification surgery. Conclusion: Pars plana vitrectomy, subretinal fluid drainage, and long-term silicone oil tamponade are effective methods in the management of advanced Coats' disease. Early and prompt management can prevent visual loss and secondary complications. abstract_id: PUBMED:2583633 Lens opacities following pars plana vitrectomy Lens opacities, and particularly nuclear sclerosis, are among the most frequent complications after pars plana vitrectomy. Although the causative mechanism is not yet completely understood, the infusion cannula and infusion technique are likely to have an important influence on the development of postoperative lens opacities. In order to reduce the flow of the infusion fluid directed to the posterior lens surface, we developed a new cannula. Because of its posterior curvature and opening, as well as a turbulent flow of the fluid, the impairment of the posterior lens surface is diminished. To study the clinical efficacy of the new infusion cannula, we examined 235 patients after pars plana vitrectomy during an observation period of 12 months. Of the total group, 14.7% developed progressive nuclear sclerosis. Significant cataract formation was much less frequent in eyes operated on because of macular pucker (7.3%). This is a lower incidence than that reported by other investigators. Answer: The necessity of a day 1 postoperative review after pars plana vitrectomy is a matter of clinical judgment based on the potential for postoperative complications and the need for early intervention. According to a retrospective case note review, the intervention rate on the first day after vitrectomy is low, and day-1 postoperative review can be safely omitted in the majority of patients undergoing vitrectomy (PUBMED:26315702). This study found that only 3.7% of patients required unexpected intervention on day 1, with intraocular pressure issues being the primary reason for intervention. The study also noted that hypotony was less common if gas tamponade was used, and that patients undergoing combined phacoemulsification and 20G vitrectomy were more likely to require intervention than those undergoing 20G vitrectomy alone. However, this was not the case for patients undergoing 23G vitrectomy. Given these findings, it appears that while the risk of needing an intervention on the first day post-vitrectomy is relatively low, certain patient groups may benefit from a day 1 postoperative review. Clinicians may consider factors such as the type of vitrectomy performed, whether gas tamponade was used, and whether concurrent procedures like phacoemulsification were performed when deciding on the necessity of a day 1 postoperative review.
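For context on the headline figure in this answer (an addition here, not something reported in PUBMED:26315702), the sketch below computes the day-1 intervention proportion of 10/273 together with a Wilson 95% confidence interval. The counts come from the abstract; the interval itself, and the choice of the Wilson method, are illustrative assumptions.

```python
# Illustrative sketch: Wilson score interval for the 10/273 day-1 intervention rate.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion (z = 1.96)."""
    p = successes / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half_width = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half_width, centre + half_width

low, high = wilson_interval(10, 273)
print(f"Day-1 intervention rate: {10 / 273:.1%} (95% CI {low:.1%} to {high:.1%})")
```

On these numbers the interval is roughly 2% to 7%, which supports the answer's characterisation of the day-1 intervention risk as low while acknowledging the uncertainty around a single-centre estimate.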
Instruction: Is it possible to train surgeons for rural Africa? Abstracts: abstract_id: PUBMED:21191583 Is it possible to train surgeons for rural Africa? A report of a successful international program. Background: The critical shortage of surgeons and access to surgical care in Africa is increasingly being recognized as a global health crisis. Across Africa, there is only one surgeon for every 250,000 people and only one for every 2.5 million of those living in rural areas. Surgical diseases are responsible for approximately 11.2% of the total global burden of disease. Even as the importance of treating surgical disease is being recognized, surgeons in sub-Saharan Africa are leaving rural areas and their countries altogether to practice in more desirable locations. Methods: The Pan-African Academy of Christian Surgeons (PAACS) was formed in 1997 as a strategic response to this profound need for surgical manpower. It is training surgical residents through a 5-year American competency-based model. Trainees are required to be of African origin and a graduate of a recognized medical school. Results: To date, PAACS has established six training programs in four countries. During the 2009-2010 academic year, there were 35 residents in training. A total of 18 general surgeons and one pediatric surgeon have been trained. Two more general surgeons are scheduled to finish training in 2011. Four graduates have gone on to subspecialty training, and the remaining graduates are practicing general surgery in rural and underserved urban centers in Angola, Guinea-Conakry, Ghana, Cameroon, Republic of Congo, Kenya, Ethiopia, and Madagascar. Conclusions: The PAACS has provided rigorous training for 18 African general surgeons, one of whom has also completed pediatric surgery training. To our knowledge, this is the only international rural-based surgical training program in Africa. abstract_id: PUBMED:30178129 Increasing and Retaining African Surgeons Working in Rural Hospitals: An Analysis of PAACS Surgeons with Twenty-Year Program Follow-Up. Background: African surgical workforce needs are significant, with largest disparities existing in rural settings. Pan-African Academy of Christian Surgeons (PAACS), a primarily rural-based general surgery training program, has published successes in producing rural African surgeons; however, long-term follow-up data are unreported. The goal of our study was to define characteristics of PAACS alumni surgeons working in rural hospitals, documenting successes and illuminating strategies for trainee recruitment and retention. Method: PAACS' twenty-year surgery residency database was reviewed for 12 programs throughout Africa regarding trainee demographics and graduate outcomes. Characteristics of PAACS' graduate surgeons were further analyzed with a 42-question survey. Results: Among active PAACS graduates, 100% practice in Africa and 79% within their home country. PAACS graduates had 51% short-term and 35% long-term (beyond 5 years) rural retention rate (less than 50,000 population). Conclusion: Our study shows that PAACS general surgery training program has a high retention rate of African surgeons in rural settings compared to all programs reported to date, highlighting a multifaceted, rural-focused approach that could be emulated by surgical training programs worldwide. abstract_id: PUBMED:23953404 Assessment of rural soundscapes with high-speed train noise. 
In the present study, rural soundscapes with high-speed train noise were assessed through laboratory experiments. A total of ten sites with varying landscape metrics were chosen for audio-visual recording. The acoustical characteristics of the high-speed train noise were analyzed using various noise level indices. Landscape metrics such as the percentage of natural features (NF) and Shannon's diversity index (SHDI) were adopted to evaluate the landscape features of the ten sites. Laboratory experiments were then performed with 20 well-trained listeners to investigate the perception of high-speed train noise in rural areas. The experiments consisted of three parts: 1) visual-only condition, 2) audio-only condition, and 3) combined audio-visual condition. The results showed that subjects' preference for visual images was significantly related to NF, the number of land types, and the A-weighted equivalent sound pressure level (LAeq). In addition, the visual images significantly influenced the noise annoyance, and LAeq and NF were the dominant factors affecting the annoyance from high-speed train noise in the combined audio-visual condition. In addition, Zwicker's loudness (N) was highly correlated with the annoyance from high-speed train noise in both the audio-only and audio-visual conditions. abstract_id: PUBMED:35929365 Royal Australasian College of Surgeons Rural Health Equity Strategic Action Plan: excellence through equity. Wherever there are people there will be a need for surgical care. Rural people have all kinds of problems and need all kinds of surgeons. The Royal Australasian College of Surgeons (RACS) Rural Health Equity Strategic Action Plan (RHESAP) was endorsed by Council in December 2020. The goal is to increase the rural surgical workforce and increase access to care, through providing motivated surgeons with the training they need to work where they are needed most. The Royal Australasian College of Surgeons Surgical Education and Training Programs (SET) aim to train generalist surgeons across all nine surgical disciplines. To increase the rural surgical workforce and increase access to care, we need to select for rural origin, rural medical school and rural work experience, provide all trainees with the opportunity for positive rural work exposure with an aligned rural curriculum, and we need to support surgeons already living and working in rural areas. In future, with persistent health inequity for underserved populations and the impacts of climate change, we anticipate an increasing need for a culturally and emotionally intelligent, broad-scope surgical workforce, across all surgical disciplines, with the skills, confidence and motivation to work collaboratively and effectively in surgical teams, in areas of need and limited resource environments, including globally. abstract_id: PUBMED:37643797 Characterizing Canadian rural general surgeons: trends over time and 10-year replacement needs. Background: Recruiting residents to practise rurally begins with an accurate characterization of rural surgeons. We sought to identify and analyze demographic trends among rural surgeons in Canada and to predict the rural workforce requirements for the next decade. Methods: In this retrospective observational study, we assessed the demographic and practice characteristics of rural general surgeons in Canada, defined as surgeons working in cities with a population of 100 000 or less. Surgeons were identified using the websites of provincial colleges of physicians and surgeons. 
Demographic characteristics included year and country of medical degree achievement, fellowship status and primary practice location. We developed a model predicting future rural workforce requirements based on the following assumptions: that the current ratio of rural surgeons to rural patients is adequate, that the rural population will increase by 1.1% annually, that a rural surgeon's career length is 36 years, and that 85 graduates will enter the workforce annually. Results: Our study sample included 760 rural general surgeons. The majority graduated after 1989 (75%), were Canadian medical graduates (73%) and did not complete a fellowship (82%). There was a significant shift toward rural surgeons being trained in Canada, from 37% of surgeons graduating before 1969 to 91% of those graduating after 2009 (p < 0.001). Modelling predicts 282 rural general surgeons will retire by 2031, with 88 new surgeons needed to account for the population growth. Therefore, we predict a demand for 370 rural surgeons over the next decade, meaning 43% of general surgery graduates will need to enter rural practice (this arithmetic is worked through in the short sketch after this record's answer). Conclusion: Rural general surgeons in Canada vary widely in their background demographic characteristics. Future opportunities in rural general surgery are projected to increase. Recruitment and training of general surgery graduates to serve Canada's rural communities remains essential. abstract_id: PUBMED:29638087 Where are general surgeons located in South Africa? Background: Human resources are the backbone of health-care delivery systems and the lack of a surgical workforce in developing countries is often the greatest challenge to providing surgical care. Workforce availability and composition are important indicators of the strength of the health system. This study aimed to analyse the distribution of general surgeons within South Africa. Method: A descriptive analysis of the general surgical workforce in South Africa was performed. The total number of specialist and non-specialist general surgeons working in the public sector in South Africa was documented over the period from 1 October 2014 to 31 December 2014. Results: There were significant disparities in the number and distribution of general surgeons in South Africa. There were 1.78 specialist general surgeons per 100 000, of which 0.69 per 100 000 specialist general surgeons were working in the public sector. There were 2.90 non-specialist general surgeons per 100 000. There were 6 specialist general surgeons per 100 000 insured population working in the private sector, which is comparable with the United States (US). Urban provinces such as Gauteng, the Western Cape and KwaZulu-Natal had the largest number of specialist general surgeons per 100 000. These areas had the largest number of medical aid beneficiaries and nearly 60% of specialist general surgeons were estimated to work exclusively in the private sector. Conclusion: There was a major shortage of surgical providers in South Africa, and in particular in the public sector.
Results: There were 471 responses from the 2256 members surveyed, with 387 completing 100% of the questions asked. Ninety (19%) identified themselves as primarily located in a rural population and 381 (81%) in a metropolitan region. In our study, rural hand surgeons were more likely to be employed by a community hospital, followed by independent private practice, multispecialty group, academics, and then locum tenens. Rural surgeons' practices were 80% solely hand surgery, while metropolitan surgeons' practices were 89% (P < .01). Metropolitan surgeons felt that of the transfers from rural facilities, 46% did not need emergency hand care and that 60% of the time, there was not actually a need for specialty hand surgery care. Conclusions: Our survey begins to shed light on the details of rural hand surgery practice. We found that rural surgeons are more likely to be employed in community hospitals and take more call. When available, hand surgery specialists could prevent unnecessary transfer of patients to metropolitan areas. More work needs to be done to describe the differences between rural and metropolitan hand surgery practices as well as create rural hand surgeons. abstract_id: PUBMED:33735686 Why Interested Surgeons Are Not Choosing Rural Surgery: What Can We Do Now? Background: There is a growing deficit of rural surgeons, and preparation to meet this need is inadequate. More research into stratifying factors that specifically influence choice in rural versus urban practice is needed. Methods: An institutional review board-approved survey related to factors influencing rural practice selection and increasing rural recruitment was distributed through the American College of Surgeons. The results were analyzed descriptively and thematically. Results: Of 416 respondents (74% male), 287 (69%) had previous rural experience. Of those, 71 (25%) did not choose rural practice; lack of professional or hospital support (30%) and lifestyle (26%) were the primary reasons. A broad scope of practice was the most important factor (52%) among surgeons who chose rural practice without any previous rural experience. Over 60% of urban practitioners agreed that improved lifestyle and financial advantages would attract them to rural practice. The thematic analysis suggested that institutional support, affiliation with academic institutions, and less focus on subspecialty fellowship could help increase the number of rural surgeons. Conclusions: Many factors influence surgeons' decisions on practice location. Providing appropriate hospital support in rural areas and promoting specific aspects of rural practice, including a broad scope of practice, to those in training could help grow interest in rural surgery. Strong collaboration with academic institutions for teaching, learning, and mentoring opportunities for rural surgeons could also lead to higher satisfaction, security, and a potentially higher retention rate. These results provide a foundation to help focus specific efforts and resources in the recruitment and retention of rural surgeons.
Fellowship positions held after the achievement of Fellowship of the Australasian College of Surgeons are traditionally not funded by government because they currently fall outside the accredited rural training post funding provided by the federal government. This article aims to explore whether fellowship positions can be an important part of sustaining the rural general surgery workforce. Methods: Semi-structured interviews were conducted with nine former general surgery Fellows from a single rural Australian institution. Interviews were recorded, transcribed, coded and themed to undertake analysis according to thematic analysis. Results: This research demonstrates that consultant rural general surgeons can be recruited from a fellowship year when emphasis is placed on: (1) creating a positive workplace culture with safe working hours, (2) ensuring diversification of the general surgical case mix, (3) facilitating opportunities for schooling and work for the surgeon's family, and (4) preferentially selecting for those who identify as rural general surgeons. Rural towns can effectively recruit general surgeons when families are supported with career and school opportunities, and the newly qualified surgeon can initially commit to a 12-month position so that opportunities can be assessed by the entire family unit. Fellowship positions (post completion of general surgical training) allow young surgeons to 'try before they buy' prior to moving to a rural area. Conclusion: Ensuring a sustainable general surgical workforce in a rural community requires employee and surgical leadership to ensure a collaborative and progressive culture, which offers work diversity, supports the family lifestyle and petitions for selecting those who embody the rural general surgeon identity. Post-fellowship positions can enable young general surgeons to have exposure to the realities of a rural lifestyle, which is likely to have a positive effect on recruitment. Due to the return on investment of the fellowship program, we propose that the federal government should look at funding post-fellowship positions to improve rural recruitment. abstract_id: PUBMED:34552400 Trends in Rural Outreach by Orthopedic Surgeons. Background: Sixty million rural residents have limited access to orthopedic care due to a small rural orthopedic surgery workforce. Increases in specialized training add to the challenge of attracting orthopedic surgeons to rural communities. Answering the call for research on models to meet the needs of rural orthopedic patients, we examine long-term trends in visiting consultant clinics (VCCs) in Iowa, a state with a large rural population. Methods: The Office of Statewide Clinical Education Programs (Carver College of Medicine) compiles an annual report of outreach clinic locations, frequencies and participating physicians. Trends in the total number of VCCs, days and locations (1989-2018) were analysed using joinpoint analysis. Results: Total clinic days grew rapidly from 1992-1997 (Average Percent Change: 19.7%) before a decline ending in 2009 (APC: -4.1%). A new growth period (2009-2013, APC: 7.5%) preceded another decline (APC: -3.6%) ending in 2018. The number of cities hosting a VCC grew from 56 (1989) to a peak of 90 (1999) and fell an average of 0.9% a year thereafter. More than 80% of all VCCs in the last ten years were offered 2 or more times per month. The average participation rate for Iowa-based orthopedic surgeons was 44%. The mean number of VCCs staffed by a single physician was 1.32 (std. dev.
= 0.53) with a median of 1. The average number of VCC days per month for a participating physician was 3.22 (std. dev. = 2.41) with a median of 2.66. Conclusion: The VCC model of rural outreach is sustainable (30+ year history) and self-funded. Most clinics occur with sufficient frequency to allow timely follow-up care. This model of rural outreach is supported by the participation of a large segment (44%) of Iowa's orthopedic surgeons. Visiting orthopedic surgeons provide access to care in 65 of the 76 Critical Access Hospitals in Iowa offering orthopedic services compared to 8 staffed by a local orthopedic surgeon. Level of Evidence: V. Answer: Yes, it is possible to train surgeons for rural Africa. The Pan-African Academy of Christian Surgeons (PAACS) has demonstrated success in training surgical residents through a 5-year American competency-based model, specifically targeting African medical school graduates to address the critical shortage of surgeons in rural areas across Africa. To date, PAACS has established six training programs in four countries, with 35 residents in training during the 2009-2010 academic year. A total of 18 general surgeons and one pediatric surgeon have been trained, with the majority of graduates practicing general surgery in rural and underserved urban centers in various African countries (PUBMED:21191583). Additionally, a twenty-year follow-up study of PAACS showed that 100% of active PAACS graduates practice in Africa, with 79% within their home country, and a significant rural retention rate, highlighting the program's effectiveness in producing and retaining rural African surgeons (PUBMED:30178129).
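The Canadian workforce projection quoted in the abstracts above (PUBMED:37643797) rests on simple arithmetic; the sketch below reproduces it so the 43% figure is easy to verify. All inputs are taken from that abstract; the only assumption added here is that the 85 graduates per year stay constant over the decade.

```python
# Illustrative sketch of the projection arithmetic in PUBMED:37643797.
retirements_by_2031 = 282   # rural general surgeons predicted to retire
growth_positions = 88       # additional surgeons needed for rural population growth
graduates_per_year = 85     # general surgery graduates entering the workforce annually
years = 10

demand = retirements_by_2031 + growth_positions   # 370 rural surgeons needed
graduate_pool = graduates_per_year * years        # 850 graduates over the decade
share_needed = demand / graduate_pool             # ~0.435

print(f"Projected rural demand over the decade: {demand}")
print(f"Share of graduates needed in rural practice: {share_needed:.1%}")
```

This gives 370 surgeons and a share of about 43.5%, which the abstract rounds to 43%.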
Instruction: Does gender or age affect the efficacy and safety of tolterodine? Abstracts: abstract_id: PUBMED:12187215 Does gender or age affect the efficacy and safety of tolterodine? Purpose: We compared the importance of patient age and gender relative to the intensity of baseline symptoms of overactive bladder in the therapeutic response to the muscarinic receptor antagonist tolterodine. Materials And Methods: Data from an open label, observational study of 2,250 patients with overactive bladder treated for 12 weeks with tolterodine were analyzed for alterations in frequency, urgency and urge incontinence, and for global efficacy and tolerability using logistic regression analysis, stratifying for gender, age, baseline symptom intensity and tolterodine dose. Results: Neither gender nor tolterodine dose was consistently associated with altered treatment efficacy. Greater age was associated with a slight but statistically significant decrease in treatment efficacy. Patients with greater baseline symptom intensity had greater treatment associated improvement but a lesser chance to become symptom-free. Even with a large number of patients, no statistically significant gender- or age-associated alterations in the tolerability of tolterodine treatment were detected. Conclusions: The extent of the therapeutic response to tolterodine is largely determined by the extent of baseline symptoms. While gender does not affect the efficacy or tolerability of tolterodine in a clinically relevant manner, advanced age is associated with a slight decrease in efficacy but not in tolerability. abstract_id: PUBMED:19761716 Influence of age, gender, and race on pharmacokinetics, pharmacodynamics, and safety of fesoterodine. Objective: Fesoterodine, a new antimuscarinic agent for overactive bladder, undergoes immediate and extensive hydrolysis by nonspecific esterases to 5-hydroxymethyl tolterodine (5-HMT), the metabolite principally responsible for its antimuscarinic activity. Formation of 5-HMT does not require cytochrome P450 (CYP)-mediated metabolism, but its further metabolism and inactivation involves CYP3A4 and CYP2D6 isoenzymes. Subject age, gender, and race can play a key role in inter-subject variability in pharmacokinetics and thus efficacy and safety of drugs. This article examines the effects of age, gender, and race on the pharmacokinetics and pharmacodynamics of fesoterodine. Methods: Data from two randomized, double-blind, placebo-controlled, parallel-group trials in healthy subjects are presented: Study 1 investigated the effects of race (white vs. black men) and Study 2 investigated the effects of age (young vs. old men) and gender (elderly men vs. elderly women) on the pharmacokinetics and pharmacodynamics of single doses of fesoterodine 8 mg. In both studies, the primary endpoints were area under the concentration-time curve up to the last sample (AUC0-tz) and maximum concentration (Cmax) of 5-HMT in plasma. Pharmacodynamic variables included spontaneous salivary secretion (Studies 1 and 2) and residual urine volume (Study 2 only). The two studies included 5 groups of 16 subjects each (randomized 3:1 to fesoterodine or placebo): white men aged 18 - 45 years, black men aged 18 - 45 years (Study 1); young white men aged 18 - 40 years, elderly white men aged > 65 years, and elderly white women aged > 65 years (Study 2).
Results: There were no clinically meaningful differences in the primary endpoints between white and black subjects or between young white men, elderly white men, and elderly white women. Mean AUC0-tz was 70.7 ng/ml x h in whites and 64.1 ng/ml x h in blacks; mean Cmax was 6.1 and 5.5 ng/ml in whites and blacks, respectively. Mean AUC0-tz in young white men, elderly white men, and elderly white women was 49, 48, and 54 ng/ml x h, respectively; mean Cmax in young white men, elderly white men, and elderly white women was 4.1, 3.8, and 4.6 ng/ml, respectively. Consistent with the anticholinergic pharmacology of fesoterodine, declines in salivary volume were observed in both studies, and elevations in residual urinary volume were observed, especially in elderly subjects, in Study 2. Fesoterodine was well tolerated, with common adverse events such as headache and dry mouth recognized as antimuscarinic class effects. Conclusions: Subject demographics, such as age, gender, and race, do not have a clinically meaningful effect on 5-HMT pharmacokinetics or pharmacodynamics after single-dose administration of fesoterodine 8 mg; thus, no dosage adjustment is required for fesoterodine based on age, gender, or race. abstract_id: PUBMED:31635815 Safety and Efficacy of Mirabegron: Analysis of a Large Integrated Clinical Trial Database of Patients with Overactive Bladder Receiving Mirabegron, Antimuscarinics, or Placebo. Background: Mirabegron, a β3-adrenoreceptor agonist, is an alternative drug to antimuscarinics for overactive bladder (OAB) symptoms. Objective: To summarise safety and efficacy reporting of mirabegron treatment for OAB symptoms. Design, Setting, And Participants: Pooled data analysed from 10 phase 2-4, double-blind, 12-wk mirabegron monotherapy studies in adults with OAB who had received one or more doses of study drug. Intervention: Mirabegron: 25 and 50mg; antimuscarinics: solifenacin (2.5, 5, and 10mg) and tolterodine extended release (4mg). Outcome Measurements And Statistical Analysis: Baseline OAB-related characteristics, intrinsic and extrinsic factors, and analyses by age (<65 vs ≥65 yr and <75 vs ≥75 yr) and sex were assessed. Solifenacin 2.5 and 10mg groups were not included in the efficacy analyses (small patient numbers). Safety was evaluated using the proportion of treatment-emergent adverse events. Efficacy variables were derived from bladder diaries (baseline and week 12). Results And Limitations: Baseline hypertension and diabetes were more frequent across treatment groups in the older versus younger age groups and in men versus women. Within sexes, frequencies were similar between treatment groups. Some differences were observed in baseline characteristics, including type of incontinence and medical history between sexes. No previously unreported safety concerns were identified. Improvements in efficacy (mean number of incontinence episodes/24h, micturitions/24h, urgency episodes/24h, volume voided/micturition, and nocturia episodes) versus placebo were observed in all treatment groups. Significant treatment-by-subgroup interactions included change from baseline in the mean number of incontinence episodes/24h by age (<65 vs ≥65 yr), nocturia by age (<65 vs ≥65 yr and <75 vs ≥75 yr), and urgency episodes by previous OAB medication. Conclusions: Data from this integrated database of 10 mirabegron studies reaffirm the safety and efficacy profiles of mirabegron, solifenacin, and tolterodine in adults of different age groups and sexes.
Patient Summary: Overactive bladder is a complex of symptoms including a compelling desire to pass urine that leads to increased frequency, which may lead to a degree of incontinence if you do not reach the toilet in time and may wake you from sleep. We pooled data from 10 different studies of mirabegron in patients with overactive bladder symptoms, and looked at the effect in the total number of patients who received the treatment, as well as in different age groups and between men and women. No new safety concerns were identified, and mirabegron improved the symptoms of overactive bladder. abstract_id: PUBMED:30235544 A Comparison of the Pharmacokinetics and Drug Safety Among East Asian Populations. Global clinical studies conducted in various countries and regions are increasing. Race and extrinsic ethnic factors are key covariates that may affect the pharmacokinetics (PK), efficacy, and safety of the drug. Genetic similarity among East Asian populations has been confirmed; thus, PK, efficacy, and safety in these populations are expected to be similar, but this has not been confirmed. This study presents a comparison of PK and safety among East Asians from clinical studies sponsored by Pfizer. Four compounds with different characteristics, including mechanism of actions and PK profiles, were selected, and retrospective PK and safety comparisons in East Asians were conducted. No distinct differences were observed in PK and safety across the 4 compounds. These results are consistent with previous reports on PK comparisons and meet the expectations based on genetic similarity among East Asians. Extrapolation of these findings to other compounds should be done with caution, but these results should support the consideration of mutual use of clinical data among East Asian countries. abstract_id: PUBMED:24610862 The efficacy and tolerability of the β3-adrenoceptor agonist mirabegron for the treatment of symptoms of overactive bladder in older patients. Introduction: mirabegron is a β3-adrenoceptor agonist developed for the treatment of symptoms of overactive bladder (OAB). As the prevalence of OAB increases with age, a prospective subanalysis of individual and pooled efficacy and tolerability data from three 12-week, randomised, Phase III trials, and of tolerability data from a 1-year safety trial were conducted in order to evaluate the efficacy and tolerability of mirabegron in subgroups of patients aged ≥65 and ≥75 years. Methods: primary efficacy outcomes were change from baseline to final visit in the mean number of incontinence episodes/24 h and the mean number of micturitions/24 h. Tolerability was assessed by the incidence of treatment-emergent adverse events (TEAEs). Results: over 12 weeks mirabegron 25 mg and 50 mg once-daily reduced the mean numbers of incontinence episodes and micturitions/24 h from baseline to final visit in patients aged ≥65 and ≥75 years. Mirabegron was well tolerated: in both age groups, hypertension and urinary tract infection were among the most common TEAEs over 12 weeks and 1 year. The incidence of dry mouth, a typical anticholinergic TEAE, was up to sixfold higher among the older patients randomised to tolterodine than any dose of mirabegron. Conclusions: these analyses have demonstrated the efficacy of mirabegron over 12 weeks and the tolerability of mirabegron over 12 weeks and 1 year in OAB patients aged ≥65 and ≥75 years, supporting mirabegron as a therapeutic option in older patients with OAB. 
abstract_id: PUBMED:17651893 Clinical efficacy, safety, and tolerability of once-daily fesoterodine in subjects with overactive bladder. Objective: To determine the efficacy, tolerability, and safety of fesoterodine in subjects with overactive bladder (OAB). Methods: This was a multicentre, randomised, double-blind, placebo- and active-controlled trial with tolterodine extended release (ER) to assess the efficacy and safety of fesoterodine. Eligible subjects (≥18 yr) with increased micturition frequency and urgency and/or urgency urinary incontinence (UUI) were randomised to placebo, fesoterodine 4 mg, fesoterodine 8 mg, or tolterodine ER 4 mg for 12 wk. The primary efficacy variable was a change from baseline to week 12 in micturitions per 24 h. Co-primary end points included change from baseline to week 12 in UUI episodes per 24 h and Treatment Response ("yes" or "no," based on four-point treatment benefit scale). Secondary efficacy variables included mean volume voided per micturition, continent days per week, and number of urgency episodes. Results: At the end of treatment, subjects taking fesoterodine 4 and 8 mg had significant (p<0.05) and clinically relevant improvements versus placebo in the primary, co-primary, and most secondary efficacy variables. Tolterodine ER (active control) also provided significantly greater improvement than placebo for most efficacy variables, confirming the sensitivity of the study design. A more pronounced effect was observed with fesoterodine 8 mg at most end points. Conclusions: Both doses of fesoterodine were significantly better than placebo in improving the symptoms of OAB and produced a significantly greater Treatment Response versus placebo. Efficacy was more pronounced with fesoterodine 8 mg compared with the other treatments. Active treatments were well tolerated. abstract_id: PUBMED:12028164 Efficacy, safety, and tolerability of extended-release once-daily tolterodine treatment for overactive bladder in older versus younger patients. Objectives: To evaluate the efficacy, safety, and tolerability of a new, once-daily extended-release (ER) formulation of tolterodine in treating overactive bladder in older (≥65) and younger (<65) patients. Design: A 12-week double-blind, placebo-controlled clinical trial. Setting: An international study conducted at 167 medical centers. Participants: One thousand fifteen patients (43.1% aged ≥65) with urge incontinence and urinary frequency. Intervention: Patients were randomized to treatment with tolterodine ER 4 mg once daily (qd) (n = 507) or placebo (n = 508) for 12 weeks. Measurements: Efficacy, measured with micturition charts (incontinence episodes, micturitions, volume voided per micturition) and subjective patient assessments, safety, and tolerability endpoints were evaluated, relative to placebo, according to two age cohorts: younger than 65 and 65 and older. Results: Mean age in the older and younger patient cohorts was 74 (range 65-93) and 51 (range 20-64), respectively. Compared with placebo, significant improvements in micturition chart variables with tolterodine ER showed no age-related differences. Irrespective of age, significantly more tolterodine ER recipients than placebo recipients reported an improvement in urgency symptoms. After 12 weeks of treatment with tolterodine ER, a fivefold increase in the percentage of patients able to finish tasks before voiding in response to urgency was noted in both age groups (<65: from 6.5-32.8%, ≥65: from 5.1-26.2%).
Tolterodine ER recipients, irrespective of age, also had significantly greater improvements in their bladder condition than did placebo recipients. Overall, a greater percentage of patients, irrespective of age, perceived any benefit with tolterodine ER than with placebo (P <.001). Dry mouth (of any severity) was the most common adverse event in both the tolterodine ER and placebo treatment arms, irrespective of age (<65: ER 22.7%, placebo 8.1%; ≥65: ER 24.3%, placebo 7.2%). Few patients (<2%) experienced severe dry mouth. No central nervous system, visual, cardiac (per electrocardiogram), or laboratory safety concerns were noted. Withdrawal rates due to adverse events on tolterodine ER 4 mg qd were comparable in the two age cohorts (<65: 5.5%; ≥65: 5.1%; P =.87). Conclusions: The new, once-daily ER formulation of tolterodine is efficacious, safe, and well tolerated in the treatment of patients with symptoms of overactive bladder, irrespective of age. abstract_id: PUBMED:15948744 Long-term safety, tolerability and efficacy of extended-release tolterodine in the treatment of overactive bladder in Japanese patients. Aim: To evaluate the long-term safety, tolerability and efficacy of extended-release (ER) tolterodine in Japanese patients completing 12-week treatment in a randomized, double-blind trial comparing tolterodine ER 4 mg once daily, oxybutynin 3 mg three times daily or placebo in patients with overactive bladder. Methods: Of 293 Japanese patients completing the 12-week study, 188 continued in the open-label trial and received tolterodine ER 4 mg once daily for 12 months, irrespective of previous treatment. The primary objective was to assess the safety of tolterodine ER for up to 52 weeks of treatment and at post-treatment follow-up. Secondary endpoints included changes in micturition diary variables, patient perception of bladder condition and urgency and treatment benefit. Results: Overall, 77% of patients completed 12 months of open-label treatment. Tolterodine ER was well tolerated and the most common adverse event was dry mouth (33.5%). In general, there was no increase in adverse event frequency with long-term treatment compared with short-term treatment. The efficacy of tolterodine ER was maintained over the 12-month period. The complete analysis showed a median reduction in incontinence episodes/week (-92.9%; mean reduction, -77.2%), a mean reduction in micturitions/24 h (-21.3%) and a mean increase in volume voided per micturition (19.6%). Of patients completing the 12-month study, 78.6% reported improvement in patient perception of bladder condition, 52.4% reported improvement in perception of urgency and 89.7% reported treatment benefit. Conclusions: Favorable safety, tolerability and efficacy of once-daily tolterodine ER was maintained over 12 months in a Japanese overactive bladder patient population. abstract_id: PUBMED:23390360 Long-term safety, efficacy, and tolerability of imidafenacin in the treatment of overactive bladder: a review of the Japanese literature. Imidafenacin is an antimuscarinic agent with high affinity for the M(3) and M(1) muscarinic receptor subtypes and low affinity for the M(2) subtype, and is used to treat overactive bladder. Several animal studies have demonstrated that imidafenacin has organ selectivity for the bladder over the salivary glands, colon, heart, and brain.
In Phase I studies in humans, the approximately 2.9-hour elimination half-life of imidafenacin was shorter than that of other antimuscarinics such as tolterodine and solifenacin. Imidafenacin was approved for clinical use in overactive bladder in Japan in 2007 after a randomized, double-blind, placebo-controlled Phase II study and a propiverine-controlled Phase III study conducted in Japanese patients demonstrated that imidafenacin 0.1 mg twice daily was clinically effective for treating overactive bladder and was not inferior to propiverine for reduction of episodes of incontinence, with a better safety profile than propiverine. Several short-term clinical studies have demonstrated that imidafenacin also improves sleep disorders, nocturia, and nocturia-related quality of life. In addition, it is speculated that add-on therapy with imidafenacin is beneficial for men with benign prostatic hyperplasia whose overactive bladder symptoms are not controlled by alpha-1 adrenoceptor antagonists. No cognitive impairment or influence of imidafenacin on the QTc interval has been observed. Although there have been very few relevant long-term clinical studies, the available information suggests the long-term efficacy, safety, and tolerability of imidafenacin, with less frequent severe adverse events, such as dry mouth and constipation. In addition, imidafenacin can be used safely for a long time even for cognitively vulnerable elderly patients with symptoms of overactive bladder. Thus, it is highly likely that imidafenacin is safe, efficacious, and tolerable to control symptoms of overactive bladder even over the long term. However, it remains unknown if the practical effectiveness of imidafenacin is applicable to ethnic groups other than Japanese. abstract_id: PUBMED:21251382 Comparisons of efficacy and safety of tolterodine and oxybutynin in children with idiopathic overactive bladder. Objective: To compare the efficacy and safety of tolterodine and oxybutynin in the treatment of idiopathic overactive bladder in children. Methods: A total of 204 children with idiopathic overactive bladder were randomly divided into three groups (n=68 each): placebo, tolterodine-treated and oxybutynin-treated. The efficacy and safety were evaluated two weeks after treatment. Results: The effective rate was 25% in the placebo group, 89% in the tolterodine-treated group, and 92% in the oxybutynin-treated group. The effective rate in the two treatment groups was significantly higher than that in the placebo group (P<0.05). There was a similar efficacy between the two treatment groups. The incidence of adverse events in the tolterodine-treated group (28%) was significantly lower than that in the oxybutynin-treated group (57%) (P<0.05). Conclusions: Tolterodine has a similar efficacy to oxybutynin in the treatment of idiopathic overactive bladder in children, with better safety in pharmacotherapy. Answer: Gender does not significantly affect the efficacy or tolerability of tolterodine in a clinically relevant manner. However, age does have an impact, with greater age being associated with a slight but statistically significant decrease in treatment efficacy. Despite this decrease in efficacy, no statistically significant age-associated alterations in the tolerability of tolterodine treatment were detected (PUBMED:12187215). This suggests that while older patients may experience a slight reduction in the therapeutic response to tolterodine, they do not experience increased adverse effects compared to younger patients.
Furthermore, a study comparing the efficacy, safety, and tolerability of extended-release once-daily tolterodine treatment for overactive bladder in older versus younger patients found no age-related differences in improvements or in the incidence of adverse events such as dry mouth. The study concluded that tolterodine is efficacious, safe, and well tolerated in treating patients with symptoms of overactive bladder, irrespective of age (PUBMED:12028164). In summary, while age may slightly affect the efficacy of tolterodine, it does not significantly impact the safety and tolerability of the drug. Gender does not appear to have a clinically meaningful effect on the efficacy or tolerability of tolterodine.
Instruction: Is off-pump revascularization better for patients with non-dialysis-dependent renal insufficiency? Abstracts: abstract_id: PUBMED:15138092 Is off-pump revascularization better for patients with non-dialysis-dependent renal insufficiency? Background: Renal dysfunction is a well-recognized complication following coronary artery bypass grafting (CABG). Coronary revascularization without cardiopulmonary bypass (CPB) has been shown to minimize renal injury in patients with normal preoperative renal function who undergo elective procedures. The purpose of this study was to define the effect of an off-pump revascularization strategy on the incidence of postoperative renal failure and survival of patients with preexisting renal dysfunction. Methods: From January 1, 1999, to December 1, 2002, a total of 371 patients were identified as having a preoperative creatinine concentration greater than or equal to 1.5 mg/dL. This number included 291 patients who did not need hemodialysis or peritoneal dialysis to support renal function. These patients were subdivided into those undergoing traditional CABG with CPB (103 patients) and those undergoing off-pump revascularization (188 patients) whose demographic, operative, and outcome information was retrospectively reviewed and compared. Results: The off-pump cohort was older than the on-pump cohort (70 +/- 9.6 versus 66 +/- 10.9 years; P =.002), had a lower prevalence of previous myocardial infarction (35% versus 50%; P =.008), and had a modestly higher mean left ventricular ejection fraction (0.47 +/- 0.01 versus 0.43 +/- 0.01; P =.017). Otherwise the groups were well matched. The mean preoperative serum creatinine and creatinine clearance values were not significantly different (1.8 +/- 0.5 versus 1.9 +/- 0.6 mg/dL [ P =.372] and 45.1 +/- 15.5 versus 46.8 +/- 17.2 mL/min [ P =.376] for the off-pump and on-pump cohorts, respectively). There was a significant reduction in postoperative renal failure (17% versus 9% of patients; P =.020) and need for new dialysis (10% versus 3% of patients; P =.022) when CPB was eliminated. Intermediate-term survival analysis revealed a survival benefit for the off-pump group (70% versus 57%) at 42 months, although this value did not reach statistical significance ( P =.143). Conclusion: The results of this study suggested that patients with preoperative non-dialysis-dependent renal insufficiency have more favorable outcome when revascularization is done off pump. Avoidance of CPB results in (1) a reduction in the incidence of postoperative renal failure; (2) a reduction in the need for new dialysis; and (3) improved in-hospital and midterm survival. abstract_id: PUBMED:22263182 On-Pump versus Off-pump Myocardial Revascularization in Patients with Renal Insufficiency: Early and Mid-term Results. Background: Myocardial revascularization in patients with renal insufficiency is challenging to the cardiac surgeon, irrespective of utilizing extracorporeal circulation. This study aimed to compare the number of bypass grafts and the mid-term results and to evaluate independent survival predictors in patients with renal insufficiency undergoing on-pump or off-pump myocardial revascularization. Materials And Methods: We retrospectively analyzed the data of 103 patients with renal insufficiency, who had isolated myocardial revascularization between January 1999 and January 2009. The patients were divided into two groups, the on-pump group and the off-pump group. 
Results: The off-pump group received a significantly greater number of distal arterial grafts than the on-pump group. However, the mean number of total grafts, the degree of complete revascularization, and survival rate of the patients were not significantly different between the two groups. Multivariate analysis showed the independent predictors for reduced mid-term survival were the number of total grafts and postoperative periodic renal replacement therapy. Off-pump myocardial revascularization does not decrease the number of bypass grafts or influence the mid-term results for patients with renal insufficiency, compared to on-pump myocardial revascularization. Conclusion: Myocardial revascularization with a large number of total grafts has a beneficial effect on survival in patients with renal insufficiency, irrespective of utilizing extracorporeal bypass. abstract_id: PUBMED:17258568 Coronary artery bypass grafting with or without cardiopulmonary bypass in patients with preoperative non-dialysis dependent renal insufficiency: a randomized study. Objective: Preoperative renal insufficiency is a predictor of acute renal failure in patients undergoing coronary artery revascularization with cardiopulmonary bypass. Off-pump coronary artery bypass grafting has been shown to be less deleterious than on-pump bypass in patients with normal renal function, but the effect of this technique in patients with non-dialysis dependent renal insufficiency in a randomized study is unknown. Methods: From August 2004 through October 2005, 116 consecutive patients with preoperative non-dialysis-dependent renal insufficiency (glomerular filtration rate measured using the Modification of Diet in Renal Disease equation [MDRD GFR] ≤60 mL/min/1.73 m²) undergoing primary coronary artery bypass grafting were randomized to on-pump (n = 60) and off-pump (n = 56) groups. MDRD GFR and serum creatinine levels were measured preoperatively and postoperatively at days 1 and 5. The changes in renal function and clinical outcomes were compared between the two groups. Results: Preoperative characteristics were comparable between the two groups. The repeated-measures analysis of variance was performed on the data that showed worsening of renal function in the on-pump group compared with the off-pump group (serum creatinine, P < .000; glomerular filtration rate, P < .000). Further analysis of subgroups of patients with diabetes alone, hypertension alone, and combined hypertension and diabetes also showed significant deterioration of renal function in the on-pump group compared with the off-pump group. In covariate analysis, diabetes has emerged as a significant covariate by serum creatinine criteria while compromised left ventricular function has emerged as a significant covariate by glomerular filtration rate criteria. These analyses showed that the use of cardiopulmonary bypass is significantly associated with adverse renal outcome (P < .000). Three patients required hemodialysis in the on-pump group and none in the off-pump group. The mean number of grafts per patient was 3.85 +/- 0.86 and 3.11 +/- 0.89 in the on-pump and off-pump groups, respectively (P < .001), but the indices of completeness of revascularization, 1.00 +/- 0.08 for off-pump coronary bypass and 1.01 +/- 0.08 for on-pump coronary bypass, were similar (P = .60).
Conclusions: This study suggests that on-pump as compared with off-pump coronary artery bypass grafting is more deleterious to renal function in diabetic patients with non-dialysis dependent renal insufficiency. MDRD GFR is a more sensitive investigation than serum creatinine levels to assess renal insufficiency in patients undergoing coronary bypass. abstract_id: PUBMED:30236310 Long-Term Outcomes After Off-Pump Versus On-Pump Coronary Artery Bypass Grafting by Experienced Surgeons. Background: Long-term benefits of off-pump versus on-pump coronary artery bypass grafting (CABG) are controversial. Objectives: The authors sought to compare long-term survival and morbidity after on-pump versus off-pump CABG. Methods: Mandatory clinical and administrative registries from New Jersey Department of Health were linked to identify patients who underwent CABG between 2005 and 2011, by surgeons who had performed at least 100 off-pump or on-pump CABG operations. Survival, stroke, myocardial infarction, repeat revascularization, and new dialysis requirement were compared using Cox modeling, propensity scores, and instrumental variable analysis. Median follow-up was 6.8 years (range: 0 to 11.0 years); last follow-up date was December 31, 2015. Results: Among 42,570 CABG patients, 6,950 who underwent off-pump CABG and 15,295 who underwent on-pump CABG met study criteria. Off-pump CABG was associated with higher mortality (33.4% vs. 29.6% at 10 years; hazard ratio [HR]: 1.11; 95% confidence interval [CI]: 1.04 to 1.18; p = 0.002) compared with on-pump CABG. Off-pump CABG was associated with a higher risk of incomplete revascularization (15.7% vs. 8.8%; p < 0.001), which was a predictor of late mortality (HR: 1.10; 95% CI: 1.03 to 1.17; p = 0.006); and higher rates of repeat revascularization (15.4% vs. 14.0% at 10 years; HR: 1.17; 95% CI: 1.01 to 1.37; p = 0.048). There were no significant differences in the rate of stroke, myocardial infarction, or new dialysis. Conclusions: In this mandatory clinical registry, off-pump was associated with increased incomplete revascularization, repeat revascularization, and mortality at 10 years compared with on-pump CABG, suggesting that on-pump CABG may be the appropriate choice for most patients undergoing surgical revascularization. abstract_id: PUBMED:17703615 Coronary artery bypass grafting in patients with dialysis-dependent renal failure. From January 1995 to May 2003, 36 patients with dialysis-dependent renal failure underwent coronary artery bypass grafting. We performed the operation with cardiopulmonary bypass (group On) in 17 cases and without cardiopulmonary bypass (group Off) in 19 patients [off-pump coronary artery bypass grafting (OPCAB) 15, minimally invasive direct coronary artery bypass (MIDCAB) 4]. There were no statistical differences regarding mean age, sex, duration of dialysis, preoperative hypertension, diabetes and peripheral and cerebral vascular diseases. Mean operation time and the number of bypass grafts were 315 +/- 53 minutes, 2.8 +/- 0.8 grafts in group On and 284 +/- 78 minutes, 2.4 +/- 1.1 grafts in group Off, respectively (not significant). Seventeen patients (100%) of group On and 12 patients (63%) of group Off needed blood transfusion. Hospital stay after operation was significantly longer in group On (40 days) than in group Off (26 days). After the operation, continuous hemodiafiltration (CHDF) was used in 10 cases (59%) in group On and 3 cases (16%) in group Off.
In coronary artery bypass grafting (CABG) for dialysis patients, it is very useful to have various operative techniques available, such as off-pump bypass and on-pump beating-heart bypass. Control of water-electrolyte balance using early postoperative CHDF is also helpful. However, off-pump cases could be managed with conventional hemodialysis. abstract_id: PUBMED:31140548 Effects of cardiopulmonary bypass on dialysis-dependent patients. Background: End-stage renal disease is considered an independent risk factor for early and late survival after coronary artery bypass grafting. Methods: We retrospectively analysed patients with dialysis-dependent renal insufficiency who had undergone coronary artery bypass surgery between 2010 and 2017. Patients who were operated with the assistance of cardiopulmonary bypass (ONCAB) were in group 1 and those operated with off-pump coronary artery bypass surgery (OPCAB) were in group 2. We compared peri-operative morbidity and mortality rates and short-term results of the two groups. Results: There were 74 patients in group 1 and 36 in group 2. Blood transfusion requirement, drainage, need for intra-aortic balloon pump and duration of stay in intensive care unit were statistically significantly higher in group 1 (p < 0.05). Also, postoperative creatine kinase (CK) and creatine kinase-muscle/brain (CKMB) values were statistically significantly higher in group 1 (p = 0.003). Conclusions: Coronary artery bypass grafting under ONCAB was a potential risk for morbidity and mortality in patients with end-stage renal disease. Performing OPCAB surgery may improve postoperative outcomes and should be kept in mind as a surgical option. abstract_id: PUBMED:17768577 Off-pump total myocardial revascularization in patients with left ventricular dysfunction. Objective: To assess off-pump myocardial revascularization in patients with significant left ventricular dysfunction. Methods: Four hundred and five patients with an ejection fraction less than 35% underwent myocardial revascularization without extracorporeal circulation. The procedure was performed with the aid of a suction stabilizer and the LIMA stitch. The distal anastomoses were performed first. Results: A total of 405 patients were evaluated whose mean age was 63.4 +/- 9.78 years. Two hundred and seventy-nine patients were men (68.8%). With regard to risk factors, 347 patients were hypertensive, 194 were smokers, 202 were dyslipidemic, and 134 had diabetes. Two hundred and sixty patients were classified as NYHA functional class III and IV. Twenty patients suffered from chronic renal disease and were under dialysis. Fifty-one underwent emergency surgery, and 33 had been previously operated on. The mean ejection fraction was 27.2 +/- 3.54%. The mean EuroSCORE was 8.46 +/- 4.41. The mean number of anastomoses performed was 3.03 +/- 1.54 per patient. Forty-nine patients (12%) needed an intra-aortic balloon inserted after induction of anesthesia, whereas 73 (18%) needed inotropic support during the perioperative period. As to complications, 2 patients (0.49%) had renal failure, 2 had mediastinitis (0.49%), 7 (1.7%) needed to be reoperated because of bleeding, 5 patients (1.2%) suffered acute myocardial infarction, and 70 patients (17.3%) experienced atrial fibrillation. Eighteen (4.4%) patients died.
Conclusion: Based on the data above, we concluded that myocardial revascularization without extracorporeal circulation in patients with left ventricular dysfunction is a safe and effective technique, and an alternative for high-risk patients. Results obtained were better than those predicted by EuroSCORE. abstract_id: PUBMED:21962259 Impact of preoperative renal dysfunction on outcomes of off-pump coronary artery bypass grafting. Background: This study assessed whether preoperative renal insufficiency predisposes patients undergoing off-pump coronary artery revascularization to postoperative dialysis. Methods: From August 2004 through June 2009, 2,275 patients undergoing off-pump coronary artery bypass were categorized into five groups (stages) by glomerular filtration rate (GFR). Of these, 1,855 patients had renal insufficiency (stage 2: 1,406; stage 3: 428; stage 4: 21), and 414 had normal renal function (stage 1). Excluded were 6 patients with end-stage renal disease (stage 5). Preoperative variables and postoperative outcomes were compared among groups. Results: Preoperative patient characteristics were similar; however, patients with normal renal function were younger (p = 0.001). Serum creatinine rose significantly above baseline on the first postoperative day in the renal insufficiency groups (p = 0.001). The GFR groups had similar inotrope use, reexploration rate, duration of postoperative mechanical ventilation, postoperative stroke, wound infection, and mortality rate. Stage 4 patients had a higher incidence of postoperative myocardial infarction (p = 0.002). Stage 3 and 4 patients had an increased need for postoperative dialysis vs stage 1 patients (p = 0.002). Conclusions: Nonparametric contingency analysis showed patients with low preoperative GFR (stage 3 and 4, p < 0.0001) and a history of smoking (p = 0.04) were at increased risk for postoperative dialysis. Patients who required postoperative inotropic support tended toward requiring postoperative dialysis (p = 0.06). Low preoperative ejection fraction (p = 0.83), class III or IV angina (p = 0.069), and postoperative blood transfusions were not associated with the need for postoperative dialysis in patients undergoing off-pump revascularization. abstract_id: PUBMED:20200622 Revascularization options in patients with chronic kidney disease. Cardiovascular disease is the leading cause of death in patients who have chronic kidney disease or end-stage renal disease and are undergoing hemodialysis. Chronic kidney disease is a recognized risk factor for premature atherosclerosis. Unfortunately, most major randomized clinical trials that form the basis for evidence-based use of revascularization procedures exclude patients who have renal insufficiency. Retrospective, observational studies suggest that patients with end-stage renal disease and severe coronary occlusive disease have a lower risk of death if they undergo coronary revascularization rather than medical therapy alone. Due to a lack of prospective studies, however, the relative merits of percutaneous versus surgical revascularization are merely a matter of opinion. Several small, retrospective studies have shown that coronary artery bypass grafting is associated with higher procedural death but better long-term survival than is percutaneous coronary intervention.
This difference appears to result from poor long-term results of percutaneous coronary intervention in patients who have chronic kidney disease or end-stage renal disease. Because randomized trials comparing percutaneous coronary intervention and coronary artery bypass grafting have included patients undergoing balloon angioplasty and placement of bare-metal stents, their conclusions are suspect in the era of drug-eluting stents. In this review, we discuss different revascularization options for patients with chronic kidney disease, the outcomes of revascularization procedures, and the risk factors for adverse outcomes. abstract_id: PUBMED:28433995 Effect of Prophylactic Dialysis on Morbidity and Mortality in Non-Dialysis-Dependent Patients after Coronary Artery Bypass Grafting: A Pilot Study. Background/aims: Coronary artery bypass grafting (CABG) is associated with an increased risk of morbidity and mortality in patients with pre-existing renal dysfunction. Numerous measures have been implemented to overcome this problem; however, no improvement in outcomes has been achieved. This study was aimed at investigating the effects of prophylactic dialysis on mortality and morbidity in these patients. Methods: This randomized-controlled clinical trial enrolled 88 non-dialysis-dependent patients with chronic kidney disease awaiting CABG surgery. Thirty-nine randomly selected patients received dialysis 3 times prior to surgery, and 49 patients formed the control group. Kaplan-Meier analysis and Cox proportional-hazards models were used to identify factors associated with survival. Results: There was no significant difference in the development of morbidities between the groups (p = 0.413). A significant difference was evident in the average survival time (p = 0.037). Cox proportional-hazards models determined that the hazard ratio of death after surgery was 10.854-fold greater in non-dialysis patients than in patients who received dialysis (hazard ratio = 2). Conclusion: Prophylactic dialysis prior to CABG decreases mortality, but does not affect morbidity, in patients with renal insufficiency. Answer: The evidence from the provided abstracts suggests that off-pump revascularization may be associated with better outcomes for patients with non-dialysis-dependent renal insufficiency compared to on-pump revascularization. Several studies have indicated that off-pump coronary artery bypass grafting (CABG) can lead to a reduction in postoperative renal failure and the need for new dialysis, as well as improved in-hospital and midterm survival (PUBMED:15138092). Additionally, a randomized study showed that off-pump CABG was less deleterious to renal function compared to on-pump CABG, particularly in diabetic patients with non-dialysis dependent renal insufficiency (PUBMED:17258568). However, it is important to note that the benefits of off-pump versus on-pump CABG in this patient population are not universally agreed upon. One study found that off-pump CABG did not decrease the number of bypass grafts or influence mid-term results for patients with renal insufficiency compared to on-pump myocardial revascularization (PUBMED:22263182). Another study suggested that off-pump CABG was associated with higher mortality and higher rates of incomplete revascularization and repeat revascularization at 10 years compared with on-pump CABG (PUBMED:30236310).
Despite these conflicting findings, the overall trend in the presented abstracts leans towards off-pump CABG being a safer option for patients with non-dialysis-dependent renal insufficiency, as it appears to be associated with less renal injury postoperatively (PUBMED:15138092; PUBMED:17258568; PUBMED:21962259). It is also suggested that off-pump surgery may improve postoperative outcomes in patients with end-stage renal disease (PUBMED:31140548). In conclusion, while there is evidence to support the use of off-pump revascularization in patients with non-dialysis-dependent renal insufficiency, the decision should be individualized based on the patient's overall health status, the surgeon's experience, and other relevant clinical factors. Further research, particularly large-scale randomized controlled trials, would be beneficial to provide more definitive guidance on this issue.
Instruction: Is there any interest to dose the azathioprine's metabolites during inflammatory bowel diseases? Abstracts: abstract_id: PUBMED:24225042 Is there any interest to dose the azathioprine's metabolites during inflammatory bowel diseases? Objective: The objective of our work was to determine whether there is a relation between azathioprine's metabolites (6-thioguanine nucleotides and 6-methylmercaptopurine) and the clinical efficacy and adverse effects of azathioprine in an inflammatory bowel disease population. Method: We included patients with Crohn's disease or ulcerative colitis (UC) treated with azathioprine for more than 1 year. Each patient had a measurement of azathioprine metabolites. Results: We included 43 Crohn's disease patients and 7 UC patients. Azathioprine was indicated for steroid dependency in 23 cases, to prevent post-operative recurrence in 10 cases, and to maintain clinical remission obtained by medical treatment in 17 patients. A clinical response to azathioprine (achievement of remission, absence of recurrence during follow-up) was observed in 34 patients. Conclusion: Our work confirms the relation between the levels of azathioprine metabolites and the myelotoxicity due to this molecule. abstract_id: PUBMED:34056527 Real-World Use of Azathioprine Metabolites Changes Clinical Management of Inflammatory Bowel Disease. Background: Thiopurines such as 6-mercaptopurine and azathioprine have complex metabolism, resulting in significant inter-individual differences in clinical efficacy and risk of drug toxicity, making conventional weight-based dosing inaccurate and potentially unsafe. Therapeutic drug monitoring (TDM) of thiopurine metabolites improves clinical outcomes through dose optimization and toxicity monitoring. Despite evidence for TDM, use is limited, due in part to test availability and awareness. The objectives of this study were twofold: (1) to investigate how thiopurine TDM impacts clinical management of IBD patients and (2) to evaluate the proportion of patients outside therapeutic 6TGN levels or exhibiting signs of toxicity. Methods: Patients who received thiopurine TDM as part of routine care underwent chart review of demographics, disease activity, medication dosing, metabolite levels, and adverse events. Changes in clinical management following TDM were measured. Additionally, we conducted a retrospective review of clinical decision making blinded and unblinded to the TDM result. Results: A total of 92 IBD patients were included. Levels of 6TGN were therapeutic in 29% of patients. 6TGN levels correlated weakly with weight-based dosing (r² = 0.057, P = 0.02). Adverse reactions were observed in 6.5%. TDM informed clinical management in 64%. Significantly more changes to clinical management occurred in those with active disease than in remission (73% versus 48%; P = 0.02) and in those on mono- versus combination therapy (48% versus 27.5%; P = 0.03). Conclusions: TDM informs clinical decision making in over two-thirds of patients. The demonstrated poor efficacy of weight-based dosing and impact of TDM on clinical management contributes to the evidence supporting the need for greater availability and uptake of thiopurine TDM. abstract_id: PUBMED:23787247 Deletion of glutathione-s-transferase m1 reduces azathioprine metabolite concentrations in young patients with inflammatory bowel disease.
Goals: To investigate, in young patients with inflammatory bowel disease (IBD) treated with azathioprine, the association between genetic polymorphisms of thiopurine-S-methyl-transferase (TPMT), inosine-triphosphate-pyrophosphatase (ITPA), and glutathione-S-transferases (GST), involved in azathioprine metabolism, the concentration of the main metabolites of azathioprine, thioguanine nucleotides (TGNs) and the methylated nucleotides (MMPN), and the dose of the medication. Background: Azathioprine is widely used in IBD as an immunosuppressive agent, particularly to maintain remission in patients with steroid refractory disease. Azathioprine is a prodrug and requires conversion to its active form mercaptopurine, which has no intrinsic activity, and is activated by the enzymes of the purine salvage pathway to TGNs. Polymorphisms in genes of enzymes involved in azathioprine metabolism influence the efficacy and toxicity of treatment. Study: Seventy-five young patients with IBD treated with azathioprine for at least 3 months were enrolled and genotyped for the selected genes; for these patients, TGN and MMPN metabolites were measured by high performance liquid chromatography in erythrocytes. Results: GST-M1 deletion was associated with lower TGN/dose ratio (P=0.0030), higher azathioprine dose requirement (P=0.022), and reduced response to therapy (P=0.0022). TPMT variant genotype was associated with lower MMPN concentration (P=0.0064) and increased TGN/dose ratio (P=0.0035). ITPA C94A polymorphism resulted in an increased MMPN concentration (P=0.037). Conclusions: This study describes the effect of candidate genetic polymorphisms in TPMT, ITPA, and GST-M1 on azathioprine pharmacokinetics in IBD patients, showing, for the first time, relevant effects of GST-M1 genotype on azathioprine metabolite concentrations. abstract_id: PUBMED:29370945 Usefulness of thiopurine methyltransferase polymorphism study and metabolites measurement for patients treated by azathioprine. Azathioprine is widely used in internal medicine and frequently implicated in the occurrence of adverse events. Among these adverse events, bone marrow suppression, which is dose-related, is the most serious because of its potential morbidity and mortality. Severe myelosuppression, associated with abnormal AZA metabolism, is linked to the thiopurine methyltransferase (TPMT) genetic polymorphism, which results in a high variability of its activity: 89% of patients have normal activity, 11% intermediate activity, and 0.3% very low activity, the latter leading to a very high risk of bone marrow suppression. TPMT status can be assessed prior to AZA treatment by measuring enzyme activity or by genotyping, in order to identify patients for whom the standard dose is not advisable. Furthermore, azathioprine metabolite monitoring is helpful for the follow-up of patients, especially in therapeutic failure, to distinguish non-compliant patients from under-dosed patients, "shunters", or resistant patients. abstract_id: PUBMED:23503453 Relationship between azathioprine dosage and thiopurine metabolites in pediatric IBD patients: identification of covariables using multilevel analysis. Background: Previous studies have reported no or only a very poor correlation between 6-methylmercaptopurine/6-thioguanine nucleotide (6-MeMPN/6-TGN) and azathioprine (AZA) dose in the treatment of inflammatory bowel disease (IBD). However, metabolite levels are often repeatedly measured yielding a hierarchical data structure that requires more appropriate data analysis.
Methods: This study explored the relationship between the weight-based dosage of AZA and metabolite levels in 86 pediatric IBD patients using multilevel analysis. Other covariates related to patient characteristics and treatment were evaluated. Results: This is the first study to demonstrate positive correlations between AZA dose and 6-TGN and 6-MeMPN levels and 6-MeMPN/6-TGN ratio (P < 0.001) in IBD children. Other novel predictors of metabolites were reported. Younger children exhibited lower 6-TGN and 6-MeMPN levels, probably suggesting age-related differences in metabolism and/or absorption of thiopurines. Coadministration of infliximab resulted in a significant increase in 6-TGN levels (P = 0.023). Moreover, alanine aminotransferase values positively correlated with 6-MeMPN levels (P = 0.032). The duration of AZA therapy, gender, and thiopurine methyltransferase activity were associated with metabolite levels. The wide interindividual variability in metabolite levels, which accounted for 67.7%, 48.6%, and 49.4% of the variance in the 6-TGN and 6-MeMPN levels and the ratio, respectively, was confirmed. Conclusions: The reliable AZA dose-metabolite relationship is useful for clinicians to guide the dosing regimen to maximize clinical response and minimize side effects or to consider alternative therapies when patients have preferential production of the toxic 6-MeMPN. These results may be of potential interest for optimizing thiopurine therapy to achieve safe and efficacious AZA use in pediatric IBD patients. abstract_id: PUBMED:25048487 Monitoring thiopurine metabolites in Korean pediatric patients with inflammatory bowel disease. Purpose: This study aimed to assess the role of thiopurine S-methyltransferase (TPMT) and 6-thioguanine nucleotide (6-TGN) as predictors of clinical response and side effects to azathioprine (AZA), and estimate the optimal AZA dose in Korean pediatric inflammatory bowel disease (IBD) patients. Materials And Methods: One hundred and nine pediatric IBD patients in whom AZA treatment was required were enrolled. Thiopurine metabolites were monitored from September 2010. Among them, 83 patients who had been prescribed AZA for at least 3 months prior to September 2010 were enrolled and followed until October 2011 to evaluate optimal AZA dose, adverse effects and disease activity before and after thiopurine metabolite monitoring. Results: The result of TPMT genotyping was that 102 patients were *1/*1 (wild type), four were *1/*3C, one was *1/*6, one was *1/*16 (heterozygote) and one was *3C/*3C (homozygote). Adverse effects occurred in 31 patients pre-metabolite monitoring and in only nine patients post-metabolite monitoring. AZA dose was 1.4±0.31 mg/kg/day before monitoring and 1.1±0.46 mg/kg/day after monitoring (p<0.001). However, there were no statistical differences in disease activity during the metabolite monitoring period (p=0.34). Adverse effects noticeably decreased following the reduction of the AZA dose after monitoring began. Conclusion: It could be helpful to examine TPMT genotypes before administering AZA and to measure 6-TGN concentrations while prescribing AZA in IBD patients. abstract_id: PUBMED:30987408 Azathioprine Biotransformation in Young Patients with Inflammatory Bowel Disease: Contribution of Glutathione-S Transferase M1 and A1 Variants.
The contribution of candidate genetic variants involved in azathioprine biotransformation to azathioprine efficacy and pharmacokinetics in 111 young patients with inflammatory bowel disease was evaluated. Azathioprine doses, the metabolites thioguanine nucleotides (TGN) and methylmercaptopurine nucleotides (MMPN), and clinical effects were assessed after at least 3 months of therapy. Clinical efficacy was defined as a disease activity score below 10. Candidate genetic variants (TPMT rs1142345, rs1800460, rs1800462, GSTA1 rs3957357, GSTM1, and GSTT1 deletion) were determined by polymerase chain reaction (PCR) assays and pyrosequencing. Statistical analysis was performed using linear mixed effects models for the association between the candidate variants and the pharmacological variables (azathioprine doses and metabolites). Azathioprine metabolites were measured in 257 samples (median 2 per patient, inter-quartile range IQR 1-3). Clinical efficacy at the first available evaluation was better in ulcerative colitis than in Crohn's disease patients (88.0% versus 52.5% responders, p = 0.0003, linear mixed effect model, LME). TGN concentration and the ratio TGN/dose at the first evaluation were significantly higher in responders. TPMT rs1142345 variant (4.8% of patients) was associated with increased TGN (LME p = 0.0042), TGN/dose ratio (LME p < 0.0001), decreased azathioprine dose (LME p = 0.0087), and MMPN (LME p = 0.0011). GSTM1 deletion (58.1% of patients) was associated with an 18.5% decrease in the TGN/dose ratio and a 30% decrease in clinical efficacy. GSTA1 variant (12.8% of patients) showed a trend (p = 0.049, LME) for an association with decreased clinical efficacy; however, no significant effect on azathioprine pharmacokinetics could be detected. In conclusion, GST variants are associated with azathioprine efficacy and pharmacokinetics. abstract_id: PUBMED:25928802 Role of oxidative stress mediated by glutathione-s-transferase in thiopurines' toxic effects. Azathioprine (AZA), 6-mercaptopurine (6-MP), and 6-thioguanine (6-TG) are antimetabolite drugs, widely used as immunosuppressants and anticancer agents. Despite their proven efficacy, a high incidence of toxic effects in patients during standard-dose therapy is recorded. The aim of this study is to explain, from a mechanistic point of view, the clinical evidence showing a significant role of glutathione-S-transferase (GST)-M1 genotype on AZA toxicity in inflammatory bowel disease patients. To this aim, the human nontumor IHH and HCEC cell lines were chosen as predictive models of the hepatic and intestinal tissues, respectively. AZA, but not 6-MP and 6-TG, induced a concentration-dependent superoxide anion production that seemed dependent on GSH depletion. N-Acetylcysteine reduced the AZA antiproliferative effect in both cell lines, and GST-M1 overexpression increased both superoxide anion production and cytotoxicity, especially in transfected HCEC cells. In this study, an in vitro model to study thiopurines' metabolism has been set up and helped us to demonstrate, for the first time, a clear role of GST-M1 in modulating AZA cytotoxicity, with a close dependency on superoxide anion production. These results provide the molecular basis to shed light on the clinical evidence suggesting a role of GST-M1 genotype in influencing the toxic effects of AZA treatment. abstract_id: PUBMED:30107940 Clinical experience of optimising thiopurine use through metabolite measurement in inflammatory bowel disease.
Introduction: Thiopurine therapy can be optimised by determining the concentration of the drug's metabolites. Patients And Methods: Retrospective analysis of a prospective database of 31 patients with inflammatory bowel disease who failed therapy with thiopurines. Thiopurine metabolites (6-thioguanine, 6-TGN and 6-methylmercaptopurine, 6-MMP) were measured by high-performance liquid chromatography (Laboratorios Cerba, Barcelona) and treatment was duly adjusted in accordance with the results. Clinical response was reassessed after six months. Results: Despite the appropriate theoretical dose of thiopurines being administered, the dose was insufficient in 45.6% of patients (nonadherence to treatment suspected in 6.45%) and 16.2% received an excessive dose or the drug was metabolised by other metabolic pathways. After treatment was optimised based on metabolite levels, only 25.8% (8/31) were prescribed a biological agent, while 74.2% of cases (23/31) were managed through dose optimisation alone. Discussion: Monitoring thiopurine metabolite levels may help clinicians to assess non-responsive patients before adding or switching to another drug (generally a biological agent), thereby avoiding any additional costs or potential toxicity. This strategy may also help to identify patients receiving an insufficient dose and those with an alternative metabolic pathway, who could be candidates for low-dose AZA with allopurinol, as well as patients who are suspected of being non-adherent. In three out of four patients, switching to a biological agent can be avoided. abstract_id: PUBMED:24707136 Pharmacogenetics of azathioprine in inflammatory bowel disease: a role for glutathione-S-transferase? Azathioprine is a purine antimetabolite drug commonly used to treat inflammatory bowel disease (IBD). In vivo it is active after reaction with reduced glutathione (GSH) and conversion to mercaptopurine. Although this reaction may occur spontaneously, the presence of isoforms M and A of the enzyme glutathione-S-transferase (GST) may increase its speed. Indeed, in pediatric patients with IBD, deletion of GST-M1, which determines reduced enzymatic activity, was recently associated with reduced sensitivity to azathioprine and reduced production of azathioprine active metabolites. In addition to increasing the activation of azathioprine to mercaptopurine, GSTs may also contribute to azathioprine effects by modulating GSH consumption, oxidative stress and apoptosis. Therefore, genetic polymorphisms in genes for GSTs may be useful to predict response to azathioprine even if more in vitro and clinical validation studies are needed. Answer: Yes, there is interest in dosing azathioprine's metabolites during the treatment of inflammatory bowel diseases (IBD). Monitoring the levels of azathioprine metabolites, such as 6-thioguanine nucleotides (6-TGN) and 6-methylmercaptopurine (6-MMP), can inform clinical management and optimize treatment outcomes for IBD patients. Several studies have demonstrated the utility of therapeutic drug monitoring (TDM) of azathioprine metabolites in IBD. For instance, one study found that TDM informed clinical management in 64% of patients, with significant changes in clinical management occurring more frequently in those with active disease and those on monotherapy compared to combination therapy (PUBMED:34056527). Another study confirmed the relationship between azathioprine metabolite levels and myelotoxicity, a serious adverse effect of the drug (PUBMED:24225042).
Genetic polymorphisms in enzymes involved in azathioprine metabolism, such as thiopurine-S-methyltransferase (TPMT), inosine-triphosphate-pyrophosphatase (ITPA), and glutathione-S-transferases (GST), can influence the efficacy and toxicity of treatment. For example, the deletion of GST-M1 was associated with lower TGN/dose ratios, higher azathioprine dose requirements, and reduced response to therapy (PUBMED:23787247; PUBMED:24707136). Additionally, TPMT genetic polymorphism is linked to variability in enzyme activity, which can lead to a high risk of bone marrow suppression in patients with low activity (PUBMED:29370945). Furthermore, the measurement of azathioprine metabolites can help distinguish between non-compliant patients, under-dosed patients, "shunters," or resistant patients, especially in cases of therapeutic failure (PUBMED:29370945). In pediatric IBD patients, a study demonstrated positive correlations between azathioprine dose and metabolite levels, with younger children exhibiting lower metabolite levels, suggesting age-related differences in metabolism or absorption (PUBMED:23503453). In conclusion, dosing azathioprine's metabolites during IBD treatment is of interest as it can guide clinicians in optimizing therapy to maximize clinical response, minimize side effects, and consider alternative therapies when necessary.
Instruction: Is a history of trauma associated with a reduced likelihood of cervical cancer screening? Abstracts: abstract_id: PUBMED:12401150 Is a history of trauma associated with a reduced likelihood of cervical cancer screening? Objective: We tested the hypothesis that a history of trauma (especially sexual trauma) was associated with a reduced likelihood of having had medically appropriate cervical cancer screening. Study Design: A case-control study using mailed self-report questionnaires. Population: The questionnaires were completed by an age-stratified random sample of adult women members of a large health maintenance organization. The sample included 364 women who had received medically appropriate cervical cancer screening and 372 who had not. Outcomes Measured: We defined cases as women who, according to their medical record, had not had cervical cancer screening within 2 years before the study. Controls were defined as women who had been screened. We evaluated exposures to trauma that we hypothesized to be associated with the case/control state. Results: Women who had been sexually abused in childhood were less likely to have had a Pap smear within the past 2 years (36.0% vs. 50.4%, P =.050). Other traumatic events were associated with Pap testing in bivariate analyses but not when demographic characteristics and clinic location were controlled. Childhood sexual abuse remained associated with reduced odds of Pap screening in logistic regression analyses that controlled for clinic location, demographics, attitudes about Pap screening, and posttraumatic stress disorder symptoms (adjusted OR = 0.56, 95% CI 0.34 to 0.91). Conclusions: These findings suggest that childhood sexual abuse may lead to decreased probability of screening for cervical cancer, potentially contributing to the poorer health seen in other studies of women who have been sexually abused. abstract_id: PUBMED:35049381 The Relationship Between Sexual Assault History and Cervical Cancer Screening Completion Among Women Veterans in the Veterans Health Administration. Background: Sexual assault affects one in three U.S. women and may have lifelong consequences for women's health, including potential barriers to completing cervical cancer screening and more than twofold higher cervical cancer risk. The objective of this study was to determine whether a history of sexual assault is associated with reduced cervical cancer screening completion among women Veterans. Materials and Methods: We analyzed data from a 2015 survey of women Veterans who use primary care or women's health services at 12 Veterans Health Administration facilities (VA's) in nine states. We linked survey responses with VA electronic health record data and used logistic regression to examine the association of lifetime sexual assault with cervical cancer screening completion within a guideline-concordant interval. Results: The sample included 1049 women, of whom 616 (58.7%) reported lifetime sexual assault. Women with a history of sexual assault were more likely to report a high level of distress related to pelvic examinations, and to report ever delaying a gynecologic examination due to distress. However, in the final adjusted model, lifetime sexual assault was not significantly associated with reduced odds of cervical cancer screening completion (OR 1.35, 95% CI 0.93-1.97). Conclusions: Contrary to our expectations, sexual assault was not significantly associated with gaps in cervical cancer screening completion. 
Three- to five-year screening intervals may provide sufficient time to complete screening, despite barriers. Trauma-sensitive care practices promoted in the VA may allow women to overcome the distress and discomfort of pelvic examinations to complete needed screening. ClinicalTrials.gov (#NCT02039856). abstract_id: PUBMED:34515127 Impact of Screening for Sexual Trauma in a Gynecologic Oncology Setting. Objectives: Sexual trauma poses a significant concern and is associated with heightened stress, negative health repercussions, and adverse economic effects. A history of abuse may increase a woman's risk of developing cancer, in particular cervical cancer. We analyzed the impact of screening for sexual abuse in a gynecologic oncology population. Methods: Patients were screened for sexual trauma in a gynecologic oncology clinic over 5 and a half years (April 1, 2011, to September 30, 2016) in this cohort study. The screening questions were selected by behavioral oncology physicians and integrated into the gynecologic history component of the new patient assessment. Patients who screened positive for a history of sexual abuse or intimate partner violence were offered a behavioral oncology referral. Providers were also questioned about the effect of screening on their practice. Results: Of the 1,423 consecutive patients screened for sexual trauma, a total of 164 patients (12%) disclosed a history of sexual abuse. Of the 133 patients who specified their age at the sexual trauma, the majority (107 [80%]) responded that they were a young child or early teen. Most patients (92%) declined counseling. Among individuals presenting with cancer, the distribution of cancer type was statistically different between those patients with and without a sexual trauma history (p = 0.0001). Conclusion: Screening for sexual trauma in a gynecologic oncologic population serves as a valuable opportunity to uncover a history of abuse that may increase a woman's susceptibility to cancer. This study demonstrates that screening for sexual abuse in a gynecologic oncology setting may be integrated into new patient interviews with minimal disruption. Identification of an undisclosed sexual trauma history allows for an opportunity to offer counseling and minimize the emotional distress that may be precipitated by treatment and exams. abstract_id: PUBMED:37874753 Adverse Childhood Experiences and Preventive Cervical Cancer Screening Behavior. Objectives: To examine associations between a history of adverse childhood experiences (ACEs) and receiving preventive cervical cancer screening and to investigate whether number and type of ACE exposures were predictive of cervical cancer screening uptake. Sample &amp; Setting: Data were from 11,042 adults who completed the 2020 Texas Behavioral Risk Factor Surveillance System survey. The U.S. Preventive Services Task Force guidelines were used to indicate whether individuals had received cervical cancer screening at recommended intervals. Methods &amp; Variables: Multiple logistic regression analysis was used to predict the likelihood of not having received the recommended preventive cancer screening by number and type of ACE exposures. Chi-square analysis was used to determine associations among demographic characteristics, cancer screening uptake, and ACE number and type. Results: Individuals with one to three ACEs and those with six or more ACEs were statistically more likely not to have received the recommended cervical cancer screenings compared to those with zero ACEs. 
A history of physical ACEs was associated with 3.88 times the likelihood of not having received the recommended cervical cancer screening. Implications For Nursing: To promote timely cervical cancer screening and prevent retraumatization of patients with a history of ACEs, providers should implement trauma-informed care principles in their healthcare settings. abstract_id: PUBMED:12040640 Cervical cytology screening in Vietnamese asylum seekers in a Hong Kong detention center. Population demographics and historical perspectives. Objective: To provide a brief review of the history and demographics of the Vietnamese asylum-seeking population in Hong Kong and their possible effects on the initiation of a cervical cytology screening program at a Hong Kong detention center. Study Design: Analysis of case histories, questionnaires and interviews with women in a detention center identified demographic features related to Pap smear history, knowledge of the Pap test, age at first intercourse and cigarette smoking status among women aged 17 years and over. Analysis of Pap smear uptake following initiation of a screening program was undertaken. Results: Of the 1,171 women in the detention center who were eligible for a Pap smear, 536 (45.8%) actually obtained one, although enrollment, which was strong at the initial offering, slowed considerably as the program progressed. None of the women had had a Pap smear prior to leaving Vietnam. Knowledge of the utility and risk status criteria for Pap testing was very limited. The majority (77.9%) of the subjects started sexual activity after age 20 years, and three (0.6%) smoked. There were four (0.2%) abnormal smears identified among those tested. Conclusion: Convincing evidence was obtained that the Pap test was not widely used in Vietnam among the asylum-seeking population, and its role in preventing cervical cancer was not well known to the women studied. The initial strong uptake of the Pap smear was not maintained. That may be attributable to psychosocial factors associated with detention under harsh conditions and trauma associated with fleeing Vietnam. abstract_id: PUBMED:22068042 A history of interpersonal trauma and the gynecological exam. Cervical cancer is preventable, in part, by routine Papanicolaou (Pap) testing, but some women avoid routine screening. African American women have the greatest mortality among all groups of women in the United States. Personal reasons have been found to contribute to screening avoidance behavior, such as a history of sexual abuse and intimate partner violence. Fifteen African American women with a trauma history participated in personal interviews. The Interaction Model of Client Behavior was employed for exploring the women's social influence, previous health care experience, cognitive appraisal, affective response, and motivation associated with routine Pap testing. Study findings suggest that providers need to assess and provide accurate information about Pap testing and cervical cancer to increase patients' knowledge. Personally reflecting on one's approach to conducting a woman's gynecologic exam (and how it is performed) might prevent triggering unwanted memories, making that visit a positive experience and facilitating repeat screening behavior. abstract_id: PUBMED:38334194 Transgender and non-binary peoples experiences of cervical cancer screening: A scoping review. 
Aim(s): To synthesise the literature about transgender and non-binary people's experiences of cervical cancer screening and identify ways to improve screening. Background: Transgender people often face barriers to accessing health services including cervical screening, where transgender people have a lower uptake than cisgender women. Design: A scoping review was undertaken following the Arksey and O'Malley (2005) framework and the PRISMA-ScR checklist. Following database searching of Medline via PubMed, Web of Science, Scopus and CINHAL, 23 papers published between 2008 and 2003 were included. Papers were included if they shared trans and non-binary people's experiences of cervical screening and were written in English. There were no date or geographical data restrictions due to the paucity of research. Results: Transgender people experience barriers to cervical screening including gender dysphoria, a history of sexual trauma, and mistrust in health professionals or health services, which can result in having negative experiences of screening or avoiding screening. Health professionals can help to create a positive experience by informing themselves about best practices for trans+ health. Conclusion: Changes are required to improve transgender people's experiences and uptake of cervical screening. Improving medical education about trans health and updating health systems would help to combat issues discussed. Implications For The Profession And/or Patient Care: Having an understanding of the reasons why accessing health services can be more difficult for transgender people will help health professionals to provide appropriate care for transgender patients. This paper details this in the context of cervical cancer screening and can be applied to other areas of healthcare. Reporting Method: We have adhered to relevant EQUATOR guidelines and used the PRISMA-ScR reporting method. No Patient or Public Contribution. abstract_id: PUBMED:1403996 The psychological costs of screening for cancer. The benefits of cancer screening programmes accrue to those who have cancer or identifiable precancerous conditions, and in whom the disease progression is slowed or halted by earlier intervention. The costs accrue to the rest of the population for whom there is no direct benefit to health. Attention has been given to the medical risks of screening procedures and to the economic costs, but there has been very little regard paid to the psychological costs. The aim of this paper is to evaluate the psychological impact of screening. Screening participants who are found to have untreatable disease, or for whom the interventions prove ineffective, have a greater proportion of their life as a cancer patient with all the associated psychological (and perhaps physical) distress, but no increase in their life expectancy. Those who receive false positive results may experience acute psychological distress produced by the prospect of a grave diagnosis before they are found to be free from serious disease. Even the procedure of screening itself, with the disturbance of the invitation, the discomfort of the tests and the wait for the diagnosis, can have a significant impact upon some patients. This paper evaluates the psychological costs which may be involved across the whole screening procedure, from the possible alarm of receiving an invitation to participate in screening, to the trauma of a cancer diagnosis for someone who had been unaware of any symptoms. 
abstract_id: PUBMED:35264120 Clinicians' perceptions of barriers to cervical cancer screening for women living with behavioral health conditions: a focus group study. Background: Women with behavioral health (BH) conditions (e.g., mental illness and substance abuse) receive fewer cervical cancer (CC) screenings, are diagnosed at more advanced cancer stages, and are less likely to receive specialized treatments. The aim of this study was to identify barriers that healthcare providers face in providing CC screening to women with BH conditions. Methods: Guided by the Consolidated Framework for Implementation Research, we conducted four focus groups in North Florida with 26 primary care and BH clinicians and staff to examine perceived barriers to CC screening among their patients with BH conditions to guide the future development of a tailored cervical cancer screening and follow-up intervention. Thematic analysis was used to analyze verbatim transcripts from audiotaped focus groups. Results: Three main themes of barriers emerged from the data: 1) BH condition-related barriers included a history of trauma, stigma and discrimination, and uncontrolled comorbid conditions, 2) System-level barriers related to lack of integration between BH and primary care, and 3) Similar barriers to the general population including lack of health insurance, insufficient processes to send out reminders, and challenges with communicating with patients. Conclusions: Tailored CC screening interventions that address the unique needs of women with BH conditions are needed. Strategies that address improving trust between patients and healthcare providers, identifying avenues to improve receipt of screening during time-limited clinical visits, connecting BH and primary care providers, and addressing the social determinants of health have potential to improve CC screening rates for women with BH conditions. abstract_id: PUBMED:27738961 Cervical Cancer Screening and Its Associated Factors Among North Korean Defectors Living in South Korea. North Korean defectors (NKD) have many health problems related to insufficient nutrition, trauma from escaping, and being exposed to infectious diseases, but little research exists on their cancer screening. A total of 638 NKD participated in this cross-sectional survey. South Korean natives (SKN) who participated in the Korean National Health and Nutrition Examination Survey V were selected using age matching to each NKD. Fisher's exact tests and logistic regression were used for data analysis. The cervical cancer screening rate of NKD was significantly lower than for SKN (42% and 70%, respectively; P < .001). The adjusted proportions of cervical cancer screening for NKD in all age groups under 60 years (P < .01) and having education beyond high school (P < .001) were significantly lower than that of SKN. NKD who had education under a high school level were more likely to have cervical cancer screening compared to NKD with education beyond a high school level (OR 1.81, 95% CI 1.12-2.93). NKD were less likely to receive appropriate cervical cancer screening compared to SKN, especially those aged 30-39 years or married. Tailored interventions for NKD are needed to improve cervical cancer screening compliance. Answer: Yes, a history of trauma is associated with a reduced likelihood of cervical cancer screening.
Several studies have found that women with a history of trauma, particularly sexual trauma, are less likely to have undergone cervical cancer screening within medically recommended intervals. For instance, women who had been sexually abused in childhood were found to be less likely to have had a Pap smear within the past two years (PUBMED:12401150). Additionally, individuals with a history of adverse childhood experiences (ACEs) were more likely not to have received the recommended cervical cancer screenings, with a history of physical ACEs being associated with 3.88 times the likelihood of not having received the recommended screening (PUBMED:37874753). However, it is important to note that the relationship between trauma and cervical cancer screening is complex and may be influenced by various factors, including the healthcare setting and the availability of trauma-sensitive care practices. For example, in the Veterans Health Administration, where trauma-sensitive care practices are promoted, a history of sexual assault was not significantly associated with gaps in cervical cancer screening completion (PUBMED:35049381). This suggests that when appropriate care practices are in place, women with a history of trauma may be able to overcome barriers to screening. Furthermore, screening for sexual trauma in a gynecologic oncology setting has been shown to be a valuable opportunity to uncover a history of abuse that may increase a woman's susceptibility to cancer, allowing for the opportunity to offer counseling and minimize emotional distress that may be precipitated by treatment and exams (PUBMED:34515127). In summary, while a history of trauma is associated with a reduced likelihood of cervical cancer screening, the implementation of trauma-informed care and sensitive practices can help mitigate this effect and support women with trauma histories in completing necessary screenings.
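Several of the associations summarized above are adjusted odds ratios from logistic regression (for example, the OR of 0.56 for childhood sexual abuse after controlling for clinic location, demographics, attitudes, and PTSD symptoms). The following is a minimal sketch, on synthetic data, of how such an adjusted OR is typically obtained; the variable names, effect sizes, and generated data are illustrative assumptions, not any study's actual dataset.

```python
# Minimal sketch, assuming synthetic data: estimating an adjusted odds ratio
# for "screened within the recommended interval" given an exposure
# (a hypothetical abuse-history indicator) while controlling for a covariate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
abuse_history = rng.binomial(1, 0.3, n)        # hypothetical exposure (0/1)
age = rng.normal(45, 12, n)                    # hypothetical covariate
# Simulate screening status with a built-in negative effect of the exposure.
linear_predictor = 0.5 - 0.6 * abuse_history + 0.01 * (age - 45)
screened = rng.binomial(1, 1.0 / (1.0 + np.exp(-linear_predictor)))

X = sm.add_constant(np.column_stack([abuse_history, age]))
fit = sm.Logit(screened, X).fit(disp=False)

odds_ratios = np.exp(fit.params)               # exponentiated coefficients
conf_int = np.exp(fit.conf_int())              # 95% confidence limits
print("adjusted OR for abuse history:", round(float(odds_ratios[1]), 2))
print("95% CI:", np.round(conf_int[1], 2))
```

Exponentiating the fitted coefficient for the exposure gives the adjusted odds ratio, and exponentiating its confidence bounds gives the interval reported in such abstracts.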
Instruction: Immigration factors and prostate cancer survival among Hispanic men in California: does neighborhood matter? Abstracts: abstract_id: PUBMED:24477988 Immigration factors and prostate cancer survival among Hispanic men in California: does neighborhood matter? Background: Hispanics are more likely than other racial/ethnic groups in the United States to be diagnosed with later-stage prostate cancer, yet they have lower prostate cancer mortality rates. The authors evaluated the impact of nativity and neighborhood-level Hispanic ethnic enclave on prostate cancer survival among Hispanics. Methods: A total of 35,427 Hispanic men diagnosed with invasive prostate cancer from 1995 through 2008 in the California Cancer Registry were studied; vital status data were available through 2010. Block group-level neighborhood measures were developed from US Census data. Stage-stratified Cox proportional hazards models were used to assess the effect of nativity and ethnic enclave on prostate cancer survival. Results: In models adjusted for neighborhood socioeconomic status and other individual factors, foreign-born Hispanics were found to have a significantly lower risk of death from prostate cancer (hazards ratio [HR], 0.81; 95% confidence interval [95% CI], 0.75-0.87). Living in an ethnic enclave appeared to modify this effect, with the survival advantage slightly more pronounced in the high ethnic enclave neighborhoods (HR, 0.78; 95% CI, 0.71-0.86) compared with low ethnic enclave neighborhoods (HR, 0.86; 95% CI, 0.76-0.98). Conclusions: Despite lower socioeconomic status, Hispanic immigrants have better survival after prostate cancer than US-born Hispanics and this pattern was more striking among those living in ethnic enclaves. Identifying the modifiable individual and neighborhood-level factors that facilitate this survival advantage in Hispanic immigrants may help to inform specific interventions to improve survival among all patients. abstract_id: PUBMED:37876384 Overall Survival and Associations of Insurance Status Among Hispanic Men With High-Risk Prostate Cancer. Objectives Our objectives were to (1) determine the association between ethnicity and high-risk prostate cancer (PCa) survival and (2) determine whether this association is modified by insurance status. Methods We performed a retrospective review of the National Cancer Database (NCDB) from 2004 to 2017 of non-Hispanic White (NHW), Hispanic White (HW), or Black men with high-risk PCa. A multivariate Cox regression model was built to test the association between overall survival (OS) and race/ethnicity, insurance status, and their interaction, controlling for various socioeconomic and disease-specific variables. Results A total of 94,708 men with high-risk PCa were included in the analysis. Both HW and Black men had lower socioeconomic status characteristics and lower rates of private insurance. Race/ethnicity was significantly associated with OS in the adjusted analysis. Only Medicare demonstrated significantly worse OS. NHW (covariate-adjusted hazard ratio (aHR): 1.83, 95% CI: 1.45-2.32) and Black (aHR: 1.71, 95% CI: 1.34-2.19) men demonstrated significantly worse survival when compared to HW men. Subgroup analysis demonstrated significant differences occurring among HW men with private insurance/managed care when compared to those not insured, Medicaid, Medicare, and other government insurance types. Conclusion Despite socioeconomic and demographic disadvantages, HW men demonstrate improved OS compared to NHW men.
Furthermore, HW men demonstrated improved OS compared to NHW men within nearly each insurance status type. This finding is likely the result of a complex multifactorial web and as such serves as an interesting hypothesis-generating study. abstract_id: PUBMED:35228665 Germline alterations among Hispanic men with prostate cancer. Background: Little is known about the true rate of pathogenic (P)/likely pathogenic (LP) germline alterations in Hispanic men with prostate cancer as most studies analyzing the prevalence of P/LP germline alterations were performed in a largely non-Hispanic white population (NHW). Methods: We performed a retrospective analysis of two separate cohorts of men with prostate cancer: (1) a multicenter cohort of 17,256 men who underwent germline testing in a CLIA-certified laboratory and (2) a single-center cohort of all men eligible for germline testing between 2018 and 2020. The proportions of P/LP alterations and variants of uncertain significance (VUS) were computed. Fisher's exact test was used to compare germline alteration rates for significance. A multivariate logistic regression was performed adjusting for demographic and clinical factors to examine factors associated with germline testing. Results: In the multicenter cohort, the rate of P/LP germline alterations among self-reported Hispanic men was 7.1%, which was lower than self-reported NHW men (9.7% vs. 7.1%, p = 0.058), but was not statistically significant. The VUS rate was significantly higher among the Hispanic cohort (21.5% vs. 16.6%, p = 0.005). In the single-center cohort, 136 Hispanic patients were eligible for testing of which 34 underwent germline testing (26.1%, N = 34/136). Of all prostate cancer patients in the single-center cohort undergoing germline testing (n = 173), the rate of P/LP alterations in Hispanic patients was not significantly different compared to NHW patients (14.7% vs. 12.2%, p = 0.77). The rate of VUS in Hispanic patients was significantly higher than that of NHW patients (20.6% vs. 7.2%, p = 0.047). Conclusion: The P/LP germline alteration rate in our cohorts was similar between Hispanic and NHW men. The rate of VUS was significantly higher in Hispanic men, a consequence of undertesting in minority populations. These data support that Hispanic men with prostate cancer should be screened for germline testing similar to NHW men. abstract_id: PUBMED:14532792 Clinical features and treatment outcome of Hispanic men with prostate cancer following external beam radiotherapy. Purpose: We retrospectively analyzed the clinical characteristics and outcomes of Hispanic men compared with other groups who underwent radiotherapy alone for localized or locally advanced prostate cancer. Materials And Methods: Between April 1987 and January 1998, 964 men who underwent full dose external beam radiotherapy alone for localized or locally advanced prostate cancer were included in the study. Patient medical records were reviewed for pertinent information. Results: Of the 964 men 810 were non-Hispanic white, 54 were Hispanic and 86 were black Americans. The most significant difference among the groups was in the proportion of patients who presented with initial prostate specific antigen (PSA) greater than 20 ng/ml (22% of Hispanic vs 11% of white men, p = 0.0012). In addition, 17% of Hispanic men had a Gleason score of 8 or greater compared with 11% of white men (p = 0.0265). 
A greater proportion of Hispanic patients also had a less favorable posttreatment PSA nadir of greater than 1 ng/ml compared with white patients (44% vs 26%, p = 0.0214), which may have translated into a trend toward a lower 5-year disease-free survival rate in Hispanics vs white men (52% vs 65%, p = 0.07). Conclusions: Hispanic men presented with higher PSA and higher grade prostate cancer than white men. Furthermore, a higher percent of Hispanic men had a PSA nadir of 1 ng/ml or greater after radiotherapy, which may have been responsible for their trend toward a decreased 5-year disease-free survival rate compared with white men. Improved screening and early detection may improve disease-free survival in Hispanic men with localized prostate cancer. abstract_id: PUBMED:21805813 Colorectal, prostate, and skin cancer screening among Hispanic and White non-Hispanic men, 2000-2005. Background: Hispanic men have lower colorectal, prostate, and skin cancer screening rates than white non-Hispanic men. Programs designed to increase screening rates, including the national Screen for Life campaign specifically for promoting colorectal cancer (CRC) screening, regional educational/research programs, and state cancer control programs, have been launched. Screen for Life and some intervention programs included educational materials in Spanish as well as English. Objective: To assess whether CRC as well as prostate and skin cancer screening rates among Hispanic and white non-Hispanic men changed between 2000 and 2005. Methods: Cancer screening rates were compared between 2000 and 2005 using the National Health Interview Survey data. The age ranges of the study subjects and definitions of cancer screening were site specific and based on the American Cancer Society recommendations. Results: Hispanic men were less likely to comply with cancer screening guidelines than white non-Hispanic men. However, significant increases in CRC endoscopic screening were observed in both ethnic groups. It increased 2.1-fold and 2.4-fold for Hispanics and white non-Hispanics, respectively (P < .05). In contrast, the use of home fecal occult blood tests decreased among white non-Hispanics but remained similar among Hispanics. Prostate-specific antigen screening remained stable, while the use of skin cancer screening tended to increase among both groups. Conclusion: Although cancer screening rates may be affected by multiple factors, our study suggested that intervention programs such as the Centers for Disease Control and Prevention's national Screen for Life campaign may have raised CRC screening awareness and may have contributed to the increase in endoscopic screening rates among both ethnic groups. abstract_id: PUBMED:17701951 Differences in prognostic factors and survival among white and Asian men with prostate cancer, California, 1995-2004. Background: There are very limited data concerning survival from prostate cancer among Asian subgroups living in the U.S., a large proportion of whom reside in California. There do not appear to be any published data on prostate cancer survival for the more recently immigrated Asian subgroups (Korean, South Asian [SA], and Vietnamese). Methods: A study of prognostic factors and survival from prostate cancer was conducted in non-Hispanic whites and 6 Asian subgroups (Chinese, Filipino, Japanese, Korean, SA, and Vietnamese), using data from all men in California diagnosed with incident prostate cancer during 1995-2004 and followed through 2004 (n = 116,916).
Survival was analyzed using Cox proportional hazards models. Results: Whites and Asians demonstrated significant racial differences in all prognostic factors: age, summary stage, primary treatment, histologic grade, socioeconomic status, and year of diagnosis. Every Asian subgroup had a risk factor profile that put them at a survival disadvantage compared with whites. Overall, the 10-year risk of death from prostate cancer was 11.9%. However, in unadjusted analyses Japanese men had significantly better survival than whites; Chinese, Filipino, Korean, and Vietnamese men had statistically equal survival; and SA men had significantly lower survival. On multivariate analyses adjusting for all prognostic factors, all subgroups except SA and Vietnamese men had significantly better survival than whites; the latter 2 groups had statistically equal survival. Conclusions: Traditional prognostic factors for survival from prostate cancer do not explain why most Asian men have better survival compared with whites, but they do explain the poorer survival of SA men compared with whites. abstract_id: PUBMED:37301735 Prostate cancer genetic alterations in Hispanic men. Background: Differences in DNA alterations in prostate cancer among White, Black, and Asian men have been widely described. This is the first description of the frequency of DNA alterations in primary and metastatic prostate cancer samples of self-reported Hispanic men. Methods: We utilized targeted next-generation sequencing tumor genomic profiles from prostate cancer tissues that underwent clinical sequencing at academic centers (GENIE 11th). We decided to restrict our analysis to the samples from Memorial Sloan Kettering Cancer Center as it was by far the main contributor of Hispanic samples. The numbers of men by self-reported ethnicity and racial categories were analyzed via Fisher's exact test between Hispanic-White versus non-Hispanic White. Results And Limitations: Our cohort consisted of 1412 primary and 818 metastatic adenocarcinomas. In primary adenocarcinomas, TMPRSS2 and ERG gene alterations were less common in non-Hispanic White men than Hispanic White (31.86% vs. 51.28%, p = 0.0007, odds ratio [OR] = 0.44 [0.27-0.72] and 25.34% vs. 42.31%, p = 0.002, OR = 0.46 [0.28-0.76]). In metastatic tumors, KRAS and CCNE1 alterations were less prevalent in non-Hispanic White men (1.03% vs. 7.50%, p = 0.014, OR = 0.13 [0.03, 0.78] and 1.29% vs. 10.00%, p = 0.003, OR = 0.12 [0.03, 0.54]). No significant differences were found in actionable alterations and androgen receptor mutations between the groups. Due to the lack of clinical characteristics and genetic ancestry in this dataset, correlation with these could not be explored. Conclusion: DNA alteration frequencies in primary and metastatic prostate cancer tumors differ among Hispanic-White and non-Hispanic White men. Notably, we found no significant differences in the prevalence of actionable genetic alterations between the groups, suggesting that a significant number of Hispanic men could benefit from the development of targeted therapies. abstract_id: PUBMED:16475208 Racial disparity and socioeconomic status in association with survival in older men with local/regional stage prostate carcinoma: findings from a large community-based cohort. 
Background: Few studies have examined outcomes for Hispanic men with prostate carcinoma and incorporated socioeconomic factors in association with race/ethnicity when assessing survival, adjusting for cancer stage, grade, comorbidity, and treatment. Methods: We studied a population-based cohort of 61,228 men diagnosed with local or regional stage prostate carcinoma at age 65 years or older between 1992 and 1999 in the 11 SEER (Surveillance, Epidemiology, and End Results) areas, identified from the SEER-Medicare linked data with up to 11 years of follow-up. Results: Low socioeconomic status was significantly associated with decreasing survival in all men with prostate carcinoma. Those living in communities in the lowest quartile of socioeconomic status were 31% more likely to die than those living in the highest quartile (hazard ratio [HR] of all-cause mortality: 1.31; 95% confidence interval [CI]: 1.25-1.36) after adjustment for patient age, comorbidity, Gleason score, and treatment. The HR remained almost unchanged after controlling for race/ethnicity (HR: 1.32; 95% CI: 1.26-1.38). Compared with Caucasians, the risk of mortality in African American men was marginally significantly higher (HR: 1.06; 95% CI: 1.01-1.11) after controlling for education, and no longer significant after adjusting for poverty, income, or a composite socioeconomic variable; the HR was lower for Hispanic men (HR: 0.80; 95% CI: 0.72-0.89) after adjustment for education and other socioeconomic variables. Conclusion: Racial disparity in survival among men with local or regional prostate carcinoma was largely explained by socioeconomic status and other factors. Lower socioeconomic status appeared to be one of the major barriers to achieving comparable outcomes for men with prostate carcinoma. abstract_id: PUBMED:19107943 Risk of prostate cancer among Swedish-born and foreign-born men in Sweden, 1961-2004. To elucidate the importance of environmental and genetic factors in prostate cancer etiology, we compared the risk of prostate cancer among foreign-born men to that of Swedish-born men in Sweden and to that in the country of origin. We estimated rate ratios (RRs) with 95% confidence intervals (CIs) adjusted for age, calendar period and education using Poisson regression in a cohort of 3.8 million men aged 45 years and older between 1961 and 2004. During the 45 years of follow-up, 8,244 and 187,675 cases of prostate cancer occurred among foreign-born and Swedish-born men, respectively. Overall, foreign-born men had a significant 40% decreased risk of prostate cancer compared to Swedish-born men (RR = 0.62, 95% CI = 0.61-0.63). Men born in Middle Africa and in the Caribbean had an increased risk (RR = 1.89, 95% CI = 0.95-3.78 and RR = 1.24, 95% CI = 0.71-2.19, respectively). The overall risk in both strata of duration of residence or age at immigration was lower among immigrants compared to Swedish-born men. After additional adjustment for birthplace and age at immigration, the risk remained lower among immigrants compared to Swedish-born men, but it was increased among immigrants who had stayed 35 years or longer compared to those with shorter stays (RR = 1.33, 95% CI = 1.21-1.46). Both environmental and genetic factors seem to be involved in the etiology of prostate cancer. Duration of residence was an important factor affecting the risk among immigrants.
Studies focusing on the etiology of prostate cancer specifically in African immigrants and their descendants and increasing preventive and diagnostic activities on old immigrants are recommended. abstract_id: PUBMED:15622688 Prognostic factors for survival among Caucasian, African-American and Hispanic men with androgen-independent prostate cancer. Background: African-American men suffer disproportionately with respect to the incidence and mortality from prostate cancer. The objective of the current study was to define if race was an independent prognostic factor among other variables assessed for survival among men treated for androgen-independent prostate cancer. Methods: Between 1988 and 1995, 379 patients with AIPC and clinical progression were referred for novel protocol therapies. Measured variables included: 1) patient age, 2) race or ethnicity, 3) hemoglobin, 4) alkaline phosphatase, 5) serum prostate-specific antigen (PSA) level, 6) time from hormonal ablation to AIPC, 7) number of metastases on bone scan, 8) osseous stage, 9) number of organ systems with metastases and 10) type of treatment for AIPC. Results: Median survival for the cohort was not significantly affected by race, on uni- or multivariate analysis. Multivariate analysis demonstrated that increasing hemoglobin (HR = 0.87 per g, 95% CI [0.81-0.94]) and time to AIPC (HR = 0.994, 95% CI [0.990-0.998]) were associated with increased survival while higher osseous stage (HR = 1.49, stage I versus II, 95% CI [1.11-1.99]), treatment group (HR = 1.68, treatment group I versus II, 95% CI [1.33-2.12]), metastases to three or more organ systems (HR = 1.31 versus less than three organs, 95% CI [1.15-1.49]), and advanced age (HR = 1.51 for age > 70 versus ≤ 70, 95% CI [1.18-1.94]) were associated with a decrease in survival among patients with AIPC. Conclusion: Independent prognostic variables for survival among patients with AIPC included patient age, serum hemoglobin level, time to androgen-independent disease, treatment group and the extent of metastatic disease. Ethnicity did not adversely affect outcome. Answer: The study "Immigration factors and prostate cancer survival among Hispanic men in California: does neighborhood matter?" (PUBMED:24477988) investigated the impact of nativity and neighborhood-level Hispanic ethnic enclave on prostate cancer survival among Hispanic men in California. The study found that foreign-born Hispanics had a significantly lower risk of prostate cancer mortality compared to US-born Hispanics, with a hazards ratio (HR) of 0.81.
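The survival comparisons above (nativity, ethnicity, insurance status) rest on Cox proportional hazards regression, and the reported hazard ratios are the exponentiated model coefficients. Below is a minimal sketch on synthetic data with a built-in protective effect close to the HR of 0.81 quoted for foreign-born men; the column names, effect size, and data are assumptions for illustration, and the lifelines package is simply one convenient open-source implementation rather than the tool used in these studies.

```python
# Minimal sketch, assuming synthetic data: fit a Cox proportional hazards model
# and read the hazard ratio off the exponentiated coefficient.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
foreign_born = rng.binomial(1, 0.5, n)
# True log-hazard effect of -0.2, i.e. a hazard ratio of exp(-0.2) ~ 0.82.
event_times = rng.exponential(1.0 / (0.05 * np.exp(-0.2 * foreign_born)))
censor_times = rng.exponential(30.0, n)
df = pd.DataFrame({
    "duration": np.minimum(event_times, censor_times),
    "event": (event_times <= censor_times).astype(int),
    "foreign_born": foreign_born,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
print(np.exp(cph.params_))   # estimated hazard ratio for foreign_born, ~0.8
```

A hazard ratio below 1 for the indicator corresponds to the survival advantage described above; stage stratification or additional covariates would enter the same model as extra columns.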
Instruction: Are all aberrations equal? Abstracts: abstract_id: PUBMED:15808438 Epilepsy in chromosome aberrations Epilepsy is among the most frequent findings in many chromosome aberrations. While most chromosome aberrations can be associated with different seizure types, there are few aberrations which feature specific seizures and EEG patterns. Among the 400 different chromosomal imbalances described with seizures and EEG abnormalities, eight have a high association with epilepsy. These comprise: monosomy 1p36, Wolf-Hirschhorn syndrome (4p-), Angelman syndrome, Miller-Dieker del 17p13.3, the inversion duplication 15 syndrome, ring 20 and ring 14 syndromes, and Down's syndrome. These chromosomal regions where aberrations have an evident association with epilepsy may be useful targets for gene hunters. On the other hand, a better characterisation of the epileptic syndromes in these disorders may lead to better and more specific treatment. abstract_id: PUBMED:17474225 Aberrations in the structure of the X chromosome in women The authors review structural aberrations of the X chromosome in women. The correlation of their occurrence with clinical experience has led to a better understanding of the syndromes they provoke as well as of the gene content and mode of action of the X chromosome. abstract_id: PUBMED:315203 Therapeutic possibilities in ocular complications of chromosomal aberrations (author's transl) The principal ocular complications of chromosomal aberrations are: strabismus, cataract, ptosis, nystagmus. Each of these can benefit from surgical treatment but one has to take into account the unfavorable prognosis due to mental deficiency. abstract_id: PUBMED:10640813 Equal induction and persistence of chromosome aberrations involving chromosomes with heterogeneous lengths and gene densities. Little is known about the factors modulating the initial induction and persistence of chromosome aberrations. Chromosome length and gene density have been proposed to play a significant role. We have therefore analyzed the induction and persistence of gamma-ray-induced aberrations involving four human chromosomes (1, 4, 18, and 19) with highly heterogeneous lengths and gene densities. Multicolor FISH was performed on a wild-type lymphoblastoid cell line at 1, 3, 7, 14, 28, 42, and 56 days after gamma-irradiation. The frequency of induced chromosomal aberrations was proportional to the length of the chromosomes. Complex aberrations, dicentrics, and fragments were highly unstable and disappeared during the first week after treatment and with similar kinetics for all four chromosomes. The frequency of translocations decreased with time and followed an exponential decline. Thirty percent of the gamma-ray-induced translocations were stable over the entire study period, irrespective of the length and the gene density of the chromosome involved. Accordingly, we concluded that the induction of chromosome aberrations is proportional to the length of the chromosome, that gene density makes no measurable contribution to induction, and that neither length nor gene density influences the persistence of chromosome aberrations. abstract_id: PUBMED:36066705 Metaphase Chromosome Preparation and Classification of Chromosomal Aberrations. Chromosomal aberrations are changes in the structure and number of chromosomes. Metaphase chromosomes can be analyzed with a standard light microscope to detect chromosomal aberrations.
Recently, detailed or rapid analysis has become possible using fluorescence probes and a fluorescence microscope. The origins of chromosomal aberrations can be errors of DNA repair, cell division, and DNA synthesis. Analysis of chromosome aberrations can be applied to a wide range of questions: it serves as a basic-science tool connecting DNA damage to cell death and mutagenesis, and as a diagnostic tool for hereditary diseases and for biodosimetry following radiation exposure. Specific types of DNA damage produce unique types of chromosomal aberrations, so analysis of chromosomal aberrations enables us to investigate the mechanisms of genotoxic stress. However, a single type of DNA damage can give rise to a variety of changes in chromosome structure, which is often confusing. This chapter introduces the standard technique of metaphase chromosome spread preparation and a typical classification of chromosomal aberrations. abstract_id: PUBMED:3160442 Chromosome aberrations produced in human lymphocytes by in vivo and in vitro irradiation The production of chromosome aberrations in vivo has been studied in lymphocytes from a patient undergoing whole-body treatment with gamma-radiation up to a cumulative dose of 1.4 Gy. These results were compared with the observations performed on whole blood samples irradiated in vitro with doses from 0.05 up to 2 Gy of gamma-rays. The frequency of chromosome aberrations, particularly the dicentrics, was found to be similar in vivo and in vitro. The yield of dicentrics could be best related to the dose by using a linear-quadratic model in both cases, the ratio of the coefficients a/b being 0.56 and 0.69 Gy in vivo and in vitro, respectively. These observations confirm that in vitro dose-response curves may be used to evaluate accurately an in vivo absorbed dose. abstract_id: PUBMED:9648007 Ultrasonographic signs of chromosome aberrations We reviewed the literature on ultrasonographic criteria allowing prenatal diagnosis of chromosome aberrations, especially the most frequent: trisomy. Signs vary depending on the term of the ultrasound examination (first trimester ultrasound is often performed too early, and several signs are observed in the second trimester). During the first trimester, the main criterion is the finding of a nuchal translucency of 3 mm. The distance can only be measured with an appropriate sagittal CRL section by an experienced operator. The ideal term of this morphology ultrasound is 10 weeks gestation. During the second trimester, there are many suggestive criteria including non-specific signs: anomalous quantity of amniotic fluid, short femur, nuchal thickness of 6 mm, isolated anomaly of the umbilical velocimetry, pyelectasis and fetal malformations (mainly cerebral or abdominal anomalies, including omphalocele and diaphragmatic hernia, abnormal heart anatomy, cystic hygroma, facial anomalies and malformations of the limbs, often abnormal flexion of the hands). abstract_id: PUBMED:17474226 Aberrations of the sex chromosomes in women: an anatomo-clinical classification The authors review aberrations of the sex chromosomes in women and their relation to gonadal dysgenesis, male pseudohermaphroditism and true hermaphroditism. Certain chromosome disorders are included in this classification (Noonan's syndrome, familial forms of true gonadal dysgenesis, testicular feminization syndrome) although they are not accompanied by karyotypic anomalies. abstract_id: PUBMED:34573313 Chromosomal Aberrations in Cattle.
Chromosomal aberrations and their mechanisms have been studied for many years in livestock. In cattle, chromosomal abnormalities are often associated with serious reproduction-related problems, such as infertility of carriers and early mortality of embryos. In the present work, we review the mechanisms and consequences of the most important bovine chromosomal aberrations: Robertsonian translocations and reciprocal translocations. We also discuss the application of bovine cell cultures in genotoxicity studies. abstract_id: PUBMED:1699877 Chromosomal aberrations in patients with primary biliary cirrhosis. Chromosomal aberrations in untreated lymphocyte cultures, bleomycin (BLM)-induced aberrations and sister chromatid exchanges (SCE) in the peripheral blood lymphocytes of 11 patients suffering from primary biliary cirrhosis (PBC) and 14 matched control individuals were analysed. The lymphocytes of the PBC patients had on average a lower mitotic index (2.3) compared with controls (3.5) in the untreated cultures. The mean baseline rate of aberrations of the cultured lymphocytes of the patients was 5.3 aberrations per 100 metaphases (%); this was significantly different (P = 0.0291) from that of the controls with a mean of 2.3%. In lymphocytes of the patients and controls, most of the aberrations observed took the form of gaps; there was an almost equal breakage rate in both groups (0.5% and 0.4%, respectively). The average number of mitoses with aberrations in the PBC patients studied was double that of the controls (4.9% and 2.3% respectively, P = 0.0323). The mean number of the BLM-induced aberrations was 54.0% and 27.7% for the lymphocytes of the patients and controls, respectively. The mean number of the aberrant mitoses in the BLM cultures was 6 times higher than that of the untreated cultures for both groups, 25.7% and 14.6% respectively (P = 0.018). The chromosomal distribution of baseline and induced aberrations was not random. The PBC patients had a mean number of 8.7 SCE per mitosis, which was significantly higher than the SCEs in the controls (6.3 SCE per mitosis; P = 0.0156). The evidence suggests that the chromosomes of the lymphocytes of PBC patients may be less stable than those of the control individuals in this study. Answer: No, all aberrations are not equal. The induction and persistence of chromosome aberrations can be influenced by various factors, including chromosome length and gene density. However, a study analyzing gamma-ray-induced aberrations involving human chromosomes with highly heterogeneous lengths and gene densities found that the frequency of induced chromosomal aberrations was proportional to the length of the chromosomes, but gene density did not make a measurable contribution to induction. Furthermore, neither length nor gene density influenced the persistence of chromosome aberrations, as both complex aberrations and translocations showed similar kinetics for disappearance or decline over time, regardless of the chromosome's characteristics (PUBMED:10640813). Additionally, specific DNA damages can produce unique types of chromosomal aberrations, and the analysis of these aberrations can help investigate the mechanisms of genotoxic stress. However, one type of DNA damage can lead to a variety of changes in chromosome structures, which can be confusing (PUBMED:36066705). Moreover, the origins of chromosomal aberrations can include errors in DNA repair, cell divisions, and DNA synthesis (PUBMED:36066705). 
In the context of clinical implications, different chromosome aberrations can be associated with various syndromes and disorders. For instance, certain chromosomal regions where aberrations have a strong association with epilepsy may be useful targets for gene hunters, and better characterization of epileptic syndromes in these disorders may lead to more specific treatments (PUBMED:15808438). Chromosomal aberrations can also be associated with ocular complications, such as strabismus, cataract, ptosis, and nystagmus, each of which can benefit from surgical treatment (PUBMED:315203). In summary, chromosomal aberrations are not all equal in terms of their induction, persistence, and clinical consequences. The specific type and location of the aberration, as well as the underlying cause, can significantly influence the outcome and implications of the aberration.
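The "induction proportional to chromosome length" conclusion above has a simple quantitative reading: the expected share of induced aberrations involving a given chromosome is roughly its length divided by the total length of the chromosomes scored. The sketch below works this out for the four chromosomes analysed; the megabase lengths are approximate reference-genome values supplied here as assumptions, not figures from the cited study.

```python
# Minimal sketch: expected share of induced aberrations per chromosome under a
# "proportional to length" model. Lengths in megabases are approximate
# reference-genome values (assumption).
lengths_mb = {"chr1": 249, "chr4": 190, "chr18": 80, "chr19": 59}

total_mb = sum(lengths_mb.values())
expected_share = {chrom: round(mb / total_mb, 3) for chrom, mb in lengths_mb.items()}
print(expected_share)
# chr1 ~0.43 versus chr19 ~0.10: the short but gene-dense chromosome 19 is
# expected to carry far fewer induced aberrations than the long chromosome 1,
# which is what makes the "no measurable gene-density effect" finding testable.
```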
Instruction: Is the screening method of sacral neuromodulation a prognostic factor for long-term success? Abstracts: abstract_id: PUBMED:21168865 Is the screening method of sacral neuromodulation a prognostic factor for long-term success? Purpose: We evaluated whether there is a difference in long-term outcomes between patients screened with percutaneous nerve evaluation and a first stage tined lead procedure. We also evaluated the outcome in patients who only responded to screening with the tined lead procedure after failed initial percutaneous nerve evaluation. Materials And Methods: We evaluated all patients screened for eligibility to receive sacral neuromodulation treatment since the introduction of the tined lead technique in our center in 2002. In May 2009 all implanted patients were asked to maintain a voiding diary to record the effect of sacral neuromodulation on urinary symptoms. Chi-square analysis was used to evaluate differences in the long-term outcomes of the separate screening methods. Results: A total of 92 patients were screened for sacral neuromodulation. Of the 76 patients screened with percutaneous nerve evaluation, 35 (46%) met the criteria for permanent implantation. Permanent stimulators were placed in 11 of the 16 patients (69%) who underwent direct screening with the tined lead procedure. Of the 41 patients in whom percutaneous nerve evaluation failed and who subsequently underwent screening with the tined lead procedure, 18 (44%) were implanted with a neurostimulator after a successful response. Statistical analysis showed no difference between screening type and long-term success (p = 0.94). Conclusions: The first stage tined lead procedure is a more sensitive screening tool than percutaneous nerve evaluation but long-term success seems to be independent of the screening method. Patients in whom percutaneous nerve evaluation initially failed but who responded to prolonged screening with the tined lead procedure appeared to be as successful as those who directly responded to percutaneous nerve evaluation or the tined lead procedure. abstract_id: PUBMED:37547929 Long-term cost-effectiveness analysis of sacral neuromodulation in the treatment of severe faecal incontinence. Aim: The aim of this study was to evaluate the long-term cost-effectiveness of sacral neuromodulation in the treatment of severe faecal incontinence as compared with symptomatic management. Methods: From a public health standpoint, a micro-costing evaluation was conducted from the perspectives of the health system and of society. The incremental cost-effectiveness ratio was used as the decision index, and we considered various scenarios to evaluate the impact of the cost of symptomatic management and of the percutaneous nerve evaluation success rate on its calculation. Clinical data were retrieved from a consecutive cohort of 93 patients with severe faecal incontinence undergoing sacral neuromodulation after failure of conservative (pharmacological and biofeedback) and/or surgical (sphincteroplasty) first-line treatments. Results: The long-term incremental cost-effectiveness ratio comparing sacral neuromodulation versus symptomatic management was 14347€/QALY and 28523€/QALY from the societal and health service provider's perspectives, respectively. If the definitive pulse generator implant success rate was 100%, incremental cost-effectiveness would correspond to 6831€/QALY and 16761€/QALY, respectively.
Conclusions: Sacral neuromodulation may be considered a cost-effective technique in the long-term treatment of severe faecal incontinence from the societal and health care sector perspectives. Improving patient selection and determining the predictive outcome factors for successful sacral neuromodulation in the treatment of faecal incontinence would improve cost-effectiveness. abstract_id: PUBMED:35027284 Effectiveness of sacral neuromodulation in patients with Parkinson's disease Introduction: Urinary disorders in patients with Parkinson's disease are complex and have a negative impact on their quality of life. No therapy, whether pharmacological or surgical, is considered effective. Sacral neuromodulation, recommended in other neurological pathologies such as multiple sclerosis, has never been studied in patients with Parkinson's disease. The objective of our study was to assess the efficacy of sacral neuromodulation in patients with Parkinson's disease. Material And Method: Multicentre retrospective cohort study of 22 parkinsonian patients who underwent a sacral neuromodulation test. Epidemiological, clinical and urodynamic data were collected for each patient. A long-term effectiveness telephone survey was conducted. Results: Twenty-two patients with Parkinson's disease had a sacral neuromodulation test; 17/22 (77%) had idiopathic Parkinson's disease (IPD) and 5/22 (23%) had multiple system atrophy (MSA). Clinically, the indication for the sacral neuromodulation test was overactive bladder in 68% of the cases. Urodynamically, detrusor overactivity was found in 12 patients (8 IPD, 4 MSA). Sacral neuromodulation was effective in only 7 patients (6 IPD and 1 MSA). The typical profile of a patient in whom sacral neuromodulation was effective was a mature woman with IPD. The long-term effectiveness of sacral neuromodulation was disappointing: only 2 permanently implanted patients retained a urinary benefit. Conclusion: Sacral neuromodulation improves urinary symptoms in patients with Parkinson's disease in 32% of cases. The benefit fluctuates over time and is lost in the long term. Level Of Evidence: 3. abstract_id: PUBMED:23404204 Review of sacral neuromodulation for management of constipation. Background: Sacral neuromodulation (SN) is an emerging treatment for constipation. This review evaluates the mechanism of action, techniques, efficacy, and adverse effects of SN in the management of constipation. Methods: Electronic searches for studies describing the use of SN were performed in PubMed, MEDLINE and Embase. Abstracts were reviewed and full text copies of all relevant articles obtained. Results: Fifty-nine results were obtained on the initial searches. Ten studies discussed the results of SN in patients with constipation. A total of 225 temporary neuromodulations and 125 permanent implants were performed. Bowel diaries showed improvement in assessment criteria in more than 50% of patients on temporary neuromodulation and the results were maintained in approximately 90% of patients who underwent permanent implantation over medium to long-term follow-up. The rate of adverse effects was high, but the majority of them were related to electrode position. Improvements in transit studies and anorectal physiology after neuromodulation were noted in some studies. The recognized limitations included a lack of randomized studies and an inability to perform meta-analysis. Conclusion: Sacral neuromodulation may be an effective treatment in selected patients with constipation and should be a part of the management repertoire.
Improvement in defecatory frequency with temporary wire placement is a good predictor of subsequent response following permanent implant. Further research into predictive factors for success would improve patient selection. abstract_id: PUBMED:35061951 Sacral neuromodulation for the treatment of overactive bladder: systematic review and future prospects. Introduction: Sacral Neuromodulation (SNM) is a minimally invasive treatment for OAB patients following failure of conventional interventions. Patient selection, lead placement, and testing technique are important pillars in optimizing success rates. Areas Covered: A comprehensive literature search was conducted on 'sacral neuromodulation' and 'overactive bladder.' There was no date restriction, with the last search dated 31 May 2021. Patient selection, lead placement, test phases, safety, efficacy, and available devices are thoroughly discussed. Lastly, future perspectives will be presented with the anticipated trajectory of sacral neuromodulation over the next five years. Expert Opinion/commentary: SNM has proved to be a safe and effective therapy in the short, medium and long term without precluding any other treatment options. In all studies reviewed, no life-threatening or major irreversible complications were presented. However, surgical re-intervention rates were high with a median of 33.2% (range: 8-34%) in studies with at least 24 months of follow-up. No true consensus could be reached regarding prognostic factors. However, optimized lead placement, consequent ideal motor thresholds, and the use of a curved stylet theoretically facilitate reaching maximal success with SNM. Test phase success rates increased to such a level that, from a cost-effectiveness point of view, single-stage implants could be considered. Abbreviations: OAB: overactive bladder; SNM: sacral neuromodulation; BoNT-A: Botulinum toxin A; PFM EMG: pelvic floor muscle electromyography; IPG: implantable pulse generator; PNE: percutaneous nerve evaluation; FSTLP: first-stage tined lead procedure; NLUTD: neurogenic lower urinary tract dysfunction; ITT: intention to treat; PPMC: per protocol modified completers; PPC: per protocol completers; AE: adverse event; MRI: magnetic resonance imaging; RCT: randomized controlled trial. abstract_id: PUBMED:28429130 Long-term outcome of sacral neuromodulation for chronic refractory constipation. Purpose: Sacral neuromodulation has been reported as a treatment for severe idiopathic constipation. This study aimed to evaluate the long-term effects of sacral neuromodulation by following patients who participated in a prospective, open-label, multicentre study up to 5 years. Methods: Patients were followed up at 1, 3, 6, 12, 24, 36, 48 and 60 months. Symptoms and quality of life were assessed using a bowel diary, the Cleveland Clinic constipation score and the Short Form-36 quality-of-life scale. Results: Sixty-two patients (7 male, median age 40 years) underwent test stimulation, and 45 proceeded to permanent implantation. Twenty-seven patients exited the study (7 withdrew consent, 7 loss of efficacy, 6 site-specific reasons, 4 withdrew for other reasons, 2 lost to follow-up, 1 prior to follow-up). Eighteen patients (29%) attended 60-month follow-up. In 10 patients who submitted a bowel diary, their improvement of symptoms was sustained: the number of defecations per week (4.1 ± 3.7 vs 8.1 ± 3.4, mean ± standard deviation, p < 0.001, baseline vs 60 months) and sensation of incomplete emptying (0.8 ± 0.3 vs 0.2 ± 0.1, p = 0.002).
In 14 patients (23%) with a Cleveland Clinic constipation score, improvement was sustained at 60 months [17.9 ± 4.4 (baseline) to 10.4 ± 4.1, p < 0.001]. A total of 103 device-related adverse events were reported in 27 patients (61%). Conclusion: Long-term benefit from sacral neuromodulation was observed in a small minority of patients with intractable constipation. The results should be interpreted with caution given the high dropout and complication rate during the follow-up period. abstract_id: PUBMED:35021838 Sacral neuromodulation for faecal incontinence - 10 years' experience and long-term outcomes of a specialized centre. Introduction: Sacral neuromodulation/sacral nerve stimulation (SNM/SNS) has become the most successful method for treatment of faecal incontinence (FI) in the last 10 years. The high efficiency of SNM is based on the electrical stimulation of the external anal sphincter; moreover, the mechanism of action of SNS can be explained by the modulation of somatovisceral reflexes and the perception of afferent information. The mechanism of action is therefore more complex than that of other treatment methods. In the Czech Republic, SNM was first implemented in 2010 with the financial support of an IGA grant of the Ministry of Health of the Czech Republic. Since 2018, two specialized centres for the treatment of FI using the SNM method have been established in the Czech Republic. Methods: In the years 2010-2020, 35 patients were indicated for SNM. The ratio of women to men was 34:1. The mean age at implantation was 62 years (range 46-75). Most patients were in their 6th or 7th decade. Two diagnostic procedures were performed in all patients: percutaneous evaluation of the S2-S4 sacral nerves, and implantation of the Medtronic 3889 28 cm stimulation tined lead electrode with its connection to an external stimulator and subsequent subchronic stimulation for 2-4 weeks. The criteria for permanent neurostimulator implantation were a minimum 50% reduction in the number of FI episodes per week or a 50% reduction in incontinence score. Patients were then implanted with a Medtronic InterStim II 3058 permanent neurostimulator. Results: A permanent neurostimulator was implanted in 33 of 35 patients (94%). No patient died. The complication rate was 11.4%. In two patients this was an infectious complication. In one patient malposition of the stimulator occurred after a fall, and in one patient we observed lead breakage with subsequent malfunction of the stimulator after a fall. All complications were successfully resolved by reoperation. The long-term effect of SNM was evaluated in the group of the first 15 implanted patients from 2010-2011. Of these, 9 patients were available, in whom a new neurostimulator was reimplanted due to loss of battery power in 2018-2020. The mean length of follow-up was 112 months (99-124). The mean number of FI episodes per week was 1.9 (0-13) after neurostimulator implantation compared to 13.6 (3-25) before implantation. The Cleveland Clinic Incontinence Score (CCIS) was 8.3 (3-16) after neurostimulator implantation compared to CCIS 18.8 (15-20) before implantation. Both FI episode counts and CCIS scores were significantly lower (p. abstract_id: PUBMED:30729252 Variation in bony landmarks and predictors of success with sacral neuromodulation. Introduction And Hypothesis: We assessed variations in sacral anatomy and lead placement as predictors of sacral neuromodulation (SNM) success.
Based solely on bony landmarks, we also assessed the accuracy of the 9 and 2 protocol for locating S3. Methods: This is a retrospective cohort study performed from October 2008 to December 2016 at the University of North Carolina at Chapel Hill. Fluoroscopic images were used to assess sacral anatomy and lead location. Success was defined as >50% symptom improvement after stage I and clinical response at most recent follow-up. Results: Of 249 procedures, 209 were primary implants and 40 were revisions among 187 (89.5%) women and 22 (10.5%) men. Success rate was 83.3% for primary implants and 89.4% for revisions. Success was associated with shorter implant duration (21.3 ± 22.2 vs 33.6 ± 25.8 months), higher body mass index (30.3 ± 7.8 vs 27.6 ± 6.1 kg/m2), and straight vs curved lead (90.5% vs 80.5%) (all p = .05), but not with sacral anatomy or lead placement. In assessing the 9 and 2 protocol, mean distance from coccyx to S3 did not equal 9 cm: 7.4 ± 1.0 vs 7.2 ± 0.8 cm (p = .26), while mean distance from midline to S3 did equal 2 cm: 1.9 ± 0.4 vs 2.0 ± 0.7 cm (p = .37). Conclusions: Variations in sacral anatomy and lead placement did not predict SNM success. The 2-cm protocol was verified while the 9-cm protocol was not, although neither was predictive of success, which may obviate the need to mark bony landmarks prior to fluoroscopy. abstract_id: PUBMED:30310935 Sacral neuromodulation for refractory overactive bladder Sacral neuromodulation (SNM) is a minimally invasive, established treatment modality for overactive bladder (OAB) as well as for other pelvic functional disorders, such as non-obstructive urinary retention and fecal incontinence. A pacemaker implanted in the upper buttocks delivers mild electrical pulses to the sacral nerves, thereby modulating pathological afferent signals from pelvic end organs. The current article conveys the current knowledge on the indications, contraindications, mechanism of action, implantation technique, efficacy and complications of SNM for the treatment of refractory OAB. abstract_id: PUBMED:27822462 Sacral neuromodulation in overactive bladder: a review and current perspectives. Overactive bladder (OAB) symptoms of urgency, frequency, and urge incontinence currently affect a substantial portion of the population, especially as age increases. Sacral neuromodulation has become a popular option for refractory OAB symptoms over the past 2 decades. Studies have demonstrated that it is an effective treatment for OAB and urge incontinence as indicated by decreased number of voids, increased bladder capacity, and fewer leakage events. In addition, the effects have proved to be durable to multiple years following implantation. These benefits come at the expense of a high rate of adverse events, although with comparable long-term cost-effectiveness to botulinum toxin A. We aimed to review the literature that demonstrates that sacral neuromodulation continues to be an efficacious treatment for refractory OAB wet and dry patients, with continuously expanding indications. Answer: The screening method of sacral neuromodulation does not appear to be a prognostic factor for long-term success. A study evaluated whether there was a difference in long-term outcomes between patients screened with percutaneous nerve evaluation (PNE) and those screened with a first stage tined lead procedure.
The results showed that although the first stage tined lead procedure is a more sensitive screening tool than PNE, long-term success seems to be independent of the screening method used. Patients who initially failed PNE but responded to the tined lead procedure had similar long-term success rates as those who responded directly to either PNE or the tined lead procedure (PUBMED:21168865). This suggests that the type of screening method does not significantly impact the long-term efficacy of sacral neuromodulation.
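Across these SNM abstracts the quantitative gate between the test phase and permanent implantation is the same: roughly a 50% or greater reduction in incontinence episodes or in an incontinence score during temporary stimulation. A minimal sketch of that decision rule, with hypothetical field names and example numbers (the abstracts do not supply patient-level data):

```python
from typing import Optional

def test_phase_success(baseline_episodes: float,
                       test_episodes: float,
                       baseline_score: Optional[float] = None,
                       test_score: Optional[float] = None,
                       threshold: float = 0.5) -> bool:
    """True if the screening/test phase meets the usual SNM criterion:
    at least a `threshold` (default 50%) reduction in weekly incontinence
    episodes, or in the incontinence score when one is recorded."""
    episode_reduction = (baseline_episodes - test_episodes) / baseline_episodes
    if episode_reduction >= threshold:
        return True
    if baseline_score is not None and test_score is not None:
        return (baseline_score - test_score) / baseline_score >= threshold
    return False

# Hypothetical patient: 12 episodes/week at baseline, 5 during the test phase.
print(test_phase_success(12, 5))  # True: a 58% reduction clears the 50% cut-off
```

The same rule, applied to a bowel diary or a symptom score, is what separates temporary-wire responders from non-responders in the studies summarised above.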
Instruction: Is there value in routinely obtaining a report from the general practitioner as part of pre-entry health screening of students for nursing studies? Abstracts: abstract_id: PUBMED:15133141 Is there value in routinely obtaining a report from the general practitioner as part of pre-entry health screening of students for nursing studies? Background: Reports from general practitioners (GPs) are requested on applicants for nurse training, but there is no published evidence of the merit of this practice. Aims: To assess the benefit of GP report in health assessments of student nurse applicants. Methods: An audit was made of information obtained by health declaration form (HDF), nurse's assessment, GP report and, when performed, a physician's assessment for each applicant. Agreement between the health questionnaire and GP report was analysed by kappa statistics. Results: Of 254 applicants, 246 (97%) were declared 'fit to work', four (1.6%) were deemed 'fit with restrictions' and four (1.6%) were considered 'unfit to work'. The most common problems declared were psychiatric and skin problems. The agreement between health declaration and the information provided by GPs was classed as almost perfect for diabetes and only fair to moderate for all other measures. The reports provided additional information on problems not declared by applicants, but all of these were passive problems. The four unfit candidates all had psychiatric illness, but in all cases the occupational health assessment was sufficient to make this decision or to request further information. In the 'fit with restrictions' category, three of the four GP reports (75%) helped in correctly assigning the applicants to this category. In one of these eight cases a passive problem had not been declared. Conclusions: The additional information in GP reports does not affect the conclusion regarding fitness for training in most cases and does not provide sufficient information to merit it being sought routinely. abstract_id: PUBMED:25979152 Clinical placements in Australian general practice: (Part 1) the experiences of pre-registration nursing students. An international shift towards strengthening primary care services has stimulated the growth of nursing in general (family) practice. As learning in the clinical setting comprises a core component of pre-registration nursing education, it is logical that clinical placement opportunities would follow the workforce growth in this setting. Beyond simply offering placements in relevant clinical areas, it is vital to ensure high quality learning experiences that meet the educational needs of pre-registration nurses. Part 1 of a two part series reports on the qualitative study of a mixed methods project. Fifteen pre-registration nursing students participated in semi-structured interviews following a clinical placement in an Australian general practice. Interviews were transcribed verbatim and underwent a process of thematic analysis. Findings are presented in the following four themes; (1) Knowledge of the practice nurse role: I had very limited understanding, (2) Quality of the learning experience: It was a fantastic placement, (3) Support, belonging and mutual respect: I really felt part of the team, (4) Employment prospects: I would really, really love to go to a general practice but …… General practice placements exposed students to a diverse range of clinical skills which would equip them for future employment in primary care. 
Exposure to nursing in general practice also stimulated students to consider a future career in this clinical setting. abstract_id: PUBMED:38215688 Graduate entry nursing students' development of professional nursing self: A scoping review. Background: Accelerated graduate entry nursing programmes require students to rapidly socialise to the profession. Professional identity is an important element of becoming a nurse. Objective: This scoping review aimed to synthesise published literature reporting the development of professional identity, belongingness and self-concept as a nurse in students enrolled in a pre-registration graduate entry nursing programme. Design: Scoping review. Setting: Graduate entry nursing programmes. Participants: Graduate entry nursing students. Method: Following a pre-registered protocol, we searched electronic databases for publications investigating graduate entry nursing students' development of professional identity, belongingness and self-concept. Screening, data extraction and analysis were initially in duplicate and independent, and then by consensus. Results: Of the 871 records identified, twenty met the inclusion criteria. Publications were from the USA, Australia, New Zealand, Canada, and the UK. We identified one overarching theme of 'professional nursing self', with four sub-themes: 1) professional socialisation, 2) professional self-concept, 3) developing nursing agency, and 4) identity formation. Socialisation into nursing and belongingness to the profession occurred concurrently as students moved through their programme of learning. Due to the accelerated nature of the programmes, rapid professional socialisation was required, supported by positive relationships in the clinical setting. Strategies that enhanced belongingness and wellbeing enabled students to feel connected to the profession. Conclusions: The development of professional identity in graduate entry nursing students is impacted by their rapid professional transition through an accelerated programme. Students' growing sense of nursing agency is embodied in their experiences of thinking and acting as a nurse. Their previous professional identity is then reconstituted in their new graduate selves; educational programmes support this transition. Tweetable Abstract: Scoping review finds professional identity development in graduate entry nursing students is rapid in accelerated preregistration degrees #belonging #connection. abstract_id: PUBMED:25892350 A case study exploring the experience of graduate entry nursing students when learning in practice. Aim: To explore how Graduate Entry Nursing students present and position themselves in practice in response to anti-intellectualist stereotypes and assessment structures. Background: A complex background turbulence exists in nurse education which incorporates both pro- and anti-intellectualist positions. This represents a potentially challenging learning environment for students who are recruited onto pre-registration programmes designed to attract graduates into the nursing profession on the basis of the specific attributes they bring known as 'graduateness'. Design: A longitudinal qualitative case study conducted over 2 years. Methods: Data were collected from eight Graduate Entry Nursing students at 6 monthly points between 2009-2011 via diaries, clinical assessment documentation and interviews. Forty interviews took place over 2 years. 
Additionally, three focus groups involving 12 practice assessors were conducted at the end of the study period. Data were analysed through a social constructivist lens and compared with a set of suppositions informed by existing empirical and theoretical debates. Findings: Demonstrated the interplay of performance strategies adopted by Graduate Entry Nursing students to challenge or pre-empt actual or perceived negative stereotypes held by established practitioners to gain acceptance, reduce threat and be judged as appropriately competent. Conclusion: Students interpreted and responded to, perceived stereotypes of nursing practice they encountered in ways which facilitated the most advantageous outcome for themselves as individuals. The data present the creative and self-affirming strategies which students adopted in response to the expectations generated by these stereotypes. They also depict how such strategies commonly involved suppression of the attributes associated with 'graduateness'. abstract_id: PUBMED:36108350 Nursing students' experiences of health screening in rural areas of southern Turkey: a qualitative study. Introduction: Individuals in rural areas live with healthcare disadvantages relating to, for example, access to health institutions, necessary treatments, and healthcare professionals during medical emergencies. The aim of this study was to explore the experiences, beliefs and attitudes of nursing students to identify advantages and disadvantages of health screening in several rural areas in rural Turkey. Methods: Health screening practices with senior nursing students were conducted in six rural areas. A qualitative descriptive study was performed using thematic analysis of open-ended responses to a web-based survey of 34 students aged 18 years and over. This study was conducted in March and April 2020. Results: The practices of nursing students in rural areas included measuring vital signs, body mass index calculation, blood glucose and cholesterol measurement, depression screening, cancer screening and health education. Students undertook various health screening practices in rural health care including colorectal cancer screening, evaluation of scales used in diabetes and depression risk. Characteristics referred to by student nurses as part of public health nursing roles were protector, advocate, supporter, caregiver, coordinator, collaborator, educator, counsellor, researcher, therapist, case manager, leader and care provider. The main themes generated related to student emotions, feedback of screening participants to nursing students, positive nursing characteristics, advantages and disadvantages of doing health screening in rural areas, benefits of working with health professionals to nursing student education, and feedback for nursing educators and researchers. Conclusion: Participants recognised their emotions, and the benefits and advantages of health screening practices, and disadvantages were determined across the themes. Health services should be planned by taking these experiences into account in health screenings to be carried out in rural areas. abstract_id: PUBMED:34482206 Exploring the experiences and perceptions of students in a graduate entry nursing programme: A qualitative meta-synthesis. Background: Students commencing graduate entry fast-tracked nursing programmes leading to registration are highly motivated and characterised by rich life experiences. 
Given their unique motivations and characteristics, gaining insight into their experiences of graduate entry programmes will inform strategic directions in education. Objective: To synthesise graduate entry nursing students' self-reported experiences and perceptions of their accelerated programme. Design: Qualitative meta-synthesis. Data Sources: Databases included Cumulative Index of Nursing and Allied Health Literature, Emcare, Education Resources Information Centre, Medical Literature Analysis and Retrieval System Online, Psychological Information and Scopus. Qualitative studies published in English and reporting primary data analysis including experiences and perceptions of graduate entry nursing students were considered. Review Methods: Qualitative studies were systematically identified and critically appraised. The meta-synthesis used an open card sort technique to organise data into a matrix of graduate entry nursing students' experiences and perceptions. Results: Fourteen studies were included. The analysis revealed three primary themes: what I bring and what I come with, developing a sense of self and nursing self, and what I need. Within these themes we found potential enablers of student success in learning; space, working together, and balancing work and life and learning to bridge two worlds. Students reflected on the benefits of academic support and shared their experiences of learning in clinical placement. In addition, students acknowledged the importance of clinical educators and preceptors who provided bridging that was further scaffolded by simulated learning experiences. Conclusions: Findings indicate graduate entry nursing students have important needs and expectations of support in transition. The experiences and perceptions of graduate entry nursing students differentiated into what students arrived with, what support they need in their journey to become a nurse, alongside their experience of building a sense of self and their nursing self. Systematic Review Registration Number: CRD42020220201. abstract_id: PUBMED:8885482 Cancer detection activities coordinated by nursing students in community health. This article describes ongoing cancer screening in a low-income and ethnically diverse community (primarily Southeast Asian and Hispanic). These services are part of the comprehensive care provided in a district nursing community health clinical. Screening services occur within the refugee community and include mammograms, individualized breast self-exam (BSE) teaching, home follow-up on the BSE teaching, and assistance obtaining any additional screening or treatment, if necessary. Except for technician activities, students plan, implement, and evaluate all services. The first event was in spring 1995, the second in summer 1995, and the third in fall 1995. Thus far, 85 women have received services. Cambodian and Laotian women show the lowest level of knowledge and experience related to breast cancer detection. This article provides some of the first data on cancer screening for low-income Cambodian and Laotian women in the United States. The article also shows how ongoing cancer screening and prevention services can be provided to populations that have not been successfully reached through usual means, e.g., referral by nurse practitioner, physician, and electronic or print media. Specific means of overcoming barriers to screening, prevention, and learning are described in detail. 
abstract_id: PUBMED:25397970 Nursing students' attitudes about psychiatric mental health nursing. The purpose of this study was to describe Masters entry nursing students' attitudes about psychiatric mental health clinical experiences; preparedness to care for persons with mental illness; students' perceived stigmas and stereotypes; and plans to choose mental health nursing as a career. A 31-item survey was administered to pre-licensure graduate nursing students who were recruited from a Masters entry nursing program from a university in a large city in the Midwestern US. Results indicated that clinical experiences provide valuable experiences for nursing practice, however, fewer students think that these experiences prepare them to work as a psychiatric mental health nurse and none plan to pursue careers as psychiatric mental health nurses. The findings support conclusions from other studies that increasing the amount of time in the clinical setting and adding specific content to the curriculum, particularly content related to the importance of psychiatric mental health nursing and the effects of stigma, may assist the profession's efforts to recruit and retain psychiatric mental health nurses. Further research is needed to determine the effectiveness of these strategies and to identify the best ways to implement them. abstract_id: PUBMED:16342636 Empowering nursing faculty and students for community service. As the health care delivery system moves from acute care settings to community-based services, nursing education must examine the experiences that increase students' abilities to function within a changing system. Currently, students receive community-based experiences that involve teaching health promotion concepts. However, students are not routinely prepared with specialized screening skills to use in the community. Using a train-the-trainer model, faculty and students were taught four specialized screening skills: Early and Periodic Screening, Diagnosis, and Treatment, Denver II Developmental Screening, vision screening, and hearing screening. Initially, faculty obtained certification as trainers for these specialized skills. Then, faculty educated students and provided enhanced, guided community experiences. Undergraduate, RN-to-BSN, and graduate family nurse practitioner students participated in this community-based experience. Project outcomes were evaluated based on the number of students and faculty who were certified and participated in these community activities. abstract_id: PUBMED:33145951 'Anyone can have a mental illness': A qualitative inquiry of pre-registration nursing students' experiences of traditional mental health clinical placements. Nurses play a crucial role in mental healthcare provision. Like many countries, Australian nursing students are educated in comprehensive pre-registration programmes which include mental health clinical placements. Placements play a vital role in students' education, providing the opportunity to engage with consumers and develop mental health nursing knowledge and skills. There is limited knowledge of student perspectives on traditional placements in contemporary recovery-oriented mental health services. This interpretive qualitative inquiry aimed to explore nursing students' experience of traditional mental health clinical placement and how it influenced their practice and their understandings of recovery from mental illness. 
Data were collected from focus groups with n = 31 nursing students in a large metropolitan public mental health service. Thematic analysis resulted in three themes of experience: humanizing people with mental illness; learning about recovery; and shifting perspectives on mental health nursing. Through a positive placement experience where they felt supported and included by staff, students came to see consumers as people rather than diagnoses, developed greater understanding of mental health nursing work and were more likely to consider mental health nursing as a career choice. Peer-support workers were an important influence on students' understandings of recovery and have a key role to play in educating students on placement. Students need to be prepared and supported by university and clinical staff to deal with vicarious trauma that may occur on placement. Mental health placements play a crucial role in attracting students into the field, and it is imperative they remain part of comprehensive pre-registration education. Answer: Based on the information provided in the abstracts, there is an indication that routinely obtaining a report from the general practitioner as part of pre-entry health screening of students for nursing studies may not be of significant value. Specifically, an audit assessing the benefit of GP reports in health assessments of student nurse applicants found that the additional information in GP reports did not affect the conclusion regarding fitness for training in most cases and did not provide sufficient information to merit it being sought routinely (PUBMED:15133141). The reports provided some additional information on problems not declared by applicants, but these were passive problems, and the occupational health assessment was sufficient to make decisions regarding fitness or to request further information. In the 'fit with restrictions' category, GP reports helped in correctly assigning applicants in some cases, but overall, the study concluded that the routine collection of GP reports is not justified based on the findings. The other abstracts provided do not directly address the question of the value of GP reports in pre-entry health screening but rather focus on various aspects of nursing education and student experiences, such as clinical placements (PUBMED:25979152), professional identity development (PUBMED:38215688), experiences of graduate entry nursing students (PUBMED:25892350, PUBMED:34482206), health screening practices (PUBMED:36108350), cancer detection activities (PUBMED:8885482), attitudes about psychiatric mental health nursing (PUBMED:25397970), empowering nursing faculty and students for community service (PUBMED:16342636), and experiences of traditional mental health clinical placements (PUBMED:33145951). These abstracts provide context on the broader educational and clinical experiences of nursing students but do not offer evidence related to the specific question of GP report value in pre-entry health screening.
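The audit cited above (PUBMED:15133141) quantified agreement between the health declaration form and the GP report with kappa statistics, reporting it verbally ("almost perfect" for diabetes, "fair to moderate" elsewhere). A minimal sketch of how such an agreement statistic is computed from a 2x2 table and mapped to the commonly used Landis-Koch labels; the cell counts below are invented for illustration, since the abstract does not report the raw data:

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table (rows: declaration, cols: GP report)."""
    total = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / total
    expected = sum(
        (sum(table[i]) / total) * (sum(row[i] for row in table) / total)
        for i in range(len(table))
    )
    return (observed - expected) / (1 - expected)

def landis_koch_label(kappa):
    """Verbal bands often used to describe (non-negative) kappa values."""
    bands = [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
             (0.80, "substantial"), (1.00, "almost perfect")]
    return next(label for cutoff, label in bands if kappa <= cutoff)

# Hypothetical condition: declared yes/no on the form vs reported yes/no by the GP.
table = [[12, 3],
         [5, 234]]
k = cohens_kappa(table)
print(round(k, 2), landis_koch_label(k))  # e.g. 0.73 -> "substantial"
```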
Instruction: Does co-morbid depressive illness magnify the impact of chronic physical illness? Abstracts: abstract_id: PUBMED:16608557 Does co-morbid depressive illness magnify the impact of chronic physical illness? A population-based perspective. Objective: To examine the relative and combined impact of depressive and chronic physical conditions on functional status and health-care use in the general population. Method: Canadian, representative, population-based cross-sectional survey (n=130,880). Major depressive disorder (MDD) in the past 12 months was assessed by structured interview, and physical disorders, activity reduction, role impairment and work absence by self-report. The relative impact of MDD and six common chronic physical illnesses (asthma, arthritis, back problems, chronic obstructive pulmonary disease, heart disease and diabetes) was estimated using multivariate regression, adjusting for sociodemographic characteristics and overall chronic physical illness burden. Results: After adjusting for sociodemographic characteristics, alcohol dependence and chronic physical illness burden, the presence of co-morbid MDD was associated with significantly greater (approximately double the) likelihood of health-care utilization and increased functional disability and work absence compared to the presence of a chronic physical illness without co-morbid MDD. This impact of MDD was seen across each of the six chronic physical illnesses examined in this study, with the strongest associations seen for work absence. Conclusions: These observations confirm prior findings of a strong association at the population level between major depression and health-care use and role impairment among persons with chronic physical disorders. They also point to the significant impact of co-morbid major depression on health-care seeking, disability and work absence in persons with chronic physical illness, underscoring the need for greater efforts to design and test the impact of detection and treatment programs for such individuals. abstract_id: PUBMED:21714361 Treating co-morbid chronic medical conditions and anxiety/depression. Aims: This systematic review examines interventions for care of people with co-morbid chronic medical illness and anxiety/depression disorders--a group with high risks for morbidity and mortality. Methods: Systematic search of Medline 1995 to January 2011 for randomized controlled trials of treatment interventions designed for adult outpatients with diagnosed chronic medical illness (diabetes mellitus, cardiovascular disorders, and chronic respiratory disorders) and anxiety/depression disorders. Results: Six trials studied complex interventions based on the chronic care model, and eight trials studied psychosocial interventions. Most interventions addressed the mental health aspect of the co-morbidity and showed improvements in anxiety/depression but not in the co-morbid medical disorder. Conclusions: Further research might focus on interventions integrating mental health treatment with enhanced medical care components, incorporating shared-decision making and information technology advances. abstract_id: PUBMED:23472087 Double trouble: does co-morbid chronic somatic illness increase risk for recurrence in depression? A systematic review. Objective: To perform a systematic review, and if possible a meta-analysis, to establish whether depressed patients with co-morbid chronic somatic illnesses are a high risk "double trouble" group for depressive recurrence. 
Method: The databases PubMed, Embase and PsycINFO were systematically searched until the 4th of December 2012 using MeSH and free-text terms. Additionally, reference lists of retrieved publications and treatment guidelines were reviewed, and experts were consulted. Inclusion criteria were: depression had to be measured at least twice during the study with qualified instruments, and the chronic somatic illness had to be assessed by self-report or by a medical professional. Information on depressive recurrence was extracted, and risk ratios of recurrence were additionally calculated. Results: The search generated four articles that fulfilled our inclusion criteria. These studies showed no differences in recurrence over one-, two-, three- and 6.5-year follow-up for a total of 2010 depressed patients, of whom 694 had a co-morbid chronic somatic illness and 1316 did not (Study 1: RR = 0.49, 95% CI, 0.17-1.41 at one-year follow-up and RR = 1.37, 95% CI, 0.78-2.41 at two-year follow-up; Study 2: RR = 0.94, 95% CI, 0.65-1.36 at two-year follow-up; Study 3: RR = 1.15, 95% CI, 0.40-3.27 at one-year follow-up, RR = 1.07, 95% CI, 0.48-2.42 at two-year follow-up and RR = 0.99, 95% CI, 0.55-1.77 at 6.5-year follow-up; Study 4: RR = 1.16, 95% CI, 0.86-1.57 at three-year follow-up). Conclusion: We found no association between a heightened risk for depressive recurrence and co-morbid chronic somatic illnesses. There is a need for more longitudinal studies to justify the current specific treatment advice, such as long-term pharmacological maintenance treatment, for this presumed "double trouble" group. abstract_id: PUBMED:26919799 Depression predicts future emergency hospital admissions in primary care patients with chronic physical illness. Objective: More than 15 million people currently suffer from a chronic physical illness in England. The objective of this study was to determine whether depression is independently associated with prospective emergency hospital admission in patients with chronic physical illness. Method: A total of 1860 primary care patients in socially deprived areas of Manchester with at least one of four exemplar chronic physical conditions completed a questionnaire about physical and mental health, including a measure of depression. Emergency hospital admissions were recorded using GP records for the year before and the year following completion of the questionnaire. Results: The numbers of patients who had at least one emergency admission in the year before and the year after completion of the questionnaire were 221/1411 (15.7%) and 234/1398 (16.7%) respectively. The following factors were independently associated with an increased risk of prospective emergency admission to hospital: having no partner (OR 1.49, 95% CI 1.04 to 2.15); having ischaemic heart disease (OR 1.60, 95% CI 1.04 to 2.46); having a threatening experience (OR 1.16, 95% CI 1.04 to 1.29); depression (OR 1.58, 95% CI 1.04 to 2.40); and emergency hospital admission in the year prior to questionnaire completion (OR 3.41, 95% CI 1.98 to 5.86). Conclusion: To prevent potentially avoidable emergency hospital admissions, greater efforts should be made to detect and treat co-morbid depression in people with chronic physical illness in primary care, with a particular focus on patients who have no partner, have experienced threatening life events, and have had a recent emergency hospital admission.
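The recurrence review above reports its findings as risk ratios with 95% confidence intervals (for example RR = 0.49, 95% CI 0.17-1.41). A minimal sketch of how such a ratio and its log-scale confidence interval are obtained from 2x2 follow-up counts; the counts below are hypothetical and are not taken from the included studies:

```python
from math import exp, log, sqrt

def risk_ratio(recur_exposed, n_exposed, recur_unexposed, n_unexposed, z=1.96):
    """Risk ratio of recurrence (exposed = co-morbid somatic illness) with an
    approximate 95% confidence interval computed on the log scale."""
    rr = (recur_exposed / n_exposed) / (recur_unexposed / n_unexposed)
    se_log_rr = sqrt(1 / recur_exposed - 1 / n_exposed
                     + 1 / recur_unexposed - 1 / n_unexposed)
    return rr, (exp(log(rr) - z * se_log_rr), exp(log(rr) + z * se_log_rr))

# Hypothetical one-year follow-up: 10/80 depressed patients with a chronic
# somatic illness relapse, versus 30/160 without one.
rr, (lo, hi) = risk_ratio(10, 80, 30, 160)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # RR = 0.67, 95% CI 0.34-1.29
```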
abstract_id: PUBMED:11434403 Risk profile of SSRIs in elderly depressive patients with co-morbid physical illness. Background: So far, most studies on treatment strategies in elderly depressive patients have included only patients in good physical health, thereby excluding and neglecting somatic co-morbidity, which is very prevalent and relevant in geriatric psychiatry. Method: Forty elderly depressive inpatients at the Department of Internal Medicine in Hochzirl who had started on SSRI monotherapy were allocated to this prospective post-marketing surveillance study. A stable medication for their physical illness for at least six months was a prerequisite. A Mini Mental State Exam (MMSE) score of >24 was required for study entry. The four-week study consisted of one baseline and four follow-up examinations, including psychiatric and medical history, as well as ratings for psychopathology and treatment-related adverse events. The antidepressants administered were paroxetine (20 mg/d), citalopram (20 mg/d), fluoxetine (20 mg/d) and sertraline (50 mg/d). Depression was rated using the 21-item Hamilton Depression Scale (HAMD); side effects were evaluated by the UKU Side Effect Rating Scale, and we used the Hillside Akathisia Scale (HAS) to record the incidence of SSRI-induced akathisia. Results And Conclusion: Our results suggest that SSRIs are effective and reasonably safe in elderly depressive patients with co-morbid physical illness. Adverse effects are more common than in younger and physically healthy patients, but are generally tolerable. The risk profile of SSRIs in this population can be considered favorable. abstract_id: PUBMED:20492663 Anxiety and depression in association with morbid obesity: changes with improved physical health after duodenal switch. Background: Patients with morbid obesity have an increased risk for anxiety and depression. The "duodenal switch" is perhaps the most effective obesity surgery procedure for inducing weight loss. However, to our knowledge, data on symptoms of anxiety and depression after the duodenal switch are lacking. Furthermore, it has been hypothesized that self-reported physical health is the major predictor of symptoms of depression in patients with morbid obesity. We therefore investigated the symptoms of anxiety and depression before and after the duodenal switch procedure and whether post-operative changes in self-reported physical health were predictive of changes in these symptoms. Methods: Data were assessed before surgery (n = 50), and one (n = 47) and two (n = 44) years afterwards. Symptoms of anxiety and depression were assessed by the "Hospital Anxiety and Depression Scale", and self-reported physical health was assessed by the "Short-Form 36" questionnaire. Linear mixed-effects models were used to investigate changes in the symptoms of anxiety and depression. Correlation and linear multiple regression analyses were used to study whether changes in self-reported physical health were predictive of post-operative changes in the symptoms of anxiety and depression. Results: The symptom burden of anxiety and depression was high before surgery but normalized one and two years afterwards (P < 0.001). The degree of improvement in self-reported physical health was associated with statistically significant reductions in the symptoms of anxiety (P = 0.003) and depression (P = 0.004).
Conclusions: The novelty of this study is the large and sustained reductions in the symptoms of anxiety and depression after the duodenal switch procedure, and that these changes were closely associated with improvements in self-reported physical health. abstract_id: PUBMED:24939806 Depression and the older medical patient--when and how to intervene. Depression in the elderly, particularly those with chronic physical health problems, is a common, but complex problem. In this paper we review the research literature on both the epidemiology and management of depression in the older medical patient. After a general overview of depression in the elderly, we discuss some of the particular issues relevant to depression and co-morbid physical illness amongst elderly patients. Depression can be difficult to diagnose in medically unwell older adults, particularly when there is substantial overlap in symptomatology. The epidemiology and evidence base for the treatment of depression in a number of chronic health problems common in an older adults population are then discussed, specifically cardiac disease, cerebrovascular disease, cancer, chronic kidney disease, chronic obstructive pulmonary disease, and Parkinson's disease. For many of these conditions there is emerging evidence that treatments can be effective in reducing depressive symptoms. However, these potential benefits need to be balanced against the often-increased risk of adverse events or interactions with medical treatments. Although co-morbid depression is consistently associated with poorer medical outcomes, there is limited evidence that standard anti-depressive therapy has additional benefits in terms of physical health outcomes. Collaborative care models appear particularly well suited to medically unwell older adult patients, and may provide more generalised benefits across both mental and physical health measures. abstract_id: PUBMED:17472762 The role of perinatal problems in risk of co-morbid psychiatric and medical disorders in adulthood. Background: Perinatal problems may be associated with an increased risk for psychological and physical health problems in adulthood, although it is unclear which perinatal problems (low birthweight, preterm birth, low Apgar scores, and small head circumference), or what clusters of problems, are more likely to be associated with later health problems. It is also not known whether perinatal problems (singly or together) are associated with co-morbidity between psychological and physical health problems. Method: A regional random sample (from Baltimore) of mothers and their children (n=1525) was followed from birth to adulthood (mean age 29 years). Perinatal conditions were measured at delivery. Psychological problems (depression and suicidal ideation) were measured with the General Health Questionnaire-28 (GHQ-28) and physical problems (asthma and hypertension) with the RAND-36 Health Status Inventory. Results: Children with perinatal problems were generally at increased risk for depression, suicidal ideation and hypertension, and co-morbid depression and hypertension even after controlling for confounders. One possible underlying condition, preterm low birthweight (LBW), extracted by cluster analysis, considering all of the four perinatal problems, was associated with increased risk for psychological and physical health outcomes as well as co-morbidity of the two. 
Conclusions: LBW, preterm birth and small head circumference singly increased the risk for both psychological and physical health problems, as well as co-morbid depression and hypertension, while low Apgar scores were only associated with psychological problems. Delineating different etiological processes, such as preterm LBW, considering various perinatal problems simultaneously, might be of benefit to understanding the fetal origin of adult illness and co-morbidity. abstract_id: PUBMED:35001405 Associations between Developmental Assets and Adolescent Health Status: Findings from the 2016 National Survey of Children's Health. Background: Developmental assets foster positive health outcomes among adolescents, but have not been studied in adolescents with chronic illness or depression, two conditions that impact behaviors in school. We examined parent-reported assets in a national sample of adolescents and compared the number and types of assets by health statuses. Methods: Data were from the 2016 National Survey of Children's Health (N = 15,734 adolescents), which captured 15 of 40 assets in the Developmental Assets Framework. We categorized adolescents as healthy; chronic physical illness alone; depression alone; and chronic physical illness with co-morbid depression. Data were analyzed using analysis of variance and logistic regression. Results: Healthy adolescents and those with chronic physical illness alone were comparable in number and types of assets. Adolescents with chronic physical illness and co-morbid depression had fewer assets compared to healthy adolescents and those with chronic physical illness alone. Similar associations were found in comparing healthy adolescents to those with depression without chronic physical illness. Conclusions: The presence of depression, among adolescents with and without chronic physical illness, was associated with fewer internal and external assets. The absence of assets may serve as a unique indicator of underlying depressive symptoms among adolescents in the school setting. abstract_id: PUBMED:17013767 Co-occurrence of mental and physical illness in US Latinos. Background: This study describes the prevalence of comorbid physical and mental health problems in a national sample of US Latinos. We examined the co-occurrence of anxiety and depression with prevalent physical chronic illnesses in a representative sample of Latinos with national origins from Mexico, Cuba, Puerto Rico, and other Latin American countries. Method: We used data on 2,554 Latinos (75.5% response rate) ages 18 years and older from the National Latino and Asian American Study (NLAAS). The NLAAS was based on a stratified area probability sample design, and the sample came from the 50 states and Washington, DC. Survey questionnaires were delivered both in person and over the telephone in English and Spanish. Psychiatric disorders were assessed using the World Mental Health Survey Initiative version of the World Health Organization Composite International Diagnostic Interview (WMH-CIDI). Physical chronic illness was assessed by self-reported history. Results: Puerto Ricans had the highest prevalence of meeting criteria for any comorbid psychiatric disorder (more than one disorder). Puerto Ricans had the highest prevalence (22%) of subject-reported asthma history, while Cubans had the highest prevalence (33%) of cardiovascular disease. 
After accounting for age, sex, household income, number of years in the US, immigrant status, and anxiety or depression, anxiety was associated with diabetes and cardiovascular disease, in the entire sample. Depression and co-occurring anxiety and depression were positively associated with having a history of asthma but not with other physical diseases, in the entire sample. Interestingly, Puerto Ricans with a depressive disorder had a lower odds of having a history of cardiovascular disease than Puerto Ricans without a depressive disorder. The relationship between chronic physical and mental illness was not confounded by immigration status or number of years in the US. Discussion: Despite previous findings that link acculturation with both chronic physical and mental illness, this study does not find that number of years in the US nor nativity explain the prevalence of psychiatric-medical comorbidities. This study demonstrates the importance of considering psychiatric and medical comorbidity among specific ethnic groups, as different patterns emerge than when using aggregate ethnic measures. Research is needed on both the pathways and the mechanisms of comorbidity for the specific Latino groups. Answer: Yes, co-morbid depressive illness does magnify the impact of chronic physical illness. A population-based cross-sectional survey in Canada found that the presence of co-morbid Major Depressive Disorder (MDD) was associated with significantly greater likelihood of health-care utilization, increased functional disability, and work absence compared to the presence of a chronic physical illness without co-morbid MDD. This impact was observed across six common chronic physical illnesses, with the strongest associations seen for work absence (PUBMED:16608557). Additionally, depression has been shown to predict future emergency hospital admissions in primary care patients with chronic physical illness. Factors such as having no partner, experiencing threatening life events, and having had a recent emergency hospital admission were independently associated with an increased risk of prospective emergency admission to hospital, alongside depression (PUBMED:26919799). Moreover, systematic reviews have indicated that interventions for care of people with co-morbid chronic medical illness and anxiety/depression disorders show improvements in anxiety/depression but not in the co-morbid medical disorder, suggesting that the mental health aspect of the co-morbidity is significant and requires attention (PUBMED:21714361). Furthermore, depression in the elderly with chronic physical health problems is a complex issue, and while treatments can be effective in reducing depressive symptoms, they need to be balanced against the risk of adverse events or interactions with medical treatments. Co-morbid depression is consistently associated with poorer medical outcomes, although standard anti-depressive therapy does not necessarily improve physical health outcomes (PUBMED:24939806). In summary, the presence of co-morbid depressive illness exacerbates the impact of chronic physical illness on functional status, health-care use, and risk of hospital admission, underscoring the need for greater efforts to detect and treat depression in individuals with chronic physical illness.
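Several of the surveys summarised in this answer express the contribution of co-morbid depression as odds ratios from regression models that adjust for sociodemographic factors and physical illness burden. A minimal sketch of that style of analysis on a purely synthetic dataset (every variable name and effect size here is invented for illustration), assuming pandas, NumPy and statsmodels are available:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Synthetic cohort of patients who all have a chronic physical illness.
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "female": rng.integers(0, 2, n),
    "illness_burden": rng.poisson(1.5, n),   # number of chronic conditions
    "mdd": rng.binomial(1, 0.15, n),         # co-morbid major depression
})

# Simulate work absence so that depression roughly doubles the odds.
linpred = -3.0 + 0.01 * df["age"] + 0.3 * df["illness_burden"] + np.log(2.0) * df["mdd"]
df["work_absence"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

model = smf.logit("work_absence ~ mdd + age + female + illness_burden", data=df).fit(disp=0)
adjusted_or = np.exp(model.params["mdd"])
ci_low, ci_high = np.exp(model.conf_int().loc["mdd"])
print(f"Adjusted OR for MDD: {adjusted_or:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

The adjusted odds ratio recovered from the fit should sit near the simulated value of 2, mirroring the "approximately double the likelihood" finding reported for co-morbid MDD.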
Instruction: Is orthopedics more competitive today than when my attending matched? Abstracts: abstract_id: PUBMED:24836166 Is orthopedics more competitive today than when my attending matched? An analysis of National Resident Matching Program data for orthopedic PGY1 applicants from 1984 to 2011. Objective: This study evaluated supply and demand trends for orthopedic postgraduate year 1 (PGY1) positions from 1984 to 2011 for the purpose of estimating national intercandidate competition over time. Design: National Resident Matching Program (NRMP) data for orthopedic surgery from 1984 to 2011 were collected. Proxy variables including (total number of orthopedic applicants/number of orthopedic PGY1 positions), (number of US senior applicants to orthopedics/number of orthopedic PGY1 positions), (number of US seniors matching into orthopedics/number of US senior orthopedic applicants), (total number of matched orthopedic applicants/total number of orthopedic applicants), and (total number of US applicants who fail to match into orthopedics/total number of US senior applicants into orthopedics) as well as average United States Medical Licensing Examination Step 1 scores were used to gauge the level of competition between candidates and were compared over time. Setting: Academic medical center in the Midwestern United States. Participants: Medical professors and medical students. Results: The NRMP data suggested that the number of applicants per position decreased or remained stable since 1984 and that the percentage of applicants who did not match was no higher than in the past. This finding was primarily because of the relative decrease in the ratio of applicants to available PGY1 positions, which stems from the number of positions increasing more rapidly than the number of applicants. Conclusions: The NRMP data from 1984 to 2011 supported our hypothesis that intercandidate competition intensity for orthopedic PGY1 positions has not increased over time. The misconception that orthopedics is becoming more competitive likely arises from the increased number of applications submitted per candidate and the resulting relative importance placed on objective criteria such as United States Medical Licensing Examination Step 1 scores when programs select interview cohorts. abstract_id: PUBMED:4048637 Rehabilitation of the physically handicapped--even today an ongoing task for orthopedics? Rehabilitation of disabled people has at all times been an essential element of orthopaedics. The great advances made in orthopaedic surgery over the last few decades have, however, not made rehabilitative activity superfluous. On the contrary, better medical restoration provides a better starting base for vocational rehabilitation. Rehabilitation also has better technical possibilities at its disposal today than it used to, which contributes to its greater success. Most important of all, however, has been the progress brought about by a fundamental change in how rehabilitation is conceived: today, we consider the disabled person a full and equal member of our society, whose integration into competitive employment we mean to foster to the greatest extent possible. abstract_id: PUBMED:26793639 Comparison of quality of clinical supervision as perceived by attending physicians and residents in university teaching hospitals in Tehran. Background: Clinical supervision is an important factor in the development of competency in residency programs. Attending physicians play a key role in supervision of residents.
However, little is known about how attending physicians and residents perceive the quality of clinical supervision. The aim of this study was to explore the differences between perceived qualities of supervision in these two groups in different wards in teaching hospitals in Tehran, Iran. Methods: A valid questionnaire was completed by 219 attending physicians and residents from surgery, psychiatry, gynecology, pediatrics, internal medicine, orthopedics and radiology wards in two teaching hospitals affiliated with Iran University of Medical Sciences. The questionnaire contained 15 items regarding supervisory roles, rated on a five-point Likert scale (1=never, 2=seldom, 3=sometimes, 4=often, 5=always). Results: Out of 219 participants, 90 (41%) were attending physicians and 129 (59%) were residents. The overall mean±SD scores of perceived clinical supervision reported by attending physicians and residents were 4.20±0.5 and 3.00±0.7, respectively, a difference that was statistically significant (p<0.05). Attending physicians and residents gave their lowest scores (mean = 4.06 and 2.7, respectively) to items concerning what is expected from the supervisor to know and do during the residency training period. Conclusion: It seems that clinical supervision is not performed efficiently in teaching hospitals and needs to be further assessed and improved. It is therefore suggested that policymakers in the medical education system pay more attention to this important issue and strengthen faculty development programs for clinical educators in Iran. abstract_id: PUBMED:16151748 Damage control orthopedics Background: In the management of patients with multiple injuries, the concept of damage control orthopedics (DCO) is still a matter of controversy. Thus, the clinical value of DCO remains unclear and should be evaluated on an evidence-based level by a review of the current literature. Results: The work of various authors has demonstrated an association between injury severity and the clinical immuno-inflammatory response and its prognostic relevance regarding organ dysfunction or organ failure and clinical outcome. Research data published by the authors and other investigators have clearly demonstrated an additional inflammatory response caused by surgical trauma, which is significantly higher after primary intramedullary fracture treatment than after external fracture stabilization. In contrast, a generally minor inflammatory response seems to be associated with intramedullary nailing for secondary conversion osteosynthesis. Three retrospective cohort studies have shown a reduction of organ dysfunction and an improvement of survival with the DCO approach. Simultaneously, it was demonstrated that primary external fracture fixation and secondary conversion to definitive osteosynthesis is not associated with an increased rate of local or systemic complications. Conclusions: The advocates of DCO claim that patients with multiple injuries including severe brain and chest injuries, as well as those with an unstable cardiopulmonary or circulatory condition, are at high risk of developing a severe systemic immuno-inflammatory reaction during early total fracture care. Therefore, they recommend primary minimally invasive external fracture stabilization in these patients to avoid additional surgical trauma, and that definitive secondary fracture care should be performed after medical stabilization of the patient in intensive care.
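The Tehran supervision survey above reports only summary statistics for each group (mean ± SD of 4.20 ± 0.5 for 90 attending physicians versus 3.00 ± 0.7 for 129 residents) together with p < 0.05, and does not state which test was applied. One plausible way to reproduce a comparison from those summaries alone is a two-sample t-test computed directly from the reported means, SDs and group sizes, sketched below:

```python
from scipy import stats

# Group summaries as reported in the abstract: mean, SD, n.
attendings = dict(mean=4.20, std=0.5, nobs=90)
residents = dict(mean=3.00, std=0.7, nobs=129)

# Welch's t-test (unequal variances) from summary statistics only.
t, p = stats.ttest_ind_from_stats(
    mean1=attendings["mean"], std1=attendings["std"], nobs1=attendings["nobs"],
    mean2=residents["mean"], std2=residents["std"], nobs2=residents["nobs"],
    equal_var=False,
)
print(f"t = {t:.1f}, p = {p:.3g}")  # consistent with the reported p < 0.05
```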
abstract_id: PUBMED:26869965 Two Polarities of Attention in Social Contexts: From Attending-to-Others to Attending-to-Self. Social attention is one special form of attention that involves the allocation of limited processing resources in a social context. Previous studies on social attention often concern how attention is directed toward socially relevant stimuli such as faces and gaze directions of other individuals. In contrast to attending-to-others, a different line of research has shown that self-related information such as one's own face and name automatically captures attention and is preferentially processed compared to other-related information. These contrasting behavioral effects between attending-to-others and attending-to-self prompt me to consider a synthetic viewpoint for understanding social attention. I propose that social attention operates between two polarized states: at one extreme, the individual tends to attend to the self and prioritize self-related information over others', and, at the other extreme, attention is allocated to other individuals to infer their intentions and desires. Attending-to-self and attending-to-others mark the two ends of an otherwise continuous spectrum of social attention. For a given behavioral context, the mechanisms underlying these two polarities will interact and compete with each other in order to determine a saliency map of social attention that guides our behaviors. An imbalanced competition between these two behavioral and cognitive processes will cause cognitive disorders and neurological symptoms such as autism spectrum disorders and Williams syndrome. I have reviewed both behavioral and neural evidence that support the notion of polarized social attention, and have suggested several testable predictions to corroborate this integrative theory for understanding social attention. abstract_id: PUBMED:33535284 Nanotechnology applied to the transport of antibiotics in orthopedics and traumatology Bone and implant infections are a real problem in orthopedics. The formation of biofilm, as well as pathogens that are multi-resistant to antibiotics, makes fighting them a difficult challenge with the tools we have today. With the aim of knowing the current state of nanotechnology applied to the transport of antibiotics in traumatology and orthopedics, and its projection into the future, we conducted a bibliographic review in June 2019. While further development of the topic and studies in humans are still lacking, experimental studies show that nanotechnology applied to antibiotic transport promises to be an important weapon in the treatment of bone infections in the future. abstract_id: PUBMED:35341710 A Multi-Center Comparison of Orthopaedic Attending and Resident Learning Styles. Objective: Effective education of orthopedic residents requires an understanding of how they process information. To date, however, no literature has described resident learning styles based on the updated Kolb Learning Style Inventory (KLSI) v4.0. The purpose of this study is to identify common learning styles amongst orthopedic residents and attendings and evaluate the effect that race, gender, and resident/attending status have on learning styles. Design: The KLSI v4.0 and a demographic survey were distributed to 103 orthopedic attendings and residents at two academic centers during the 2019 to 2020 academic year. Frequencies and descriptive statistics were reported. Learning styles based on gender, race, attending versus resident status, and institution were evaluated.
A p-value < 0.05 was considered significant. Setting: This is a multi-center study performed at two academic, university-based orthopedic surgery departments. Participants: Orthopaedic surgery residents and attending surgeons. Results: At both institutions, the combined response rate was 66% for the KLSI v4.0 and 68% for the demographic survey. The three most common learning styles recorded were: Deciding (26.5%), Acting (17.6%), and Thinking (17.6%). Learning styles were compared by gender, race, attending and/or resident status, and institution, with no statistically significant difference found between any of the comparisons (p > 0.05). Conclusion: The majority of orthopedic surgeons have Deciding, Acting, or Thinking learning styles, which are characterized by motivation to achieve goals, disciplined and logical reasoning, and the use of theories and models to solve problems. However, not all residents and attendings utilize these common learning styles. A mismatch in learning styles between residents and attendings could result in poor educational experiences. Understanding the learning styles of orthopedic surgeons has implications for improving evaluation interpretation, mentorship pairing, quality of life, and resident remediation. abstract_id: PUBMED:21266005 Chronic pain and psychiatric morbidity: a comparison between patients attending specialist orthopedics clinic and multidisciplinary pain clinic. Objective: The objective of this study was to examine the associations between chronic pain and psychiatric morbidity using interview-based assessments of psychiatric symptomatology. We compared the prevalence of common mental disorder (CMD; consistent with neurotic and somatic symptoms, fatigue, and negative affect), depression, and anxiety disorder(s), and associated factors with these psychiatric illnesses among Chinese patients with chronic pain attending a specialist orthopedics clinic and a multidisciplinary pain clinic. Methods: A total of 370 patients with chronic pain were recruited from an Orthopedics Clinic (N=185) and a Pain Clinic (N=185) in Hong Kong. Psychiatric morbidity was assessed using the Revised Clinical Interview Schedule. Individual scores for neurotic symptoms and neurotic disorders (including depression and four types of anxiety disorders) were also calculated. Results: The reported lifetime prevalence rates of CMD were 35.3% and 75.3% for the Orthopedics and Pain Clinic samples, respectively. Rates of depression and anxiety disorders in the Pain Clinic (57.1% and 23.2%, respectively) were significantly higher than those in the Orthopedics sample (20.2% and 5.9%, respectively) (all P<0.001). Pain characteristics including number of pain sites, pain duration, pain intensity, and pain interference were all significantly associated with psychiatric morbidity after controlling for sociodemographic factors. Pain duration and litigation/compensation status consistently predicted concurrent pain intensity and disability. Conclusions: Chronic pain is associated with psychiatric morbidity. The higher rate of depression than anxiety disorder(s) among patients with chronic pain is consistent with previous studies that have found depression to be highly prevalent in chronic pain. abstract_id: PUBMED:16237889 History of orthopedic surgery in Belgrade--100 years of orthopedics in Serbia (1905-2005) The history of orthopedics in Serbia is related to a hand x-ray made in 1905 by Dr. Nikola Krstić.
The first orthopedic ward was founded in 1919, to be enlarged into a full-fledged orthopedic surgical ward of the General State Hospital in 1932. Until 1941, the ward was headed by Dr. Nikola Krstić. The Orthopedics course was headed by Dr. Borivoje Gradojević, who also wrote the first textbook in our country in 1934. In 1947, the ward became the Clinic for Orthopedic Surgery and Traumatology in Belgrade, which, together with the Special Orthopedic Surgery Hospital Banjica, remains the orthopedic basis of Serbia even today. abstract_id: PUBMED:23955523 Technical orthopedics. Importance in an increasingly operatively oriented faculty The foundation of the German Society for Orthopedics in 1901 was due to a separation from the faculty of surgery because a surgical approach alone did not adequately deal with the symptoms. Orthopedists were initially considered a fringe group. The conservative treatment approach was initially at the forefront and operative measures were a sideline. The main aim was the rehabilitation of patients into a normal life as far as possible. In the conservative area, treatment with orthopedic technical aids and appliances rapidly came to play an important role, and a great multitude of technical appliances were developed with sometimes very different possible applications. Despite the clearly improved operative treatment approaches in orthopedics and trauma surgery, technical orthopedics still plays a substantial role even today. Healing and supportive aids and appliances are of decisive importance for the treatment of a multitude of diseases and handicaps. They stabilize and improve operative treatment results and often result in new approaches. This depends on cooperation between technicians, therapists and physicians in a team, even in the scientific field. Evidence-based studies on the effectiveness of technical aids are currently still uncommon, but recently some clear evidence for effectiveness could be shown. Scientifically this is a very varied field of work. The demographic development presents new requirements which must be dealt with. Technical solutions are often very promising, especially in this field. Technical orthopedics remains an important component of the specialty of orthopedics and trauma surgery, with an increasing tendency due to more recent research and development.
Instruction: Management of cancer of the ampulla of Vater: does local resection play a role? Abstracts: abstract_id: PUBMED:14506332 Management of cancer of the ampulla of Vater: does local resection play a role? Background: The clinical outcome of patients with ampullary carcinoma is significantly more favorable than for patients with pancreatic head carcinoma. The Whipple procedure is the operation of choice for both diagnoses. Still local resection is recommended in selected cases. The aim of this study was to assess the outcome of local resection of cancer of the ampulla of Vater by comparison with pancreaticoduodenectomy. Method: 92 patients with cancer of the ampulla of Vater treated between 1975 and 1999 with local resection (n = 10), pancreatic resection (n = 49) or laparotomy and no resection (n = 33) were studied retrospectively. The main outcome measures were postoperative morbidity and mortality, surgical radicality and long-term survival. Results: The postoperative complication rate was significantly lower after local resection (p = 0.036) whereas mortality did not differ between the 2 resection groups. UICC stages were less advanced in the local resection group (p &lt; 0.04). Still, the frequency of positive resection margins and RO resections was the same in both groups, as was long-term survival. Local recurrence was diagnosed in 8/10 (80%) patients after local and in 11/49 (22%) patients after pancreatic resection (p = 0.001). Conclusion: Pancreaticoduodenectomy is the preferred operation for cancer of the ampulla of Vater in patients who are fit for the procedure. Local resection plays a limited role in carefully selected patients. abstract_id: PUBMED:22111080 Role of transduodenal ampullectomy for tumors of the ampulla of Vater. Purpose: Tumors arising from the ampulla of Vater can be benign or malignant. Recently, endoscopic papillectomy has been employed in the management of benign ampulla of Vater tumors; however, surgical resection is the treatment of choice. The aim of this study was to define indications and suggest a role for transduodenal ampullectomy in the management of ampulla of Vater tumors. Methods: We retrospectively reviewed the medical records of 54 patients treated for ampulla of Vater tumors between January 1999 and December 2008. Results: Twenty-two endoscopic papillectomies and 21 transduodenal ampullectomies were performed. Four patients underwent transduodenal ampullectomy after endoscopic papillectomy due to a recurrent or remnant tumor. Recurrence or a remnant tumor was found in one patient after transduodenal ampullectomy compared to six patients after endoscopic papillectomy. Immediate intraoperative conversion from transduodenal ampullectomy to pancreaticoduodenectomy was performed in five patients based on intraoperative frozen biopsy analysis. Conclusion: Transduodenal ampullectomy should be performed to treat ampulla of Vater tumors that are unsuitable for endoscopic papillectomy. Transduodenal ampullectomy can serve as an intermediate treatment option between endoscopic papillectomy and pancreaticoduodenectomy in the management of ampulla of Vater tumors. abstract_id: PUBMED:23450004 The role of local excision in invasive adenocarcinoma of the ampulla of Vater. Background: Ampulla of Vater carcinomas are rare malignancies that have been traditionally treated with radical surgical resection. Given the mortality associated with pancreaticoduodenectomy, some patients may benefit from local resection. 
A single-institution outcomes analysis was performed to define the role of local resection. Methods: Patients undergoing local resection (ampullectomy) for ampullary carcinomas at Duke University between 1976 and 2010 were analyzed retrospectively. Time-to-event analysis was conducted analyzing all patients undergoing surgery, with and without adjuvant chemoradiation therapy (CRT). Overall survival (OS), local control (LC), metastases-free survival (MFS), and disease-free survival (DFS) were studied using Kaplan-Meier analysis. Results: A total of 17 patients with invasive carcinoma underwent ampullectomy. The 3-and 5-year LC, MFS, DFS and OS rates were 36% and 24%, 68% and 54%, 31% and 21%, and 35% and 21%, respectively. Patients receiving adjuvant CRT did not appear to have improved outcomes compared with surgery alone, although this group tended to have poorer histological grade, more advanced tumor staging and involved surgical margins. Conclusions: Ampullectomy for invasive ampullary adenocarcinomas is a safe procedure but does not offer satisfactory long-term results, mostly due to high local failure rates. Adjuvant CRT therapy does not appear to offer increased local control or survival benefit following ampullectomy, although these results may suffer from selection bias and small sample size. Local resection should be limited to benign ampullary lesions or patients with very small, early tumors with favorable histologic features where radical resection is not feasible. abstract_id: PUBMED:23930053 Prognostic analysis of carcinoma of the ampulla of Vater: pancreaticoduodenectomy versus local resection. Background And Aim: Pancreaticoduodenectomy (PD) is considered to be the optimal treatment for carcinoma of the ampulla of Vater, but the trauma caused by PD is often severe and extensive. Local resection (LR) for ampullary tumors has been performed for a century but remains controversial. The use of this procedure for benign conditions is clear, but its place, if any, in the management of ampullary carcinoma is debated. The aim of this study was to investigate the outcomes and analyse the prognostic factors of LR of carcinoma of the ampulla of Vater by comparison with PD. Patients And Methods: A retrospective analysis of 71 patients of carcinoma of the ampulla of Vater was conducted at Zhejiang Cancer Hospital from January 1995 to December 2005. We investigated the differences of the baseline characteristics and the intra- and postoperative data of patients who underwent PD and LR. Prognostic factors for recurrence and survival of carcinoma of the ampulla of Vater between PD and LR was also analysed. Results: Among the 71 patients of ampullary carcinoma who underwent surgical resection, a PD was performed in 46 (64.8%) patients while a LR was performed in 25 (35.2%) patients. The 30-day mortality rate associated with PD (6.5%) was not different from that with LR (0%; p=0.547) while the morbidity following PD (30.4%) and LR (8.0%) was statistically different (p=0.031). The complications were also significantly higher in the PD group than the LR group (34.8% vs 6.5%; p=0.013). In a univariate Cox regression analysis of survival, there were significant differences in tumor size (p=0.031), TNM (Tumor Node Metastasis) stage (p=0.000), pT (pathologic Tumor) stage (p=0.010), pN (pathologic Node) stage (p=0.000), differentiation (p=0.026), and surgical margin (p=0.031). 
Multivariate Cox regression analysis showed that TNM stage (HR=3.640, 95% CI 1.428~9.282; p=0.007), pT stage (HR=3.090, 95% CI 1.230~7.762; p=0.016), and pN stage (HR=4.479, 95% CI 1.524~13.161; p=0.005) remained independent predictors of survival rates. According to the method of Kaplan-Meier, the five-year survival rate in the PD group was 53.5% and that in the LR group was 48.0%; no significant differences were found between the two groups in overall survival rates (p=0.540). Compared with PD, the 5-year survival of patients with TNM stage III/IV disease who underwent LR was significantly lower (11.1% vs 38.1%; p=0.040). As expected, there were significant differences in overall survival between the two groups for pT stage T3/T4 (47.4% vs 18.2%, p=0.018) and pN stage N1 (36.8% vs 11.1%, p=0.004), respectively. Tumor recurrence was diagnosed in 10/43 (23.3%) patients after PD and 12/25 (48.0%) patients after LR (p=0.035). Logistic regression analysis of recurrence showed that TNM stage-III/IV (p=0.004), pT stage-T3/T4 (p=0.034), and pN stage-N1 (p=0.007) were associated with a 2.444, 1.943, and 2.111-fold increased risk of recurrence, respectively. Conclusions: PD is the preferred operation for carcinoma of the ampulla of Vater. LR carries lower mortality and morbidity than PD and is a suitable treatment in patients with a low-risk cancer in stages I/II or pT1/T2 N0 with a maximum diameter of 2 cm or less. TNM stage, pT stage, and pN stage remained independent predictors of survival rates. abstract_id: PUBMED:26977032 Partial Resection of the Pancreatic Head and Duodenum for Management of Carcinoma of the Ampulla of Vater: A Case Report. A 57-year-old woman presented with spontaneous pain in the upper right quadrant of the abdominal region of one year's duration. Contrast-enhanced computed tomography (CT), magnetic resonance imaging, and magnetic resonance cholangiopancreatography revealed the presence of a tumour in the periampullary region, gallstones, cholecystitis, and biliary obstruction, as well as atrophy of the pancreas and dense adhesions involving the pancreas, portal vein, and superior mesenteric vein. Duodenoscopy revealed a papillary neoplasm, measuring 2.5×3 cm, in the descending duodenum. Pathological analysis of the duodenoscopic biopsy suggested carcinoma of the ampulla of Vater. Partial resection of the pancreatic head and duodenum, together with lymph node dissection and digestive tract reconstruction, was performed. Postoperatively, the patient recovered well. CT at 14 months postoperatively showed no recurrence or metastasis. This surgical procedure avoids the potential risk of pancreaticoduodenectomy and retains the function of the pancreas as much as possible, while achieving radical tumour resection. abstract_id: PUBMED:27756031 Curative resection of carcinoma of the ampulla of Vater with lymph node metastases around the abdominal aorta after chemotherapy: A case report. Introduction: For carcinoma of the ampulla of Vater, lymph node metastasis around the abdominal aorta is an inoperable factor equivalent to distant metastasis, such as hepatic metastasis or peritoneal carcinomatosis, making the cancer unresectable. Presentation Of Case: A 53-year-old man was referred to our hospital and was diagnosed as having carcinoma of the ampulla of Vater with lymph node metastases around the abdominal aorta. Although only chemotherapy was initially scheduled, the chemotherapy was effective, and the metastases were dramatically reduced after 4 cycles of chemotherapy.
Curative surgical resection was performed. Discussion: There were only eight case reports describing curative resections of initially unresectable biliary tract carcinomas excluding intrahepatic cholangiocellular carcinoma after chemotherapy. Conclusion: Curative surgical resection after chemotherapy may be a feasible treatment plan in patients with unresectable biliary tract cancer. abstract_id: PUBMED:8916877 The management of tumors of the ampulla of Vater by local resection. Objective: The authors report on indications and results of local excision of tumors of the ampulla of Vater. Summary Background Data: Local excision of ampullary tumors has been performed for nearly a century but remains controversial. The use of this procedure for benign conditions is clear, but its place, if any, in the management of ampullary malignancy is debated. Methods: The presentation, evaluation, and treatment of 26 patients who underwent local resection of ampullary tumors between January 1987 and November 1994 are reviewed. Results: There were 16 men and 10 women, with a median age of 58 years. Eighteen patients had adenomas, whereas 8 patients had adenocarcinomas. Patients presented predominantly with jaundice (50%), pain (35%), and pancreatitis (27%) and were evaluated with endoscopic retrograde cholangiopancreatography and biopsy. All patients with benign lesions had accurate preoperative biopsies. Two of eight patients shown intraoperatively to have malignant lesions had preoperative biopsies read as benign. There were no deaths. Postoperative complications included two wound infections and one episode each of cholangitis, lower gastrointestinal bleeding, and adhesive gastrointestinal obstruction. All patients had prompt resolution of jaundice if present before surgery, and the mean postoperative stay was 7.5 days. Six of eight patients with malignant lesions have had recurrent disease. Conclusions: Local excision of malignant ampullary tumors is effective palliative therapy when the patient is unfit for the Whipple procedure. Ampullary resection usually is curative for patients with benign lesions without a polyposis syndrome. In this series, intraoperative frozen section routinely was accurate. abstract_id: PUBMED:2194412 Local resection of tumors of the ampulla of Vater. Local resection of an ampullary tumor with reimplantation of the pancreatic and bile ducts was first described by William S. Halsted in 1899. Technical hazard and unsuitability in malignant ampullary tumors have unfortunately led to a disregard for this operation that is unwarranted. Radical pancreaticoduodenectomy is now the most common method of resecting benign and malignant ampullary tumors. Experience was gained with two high-risk patients with benign adenomatous polyps obstructing the ampulla of Vater. Their medical unsuitability for radical pancreaticoduodenectomy led us to revive the procedure of wide local excision of these tumors with reimplantation of the pancreatic and bile ducts. Operative time and blood loss were substantially less than radical resection and postoperative recoveries were relatively uncomplicated. Radical resection of benign ampullary tumors may be appropriate for good-risk patients in whom the risk of local recurrence outweighs the operative risk. We suggest that local resection of benign ampullary tumors is the procedure of choice in high-risk patients and that it be considered in palliation of limited local malignancies of the ampulla in high-risk patients. 
abstract_id: PUBMED:9168753 Villous tumors of the ampulla of Vater. Patients with villous tumors of the ampulla of Vater usually present with jaundice, intermittent or constant, but may seek care for abdominal pain, intestinal hemorrhage, or pancreatitis. Because villous tumors may harbor carcinoma in 30 to 50 per cent of cases, appropriate management may require radical resection. We have managed four patients with villous lesions of the ampulla of Vater occurring in 1981, 1992, 1993, and 1995. Three were villous (two with malignant change) and one was a villoglandular adenoma. Treatment consisted of local excision with reimplantation of the ducts in one patient, Whipple resection in two patients, and biliopancreatic bypass in one who had distant nodal metastases not resectable for cure. This patient died 18 months after operation of an unrelated disease, but the others were well at last follow-up. The presentation as well as the diagnostic and therapeutic considerations in the management of villous tumors of the ampulla of Vater are discussed. abstract_id: PUBMED:11776824 Local resection of carcinoma of ampulla of Vater: an analysis of 52 cases Objective: To study the therapeutic results of patients with carcinoma of the ampulla of Vater treated by local tumor resection. Methods: Four cases of carcinoma of the ampulla of Vater treated in our hospital by local tumor resection and 48 cases reported in Chinese medical literature in 1992-1997 were analyzed. Results: A total of 52 cases of local resection of carcinoma of the ampulla of Vater was collected. The operative mortality was 3.8% and the morbidity was 9.6%. Recurrence occurred in 12 cases, of whom 3 had poorly differentiated adenocarcinoma, 7 had their tumor incompletely resected with tumor left at the cutting edge and one had local invasion. Within 8 months to 4 years, 19 patients died. Thirty-two patients survived for 7 months to 8 years after operation. Conclusion: The prognosis of local resection is related not only to tumor size, extent of invasion, and lymph node metastasis, but also to pathologic type and differentiation. It is a treatment of choice for early carcinoma of the ampulla of Vater in patients who are at high risk due to old age, poor general condition or severe concurrent diseases.
However, it is important to note that patients receiving adjuvant chemoradiation therapy after local resection do not appear to have improved outcomes, although this could be influenced by selection bias and small sample sizes (PUBMED:23450004). Local resection may be considered curative for benign lesions or very small, early-stage tumors with favorable histologic features when radical resection is not feasible (PUBMED:23450004). In high-risk patients, local resection can be an effective palliative therapy when the patient is unfit for the Whipple procedure (PUBMED:8916877). Additionally, local resection with reimplantation of the pancreatic and bile ducts has been performed successfully in high-risk patients with benign adenomatous polyps (PUBMED:2194412). In conclusion, while pancreaticoduodenectomy remains the preferred operation for carcinoma of the ampulla of Vater, local resection may be suitable for patients with low-risk cancer in early stages or for those who cannot tolerate more radical surgery. The decision to perform local resection should be based on a thorough evaluation of tumor size, extent of invasion, lymph node metastasis, pathologic types, differentiation, and the patient's overall health status (PUBMED:11776824).
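The long-term survival comparisons cited in this answer (for example, the five-year figures of 53.5% after PD and 48.0% after LR reported in PUBMED:23930053) are Kaplan-Meier product-limit estimates. As a purely illustrative aside, the sketch below shows how such an estimate is derived from follow-up times and event indicators; the function and all follow-up values are hypothetical and are not taken from the cited studies.

```python
# Minimal Kaplan-Meier product-limit sketch with made-up follow-up data
# (times in months; event=1 means death, event=0 means censored).
# Illustrative only - not data from the cited studies.

def kaplan_meier(times, events):
    """Return [(time, survival)] stepping down at each observed event time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tm, ev in data if tm == t and ev == 1)
        removed = sum(1 for tm, ev in data if tm == t)
        if deaths > 0:
            survival *= (at_risk - deaths) / at_risk
            curve.append((t, survival))
        at_risk -= removed
        i += removed
    return curve

if __name__ == "__main__":
    # Hypothetical cohort: follow-up in months after resection.
    times = [6, 12, 14, 20, 24, 30, 36, 40, 48, 60, 60, 72]
    events = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0]
    for t, s in kaplan_meier(times, events):
        print(f"month {t:>3}: estimated survival {s:.2f}")
    # Five-year (60-month) survival figures like those quoted above are read
    # off curves of this kind, computed separately for the PD and LR groups.
```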
Instruction: Are welders more at risk of respiratory infections? Abstracts: abstract_id: PUBMED:6463913 Respiratory symptoms and pulmonary function of welders in the engineering industry. We have studied respiratory symptoms, smoking habits, chest radiographs, sickness absence, and pulmonary function among 258 welders and an equal number of matched control subjects in three engineering factories. Welders who smoked had a higher frequency of chronic phlegm production than control subjects but there was no difference in cough or dyspnoea. The frequency of abnormality on chest radiographs was low and similar in welders and controls. Upper respiratory infections were a more frequent cause of sickness absence in welders than in controls but no difference was found in other respiratory diseases. FEV1 and peak expiratory flow rate were similar in welders and controls. In a subset of 186 subjects the maximum expiratory flow rate at low lung volumes was significantly less in welders who smoked than in control subjects who smoked, but there was no difference in non-smokers. Welders working under these conditions in the engineering industry appear to have no increased risk of chronic obstructive lung disease. abstract_id: PUBMED:27030577 Are welders more at risk of respiratory infections? Findings from a cross-sectional survey and analysis of medical records in shipyard workers: the WELSHIP project. Background: Exposure to welding fume increases the risk of pneumococcal infection; whether such susceptibility extends to other respiratory infections is unclear. We report findings from a survey and from medical consultation data for workers in a large shipyard in the Middle East. Methods: Between January 2013 and December 2013, we collected cross-sectional information from 529 male workers variously exposed to welding fume. Adjusted ORs for respiratory symptoms (cough, phlegm, wheezing, shortness of breath and 'chest illness') were estimated using multivariable logistic regression. Subsequently, we examined consultation records from 2000 to 2011 for 15 954 workers who had 103 840 consultations for respiratory infections; the associations between respiratory infections and levels of welding exposure were estimated using a count regression model with a negative binomial distribution. Results: 13% of surveyed workers reported respiratory symptoms, with a higher prevalence in winter, particularly among welders. The adjusted OR in welders versus other manual labourers was 1.72 (95% CI 1.02 to 3.01) overall and 2.31 (1.05 to 5.10) in winter months; no effect was observed in summer. The risk of consultation for respiratory infections was higher in welders than in manual labourers, with an adjusted incidence rate ratio of 1.45 (1.59 to 1.83) overall, 1.47 (1.42 to 1.52) in winter and 1.33 (1.23 to 1.44) in summer (interaction, p<0.001). Conclusions: The observation that respiratory symptoms and consultations for respiratory infection in welders are more common in winter may indicate an enhanced vulnerability to a broad range of infections. If confirmed, this would have important implications for the occupational healthcare of a very large, global workforce. abstract_id: PUBMED:28571199 An Unusual Case of Eosinophilia. Various inflammatory markers have been used to show an association between welding and respiratory tract disorders due to inhalation of fumes.
We hereby present a case of 19-year-old male, welder by occupation who presented with upper respiratory tract infection and was documented to have persistent moderate eosinophilia on serial Complete Blood Count (CBC) examination. This was also confirmed by bone marrow examination which was suggestive of increased eosinophilic precursors. Eosinophils are an inflammatory marker and are increased most commonly in respiratory tract of welders due to inhalation of metal fumes. Treatment with steroid is gratifying and provides short term symptomatic relief. Avoidance of metal fumes and/or change of job is the long term preventive measure. Welding occupation, as a risk factor, should be considered for causation of persistent respiratory tract inflammation with eosinophilia. abstract_id: PUBMED:9418945 Investigations on immune parameters in welders. The aim of the present investigation was to study the effects of welding fumes on the human immune system. Thirty male subjects who had regularly welded and 16 control persons without occupational exposure were examined. Cellular immunity was evaluated by phenotyping of peripheral leucocytes, measurement of mitogenic T cell response and T cell stimulation in a heterologous mixed lymphocyte reaction. Non-specific immune reactions were quantified by oxidative burst of granulocytes and monocytes and the cytotoxicity of lymphokine-activated killer (LAK) cells. Serum immunoglobulin levels and immunoglobulin production by stimulated B cells served to demonstrate humoral immune reactions. Welding fumes retarded the kinetics of DNA synthesis after phytohaemagglutinin stimulation of T cells and reduced the cytotoxic activity of LAK cells. No effects on lymphocytic subpopulations, mixed lymphocyte reaction, the phagocytosis of leucocytes or the production of immunoglobulins were observed. Several welders reported on recurrent respiratory infections or bronchitis, a few on allergic skin reactions and one worker was affected by asthmatic symptoms. With the exception of a reduced activity of LAK cells, these effects could not be related to any impairment of immune reactions as they were measured by the immunotoxicity tests applied. abstract_id: PUBMED:32980007 Exposure to welding fumes suppresses the activity of T-helper cells. Welders have an increased susceptibility to airway infections with non-typeable Haemophilus influenzae (NTHi), which implicates immune defects and might promote pneumonia and chronic obstructive pulmonary disease (COPD). We hypothesized that welding-fume exposure suppresses Th1-lymphocyte activity. Non-effector CD4+ T-cells from blood of 45 welders (n = 23 gas metal arc welders, GMAW; n = 16 tungsten inert gas welders, TIG; n = 6 others) and 25 non-welders were ex vivo activated towards Th1 via polyclonal T-cell receptor stimulation and IL-12 (first activation step) and then stimulated with NTHi extract or lipopolysaccharide (LPS) (second activation step). IFNγ and IL-2 were measured by ELISA. In the first activation step, IFNγ was reduced in welders compared to non-welders and in the GMAW welders with higher concentrations of respirable particles compared to the lower exposed TIG welders. IFNγ was not influenced by tobacco smoking and correlated negatively with welding-fume exposure, respirable manganese, and iron. In the second activation step, NTHi and LPS induced additional IFNγ, which was reduced in current smokers compared to never smokers in welders as well as in non-welders. 
Analyzing both activation steps together, IFNγ production was lowest in smoking welders and highest in never smoking non-welders. IL-2 was not associated with any of these parameters. Welding-fume exposure might suppress Th1-based immune responses due to effects of particulate matter, which mainly consists of iron and manganese. For responses to NTHi this is strongest in smoking welders because welding fume suppresses T-cell activation towards Th1 and cigarette smoke suppresses the subsequent Th1-response to NTHi via LPS. Both effects are independent from IL-2-regulated T-cell proliferation. This might explain the increased susceptibility to infections and might promote COPD development. abstract_id: PUBMED:32071859 Arc-welders' pneumoconiosis with atypical radiological and bronchoalveolar lavage fluid findings: A case report. Arc-welders' pneumoconiosis (AWP) is an occupational lung disease and has nonspecific symptoms typically with the patterns of centrilobular and/or branching opacities on chest high-resolution computed tomography (HRCT) which are similar to those of hypersensitivity pneumonitis (HP) and/or respiratory tract infections. Therefore, the differential diagnosis is often difficult if they are not suspected. We report a case of AWP which was initially suspected to be pulmonary tuberculosis because of the chest HRCT findings: centrilobular opacities distributed predominantly on the right lobe. On detailed review of the work history, however, the patient was found to be involved in welding. Prussian blue staining of the lung tissues and the bronchoalveolar lavage fluid (BALF) ferritin analysis were useful for the final diagnosis and the appropriate treatment for AWP. The atypical lymphocytosis in BALF in this case suggested the involvement of HP in the pathogenesis due to the occupational sensitization to causal antigens. To the best of our knowledge, this is the first case report of AWP showing features of HP. AWP should be noted even in patients with the typical patterns of centrilobular opacities on chest HRCT. Medical history, iron staining of lung tissues, and the BALF ferritin analysis would be useful for the diagnosis of these patients. The BALF findings are sometimes indeterminate for the diagnosis because the occupational sensitization to causal antigens might be involved in some cases of AWP. abstract_id: PUBMED:27103350 Are welders more at risk of respiratory infections? N/A abstract_id: PUBMED:28278175 Pneumococcal infection of respiratory cells exposed to welding fumes; Role of oxidative stress and HIF-1 alpha. Welders are more susceptible to pneumococcal pneumonia. The mechanisms are yet unclear. Pneumococci co-opt the platelet activating factor receptor (PAFR) to infect respiratory epithelial cells. We previously reported that exposure of respiratory cells to welding fumes (WF), upregulates PAFR-dependent pneumococcal infection. The signaling pathway for this response is unknown, however, in intestinal cells, hypoxia-inducible factor-1 α (HIF 1α) is reported to mediate PAFR-dependent infection. We sought to assess whether oxidative stress plays a role in susceptibility to pneumococcal infection via the platelet activating factor receptor. We also sought to evaluate the suitability of nasal epithelial PAFR expression in welders as a biomarker of susceptibility to infection. Finally, we investigated the generalisability of the effect of welding fumes on pneumococcal infection and growth using a variety of different welding fume samples. 
Nasal epithelial PAFR expression in welders and controls was analysed by flow cytometry. WF were collected using standard methodology. The effect of WF on respiratory cell reactive oxygen species production, HIF-1α expression, and pneumococcal infection was determined using flow cytometry, HIF-1α knockdown and overexpression, and pneumococcal infection assays. We found that nasal PAFR expression is significantly increased in welders compared with controls and that WF significantly increased reactive oxygen species production, HIF-1α and PAFR expression, and pneumococcal infection of respiratory cells. In unstimulated cells, HIF-1α knockdown decreased PAFR expression and HIF-1α overexpression increased PAFR expression. However, in knockdown cells pneumococcal infection was paradoxically increased and in overexpressing cells infection was unaffected. Nasal epithelial PAFR expression may be used as a biomarker of susceptibility to pneumococcal infection in order to target individuals, particularly those at high risk such as welders, for the pneumococcal vaccine. Expression of HIF-1α in unexposed respiratory cells inhibits basal pneumococcal infection via PAFR-independent mechanisms. abstract_id: PUBMED:21060340 Common infections and the risk of stroke. The occurrence of stroke in populations is incompletely explained by traditional vascular risk factors. Data from several case-control studies and one large study using case series methodology indicate that recent infection is a temporarily acting, independent trigger factor for ischemic stroke. Both bacterial and viral infections, particularly respiratory tract infections, contribute to this association. A causal role for infection in stroke is supported by a graded temporal relationship between these conditions, and by multiple pathophysiological pathways linking infection and inflammation, thrombosis, and stroke. Furthermore, observational studies suggest that influenza vaccination confers a preventive effect against stroke. Case-control and prospective studies indicate that chronic infections, such as periodontitis, chronic bronchitis and infection with Helicobacter pylori, Chlamydia pneumoniae or Cytomegalovirus, might increase stroke risk, although considerable variation exists in the results of these studies, and methodological issues regarding serological results remain unresolved. Increasing evidence indicates that the aggregate burden of chronic and/or past infections rather than any one single infectious disease is associated with the risk of stroke. Furthermore, genetic predispositions relating to infection susceptibility and the strength of the inflammatory response seem to co-determine this risk. Here, we summarize and analyze the evidence for common acute and chronic infectious diseases as stroke risk factors. abstract_id: PUBMED:16636389 Risk factors of bronchial asthma in children Case representative selective study of children from 1 to 15 years old age in Batumi, with 2110 children population has been carried out. Risk factors were studied by interviewing method, using two step estimation (on the basis of primary and secondary questionnaire) of the disease developing risk. For statistical analysis of the data SPSS v. 12 was used. Study findings confirm causal connection (with high relative risk) and significant probability of the disease (with high attributive risk), with "major risk factors": allergic diseases and atopic dermatitis in relatives and with "minor risk factors": allergic rhinitis. 
Relatively weak risk factors (with low relative risk) but with high prevalence in the child population of Batumi were found: respiratory infections and bottle-feeding. Revealing the individual and population risk phenomenology of bronchial asthma in the child population of Batumi, given that most of the risk factors can be monitored, will promote asthma prevention and population health care in a more rational and cost-effective way. Answer: Yes, welders appear to be more at risk of respiratory infections. A study conducted on shipyard workers, including welders, found that welders had a higher prevalence of respiratory symptoms, particularly in winter, and an increased risk of consultation for respiratory infections compared to other manual laborers. The adjusted incidence rate ratio for respiratory infections was higher in welders, indicating an enhanced vulnerability to a broad range of infections (PUBMED:27030577). Additionally, welders have been reported to have an increased susceptibility to airway infections with non-typeable Haemophilus influenzae, which may be due to suppressed T-helper cell activity as a result of exposure to welding fumes (PUBMED:32980007). Furthermore, exposure to welding fumes has been associated with upregulation of the platelet activating factor receptor (PAFR) in respiratory cells, which increases susceptibility to pneumococcal infection (PUBMED:28278175). However, one study found that welders in the engineering industry did not have an increased risk of chronic obstructive lung disease, although a subset of welders who smoked showed a significantly lower maximum expiratory flow rate at low lung volumes compared to control subjects who smoked (PUBMED:6463913). Despite these findings, the overall risk of chronic respiratory diseases among welders may not be significantly elevated under certain working conditions.
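The adjusted incidence rate ratios quoted in this answer were estimated in PUBMED:27030577 with a negative binomial count regression. The sketch below shows only the simpler, crude (unadjusted) version of that quantity: an incidence rate ratio with a Wald confidence interval. The function name and all counts are hypothetical assumptions for illustration and are not the WELSHIP data.

```python
import math

# Crude incidence rate ratio (IRR) with a Wald 95% CI. The published analysis
# adjusted for covariates via negative binomial regression; this sketch only
# illustrates the unadjusted quantity, using invented counts.

def incidence_rate_ratio(cases_exposed, persontime_exposed,
                         cases_ref, persontime_ref):
    rate_exposed = cases_exposed / persontime_exposed
    rate_ref = cases_ref / persontime_ref
    irr = rate_exposed / rate_ref
    # Variance of log(IRR) for crude rates is approximately 1/a + 1/b.
    se_log = math.sqrt(1.0 / cases_exposed + 1.0 / cases_ref)
    lo = math.exp(math.log(irr) - 1.96 * se_log)
    hi = math.exp(math.log(irr) + 1.96 * se_log)
    return irr, lo, hi

if __name__ == "__main__":
    # Hypothetical: 900 infection consultations over 4,000 person-years in
    # welders versus 1,500 over 10,000 person-years in other manual workers.
    irr, lo, hi = incidence_rate_ratio(900, 4000, 1500, 10000)
    print(f"crude IRR = {irr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```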
Instruction: Does offering pricing information to resident physicians in the emergency department potentially reduce laboratory and radiology costs? Abstracts: abstract_id: PUBMED:24849608 Does offering pricing information to resident physicians in the emergency department potentially reduce laboratory and radiology costs? Objectives: The aim of this study was to establish whether price list information could reduce laboratory and radiological examination costs in emergency departments (EDs). Materials And Methods: A prospective survey of adult (>16 years old) admissions was conducted at the ED of a university hospital in Belgium. Nine resident emergency physicians were followed for a span of 6 months, which was divided into 2-month periods: control (October and November 2011), intervention (December 2011 to January 2012), and washout (February and March 2012). Laboratory and radiological costs for each of the daily admissions were calculated during the respective periods and compared. Results: A total of 3758 patients were registered: 1093 in period 1 (control), 1329 in period 2 (intervention), and 1336 in period 3 (washout). We observed significant reductions in examination costs: 10.73% (P=0.015) for laboratory and 33.66% (P<0.001) for radiological costs in period 2 versus period 1; 5.02% (P=0.014) for laboratory and 40.00% (P<0.001) for radiological costs in period 3 versus period 1. In addition, we found that laboratory examination costs increased slightly between periods 2 and 3 (+6.4%), whereas costs related to radiologic examinations continued to decrease (-10.16%); however, these differences were not statistically significant. Conclusion: We conclude that the distribution of price lists at EDs promotes cost awareness, which can result in significant decreases in examination costs. abstract_id: PUBMED:20093935 Cutting costs: the impact of price lists on the cost development at the emergency department. It was shown that physicians working at the Swedish emergency department (ED) are unaware of the costs for investigations performed. This study evaluated the possible impact of price lists on the overall laboratory and radiology costs at the ED of a Swedish university hospital. Price lists including the most common laboratory analyses and radiological investigations at the ED were created. The lists were distributed to all internal medicine physicians by e-mail and exposed above their working stations continually. No lists were provided for the orthopaedic control group. The average costs for laboratory and radiological investigations during the months of June and July 2007 and 2008 were calculated. Neither clinical nor admission procedures were changed. The physicians were blinded towards the study. Statistical analysis was performed using the Student's t-test. A total of 1442 orthopaedic and 1585 medical patients were attended to in 2007. In 2008, 1467 orthopaedic and 1637 medical patients required emergency service. The average costs per patient were 980.27 SKR (98€)/999.41 SKR (100€, +1.95%) for orthopaedic and 1081.36 SKR (108€)/877.3 SKR (88€, -18.8%) for medical patients. Laboratory costs decreased by 9% in orthopaedic and 21.4% in medical patients. Radiology costs changed +5.4% in orthopaedic and -20.59% in medical patients. The distribution and promotion of price lists as a tool at the ED to heighten cost awareness resulted in a major decrease in the investigation costs. A significant decrease in radiological costs could be observed.
It can be concluded that price lists are an effective tool to cut costs in public healthcare. abstract_id: PUBMED:28659219 Evaluating physician awareness of common health care costs in the emergency department. Background: Health care costs are on the rise in Canada and the sustainability of our health care system is at risk. As gatekeepers to patient care, emergency department (ED) physicians have a direct impact on health care costs. We aimed to identify current levels of cost awareness among ED physicians. By understanding the current level of physician cost awareness, we hope to identify areas where cost education would provide the greatest benefit in reducing ordering costs. Methods: We conducted a survey evaluating current awareness of common ordering costs among ED physicians from two tertiary teaching hospitals. Our study population comprised 124 certified emergency medicine staff physicians and emergency medicine resident physicians. Our survey asked ED physicians to estimate the costs of 41 items across four categories of day-to-day ordering: imaging investigations, materials, laboratory tests, and pharmaceuticals. Items were selected based on frequency of use, availability of cost-effective alternatives, and tests considered to be "low yield". The primary outcome was percentages of underestimates, correct estimates, and overestimates for ED costs among ED physicians. Results: The average percentage of correct cost estimates among ED physicians was 14% across the four ordering categories. Where cost-effective alternatives exist, ED physicians overestimated the cost of the more cost-effective item. They also underestimated the cost of low-yield tests. Interpretation: ED physicians demonstrated limited cost awareness of common health care costs. Further studies that characterize utilization of hospital resources based on ED physician awareness of cost-effective alternatives and cost of "low yield" tests are needed. abstract_id: PUBMED:9451310 Information sharing can reduce laboratory use by emergency physicians. This study analyzed the effect information sharing through physician profiling would have on emergency physician behavior. It is a before-and-after audit of laboratory use in a community hospital. A 9-month control period was followed by a 15-month period in which the physicians' laboratory use was presented and discussed at monthly meetings. The laboratory use decreased 17.8%, from a mean of 2.36 studies per patient during the control period to 1.94 during the final quarter of the study. The actual laboratory costs per month decreased 17.7%, from a mean of $32,415 per month to $26,687 per month. There was only one possible adverse outcome out of 34,320 patients seen. There were no adverse changes in other quality improvement indicators. Information sharing can result in a decrease in the number and cost of laboratory studies ordered by emergency physicians without an adverse change in routine quality improvement indicators.
Results: A total of 286 independent result communication events occurred during the study period, the vast majority of which occurred via telephone (232/286). Emergency radiologists spent 10% of their working time communicating results. Similar amounts of time were spent discussing negative and positive cross-sectional imaging examinations. In a small minority of communication events, additional information was gathered through communication that resulted in a change of interpretation from a normal to an abnormal study. Conclusions: Effective and efficient result communication is critical to care delivery in the emergency department setting. Discussion regarding abnormal cases, both in person and over the phone, is encouraged. However, in the emergency setting, time spent on routine direct communication of negative examination results in advance of the final report may lead to increased disruptions and longer turnaround times, and may negatively impact patient care. In very few instances does the additional information gained from the communication event result in a change of interpretation. abstract_id: PUBMED:25754801 Satisfaction of imaging report rendered in emergency setting: a survey of radiology and referring physicians. Rationale And Objectives: To determine physicians' preference toward three types of structured imaging reports (basic structured report [BSR], itemized report [IR], and point-and-click report [PCR]) used in emergency radiology. Materials And Methods: Survey questions were created and considered valid and reliable based on index of item objective congruence from three specialists (>0.75) and a pilot of 25 subjects (Cronbach alpha, 0.83-1.00). Respondents included trainees and attendings in radiology and referring physicians working in the academic emergency department at the time of survey rollout. They were provided report examples of each type and asked to complete a questionnaire consisting of the following five parts: demographics, necessity of imaging report, report quality (content, format and organization, and language), process of reporting, and components of imaging report. For rating scores, higher values indicate greater preference and agreement. Results: The survey received a 79.5% response rate. Respondents included 101 physicians (mean age, 29.4 years; 61 radiology physicians and 40 referring physicians; 81 trainees and 20 attending). Overall, IR was preferred over PCR and BSR by all physicians with scores (out of 10) as follows: IR, 7.62-8.83; PCR, 6.62-8.55; BSR, 5.23-6.65; P < .001. IR received scores (out of 5) of 4.03-4.37, PCR 3.32-4.52, and BSR 2.59-3.86 for report quality. For process of reporting, IR had scores (out of 5) of 3.80-4.56, PCR 2.79-4.09, and BSR 2.32-3.56. Conclusions: In the emergency setting, physicians preferred IR over PCR and BSR. IR and PCR were equal in report quality metrics, but IR was most preferred in the process of reporting. BSR ranked last in both quality and process.
The responses of permanent and resident physicians were then compared with actual data defined by the Belgian legislation. Physicians' error was calculated as a percentage of the real value using the formula [(real-estimated)/real]×100. Results: Fifty questionnaires were fully completed and analysed. Estimated costs of diagnostic procedures (chest radiograph, ECG, pulmonary computed tomography, Doppler of the legs, cardiac ultrasound, ventilation/perfusion scintigraphy), laboratory tests (standard, D-dimers, arterial blood gases), drugs (alteplase, enoxaparine, acenocoumarol) and hospitalization (emergency department, intensive care and pneumology units) were within 25% of real costs for 38, 14, 18 and 30%, respectively, of permanent physicians and 31, 12, 8 and 27% of resident physicians. Drug prices were generally largely overestimated. Mean error of the physicians' estimates of the radiation dose of imaging modalities (chest radiograph, computed tomography and scintigraphy) was 1805% for permanent physicians versus 4997% for resident physicians. There was no significant difference between the two groups for the different items studied. Conclusion: Emergency doctors, whether permanent physicians or resident physicians, have a limited knowledge of both costs and radiation doses of investigations and treatments they prescribe every day. abstract_id: PUBMED:8953958 Distribution of emergency department costs. Study Objective: To report the distribution of emergency department costs by category of expense and level of patient urgency. Methods: Cost-to-charge and relative-value methods were used to determine direct and indirect physician, facility, supply, pharmacy, laboratory, radiology, and miscellaneous costs for 24,010 ED patients. Explicit criteria were used to classify patient visits as nonurgent, semiurgent, or urgent. Results: For all patients, the average costs were physician, $64; facility, $84; laboratory, $21; radiology, $24; and total, $209. Laboratory and radiology costs accounted for 5% of the total costs for nonurgent visits and 23% of the total costs for urgent visits. Conclusion: The distribution of ED costs varies significantly according to the urgency of the medical condition. For nonurgent patient visits, most costs are represented by the hospital facility and ED physicians' costs. Ancillary services represent a much greater proportion of costs for patients with urgent conditions. Although reduced test-ordering might result in some savings among patients with urgent conditions, overall improved cost efficiency can be achieved only through reductions in the fixed costs of operation of hospital EDs. abstract_id: PUBMED:28495346 Changing the electronic request form proves to be an effective tool for optimizing laboratory test utilization in the emergency department. Objectives: Appropriate laboratory utilization more often than not needs to be initiated by the laboratory. This study was performed to analyze the impact on test ordering patterns in the emergency department obtained by omitting certain tests from the electronic tick box request form. The tests could still be ordered by writing the full name of the test or by a phone call. Methods: Erythrocyte sedimentation rate (ESR), fibrinogen, aspartate aminotransferase (AST), calcium and lipase were omitted from the electronic request form and could subsequently be ordered either by phone or a typed-in request. A reflex testing protocol was elaborated for reduction of creatine kinase (CK) and CK-MB analyses.
All interventions were introduced with prior consultation with clinical staff and according to current guidelines. The reduction of test orders and costs in the post-intervention period was assessed. All data were retrieved retrospectively from the laboratory information system (LIS). Results: Disappearance from the tick box request form resulted in a significant decrease in the number of requests for targeted tests in the post-intervention year, mostly affecting AST and fibrinogen (83% and 79% reduction of ordering, respectively), followed by a 58% reduction in calcium orders, and 54% and 43% reductions in ESR and lipase requests, respectively. A substantial reduction in CK requests was also observed, while CK-MB requests almost disappeared. Annual cost savings that emerged from all implemented interventions were estimated to be 19,445€. Conclusion: Significant reduction in ordering of selected tests was achieved simply by limiting their availability in hospital computerized order entry (COE) system. The present data suggest that removal of laboratory tests from the electronic request form can be an effective tool for changing physicians' test ordering behavior. abstract_id: PUBMED:3177988 Physician assessments of practice patterns in emergency department radiograph interpretation. Emergency and radiology department directors at 517 acute-care US teaching hospitals were sent identical questionnaires surveying practice patterns of ED plain film radiograph interpretation. Fifty-seven percent of ED directors and 51% of radiology department (RD) directors responded. Both groups reported that the most common practice pattern (60%) was alternating responsibility for immediate interpretation between emergency physicians and radiologists during a 168-hour week. Remaining hospitals were nearly equally divided between systems in which radiologists always provided immediate interpretation and those in which the emergency physicians always provided the initial interpretation. In all systems in which emergency physicians provided some immediate radiograph interpretation, radiologists provided subsequent review. Institutions varied as to the level of training of the physician in both the ED and RD who provided immediate radiograph interpretation; at least 30% of the institutions responding were reported to have either emergency physicians or radiologists of resident level providing immediate interpretation at least part of the 24-hour day. Interpretation discrepancy rates between emergency physicians and radiologists were estimated to be 4% to 6%. The large amount of variation among hospitals in ED radiograph interpretation systems suggests that these systems were not designed with quality of care concerns as their primary criterion. As a consequence, objective patient outcome and process measures need to be developed and measured to ensure that a particular internal system for ED radiograph interpretation is providing an acceptably high standard of patient care. Answer: Yes, offering pricing information to resident physicians in the emergency department has been shown to potentially reduce laboratory and radiology costs. A study conducted in a Belgian university hospital found that the distribution of price lists to resident emergency physicians led to significant reductions in examination costs, with a 10.73% decrease for laboratory and 33.66% decrease for radiological costs during the intervention period compared to the control period. 
Even during the washout period, the reductions persisted for radiological costs, which decreased by 40.00% compared to the control period (PUBMED:24849608). Similarly, a Swedish study observed that the distribution and promotion of price lists at the emergency department resulted in a major decrease in investigation costs, with laboratory costs decreasing by 21.4% and radiology costs by 20.59% for medical patients after the introduction of price lists (PUBMED:20093935). These findings suggest that providing cost information to physicians can promote cost awareness and lead to more cost-effective ordering behaviors in the emergency department setting.
Instruction: Do the effects of quality improvement for depression care differ for men and women? Abstracts: abstract_id: PUBMED:15550798 Do the effects of quality improvement for depression care differ for men and women? Results of a group-level randomized controlled trial. Objective: We sought to examine whether a quality improvement (QI) program for depression care is effective for both men and women and whether their responses differed. Design: We instituted a group-level, randomized, controlled trial in 46 primary care practices within 6 managed care organizations. Clinics were randomized to usual care or to 1 of 2 QI programs that supported QI teams, provider training, nurse assessment and patient education, and resources to support medication management (QI-Meds) or psychotherapy (QI-Therapy). Patients: There were 1299 primary care patients who screened positive for depression and completed at least one questionnaire during the course of 24 months. Outcome Measures: Outcomes were probable depression, mental health-related quality of life (HRQOL), work status, use of any antidepressant or psychotherapy, and probable unmet need, which was defined as having probable depression but not receiving probable appropriate care. Results: Women were more likely to receive depression care than men over time, regardless of intervention status. The effect of QI-Meds on probable unmet need was delayed for men, and the magnitude of the effect was significantly greater for men than for women; therefore, this intervention reduced differences in probable unmet need between men and women. QI reduced the likelihood of probable depression equally for men and women. QI-Therapy had a greater impact on mental HRQOL and work status for men than for women. QI-Meds improved these outcomes for women. Conclusions: To affect both quality and outcomes of care for men and women while reducing gender differences, QI programs may need to facilitate access to both medication management and effective psychotherapy for depression. abstract_id: PUBMED:19590610 Quality improvement in depression care in the Netherlands: the Depression Breakthrough Collaborative. A quality improvement report. Background: Improving the healthcare for patients with depression is a priority health policy across the world. Roughly, two major problems can be identified in daily practice: (1) the content of care is often not completely consistent with recommendations in guidelines and (2) the organization of care is not always integrated and delivered by multidisciplinary teams. Aim: To describe the content and preliminary results of a quality improvement project in primary care, aiming at improving the uptake of clinical depression guidelines in daily practice as well as the collaboration between different mental health professionals. Method: A Depression Breakthrough Collaborative was initiated from December 2006 until March 2008. The activities included the development and implementation of a stepped care depression model, a care pathway with two levels of treatment intensity: a first step treatment level for patients with non-severe depression (brief or mild depressive symptoms) and a second step level for patients with severe depression. Twelve months data were measured by the teams in terms of one outcome and several process indicators. Qualitative data were gathered by the national project team with a semi-structured questionnaire amongst the local team coordinators. Results: Thirteen multidisciplinary teams participated in the project. 
In total 101 health professionals were involved, and 536 patients were diagnosed. Overall 356 patients (66%) were considered non-severely depressed and 180 (34%) patients showed severe symptoms. The mean percentage of non-severe patients treated according to the stepped care model was 78%, and 57% for the severely depressed patient group. The proportion of non-severely depressed patients receiving a first step treatment according to the stepped care model, improved during the project, this was not the case for the severely depressed patients. The teams were able to monitor depression symptoms to a reasonable extent during a period of 6 months. Within 3 months, 28% of monitored patients had recovered, meaning a Beck Depression Inventory (BDI) score of 10 and lower, and another 27% recovered between 3 and 6 months. Conclusions And Discussion: A stepped care approach seems acceptable and feasible in primary care, introducing different levels of care for different patient groups. Future implementation projects should pay special attention to the quality of care for severely depressed patients. Although the Depression Breakthrough Collaborative introduced new treatment concepts in primary and specialty care, the change capacity of the method remains unclear. Thorough data gathering is needed to judge the real value of these intensive improvement projects. abstract_id: PUBMED:25316450 Patient-centred quality of care in an IVF programme evaluated by men and women. Study Question: Do men and women value the same aspects of quality of care during IVF treatment when measuring rates of importance by the validated instrument, quality from the patient's perspective of in vitro fertilization (QPP-IVF)? Summary Answer: Women valued most aspects of care as significantly more important than their partner although men and women evaluated the importance of the different care factors in a similar pattern. What Is Known Already: A few validated tools measuring patient-centred quality of care during IVF have been developed. Few studies of gender differences concerning experiences of patient-centred quality of care have been reported in the literature to date. Study Design, Size And Duration: A two-centre study was conducted between September 2011 and May 2012. Heterosexual couples (n = 497) undergoing IVF were invited to complete a questionnaire before receiving the result of the pregnancy test. Participants/materials, Setting, Methods: In all, 363 women and 292 men evaluated quality of care by answering the QPP-IVF questionnaire. The measurements consisted of two kinds of evaluations: the rating of the importance of various aspects of treatment (subjective importance) and the rating of perceived quality of care (perceived reality). Comparisons between men and women on importance ratings and perceived reality ratings were performed both on factor (subscale) and single item levels by intra-couple analyses and corrected for age. A stepwise multiple logistic regression analysis was performed in order to select baseline variables independently predicting evaluation at factor level. Main Results And The Role Of Chance: The response rate was 67.5%, with 363 women (74.2%) and 292 men (60.6%) completing the study. Both the woman and man responded in 251 couples. Women rated the different care aspects as significantly more important than their partner in all factors except the factor, 'Responsibility/continuity'. Both genders gave the factors, 'Medical care' and 'Information after treatment', the highest scores. 
At item level women rated the majority of items as significantly more important than men. Perceived reality for the majority of factors and items was similarly rated by men and women in the couples. For women, receiving embryo transfer, short duration of infertility, IVF as a method and number of previous cycles were independently correlated to the highest score of importance of certain factors. Limitations, Reason For Caution: The lower response rate of men compared with women (60.6 versus 74.2%, respectively) might have influenced the results through selection bias. Only patients who had adequate fluency in the Swedish language participated. Wider Implications Of The Findings: This study is an important contribution in comparing the needs of men and women undergoing IVF treatments. The QPP-IVF instrument is a suitable instrument for revealing important care aspects identified by both men and women and a useful tool for stimulating patient-centred quality improvements within and between clinics. Study Funding/competing Interest: The study was supported by the LUA/ALF agreement at Sahlgrenska University Hospital, Gothenburg, Sweden, and by Hjalmar Svensson's Research Foundation. None of the authors declared any conflict of interests. abstract_id: PUBMED:35076803 Improving Depression Screening in Primary Care: A Quality Improvement Initiative. The increase in depression during the COVID-19 pandemic underscores the importance of systematic approaches to identify individuals with mental health concerns. Primary care is often underutilized for depression screening, and it is not clear how practices can successfully increase screening rates. This study describes a quality improvement initiative to increase depression screening in five Family Medicine clinics. The initiative included four Plan-Do-Study-Act cycles that resulted in implementing a standardized workflow for depression screening, collaborative efforts with health information technology to prompt providers to perform screening via the medical record, delivering educational materials for providers and clinic staff and conducting follow-up education. Between September 2020 and April 2021 there were 23,745 clinic encounters with adult patients that were analyzed to determine whether patients were up-to-date on depression screening following their visit. A multi-level logistic regression model was constructed to determine the changes in likelihood of a patient being up-to-date on screening over the study period, while controlling for patient demographics and comorbidities. The average proportion of up-to-date patients increased from 61.03% in September 2020 to 82.33% in April 2021. Patients aged 65+ and patients with comorbidities were more likely to be up-to-date on screening; patients with telemedicine visits had lower odds of being up-to-date on depression screening. Overall, this paper describes a feasible, effective intervention to increase depression screening in a primary care setting. Additionally, we discuss lessons learned and recommendations to inform the design of future interventions. abstract_id: PUBMED:36103939 Operationalizing Depression Screening in Ambulatory Palliative Care: A Quality Improvement Project. Background: Depression is common in the palliative care setting and impacts outcomes. Operationalized screening is unusual in palliative care. Local Problem: Lack of operationalized depression screening at two ambulatory palliative care sites. 
Methods: A fellow-driven quality improvement initiative to implement operationalized depression screening using the patient health questionnaire-2 (PHQ-2). The primary measure was rate of EMR-documented depression screening. Secondary measures were clinician perspectives on the feasibility and acceptability of implementing the PHQ-2. Intervention: The intervention is a clinic-wide implementation of PHQ-2 screening supported by note templates, brief clinician training, referral resources for clinicians, and opportunities for indirect psychiatric consultation. Results: Operationalized depression screening rates increased from 2% to 38%. All clinicians felt incorporation of depression screening was useful and feasible. Conclusions: Operationalized depression screening is feasible in ambulatory palliative care workflow, though optimization through having screening be completed prior to clinician visit might improve uptake. abstract_id: PUBMED:28363686 Treatment and outcomes of acute coronary syndromes in women: An analysis of a multicenter quality improvement Chinese study. Background: Variations in care and outcomes by sex in patients with acute coronary syndrome (ACS) have been reported worldwide. The aims of this study are to describe ACS management according to sex in China and the effects of a quality improvement program in Chinese male and female ACS patients. Methods And Results: Clinical Pathways for Acute Coronary Syndromes - Phase 2 (CPACS-2) was a cluster randomized trial to test whether a clinical pathways-based intervention would improve ACS management in China. The study enrolled 15,141 hospitalized patients [4631 (30.6%) were women] from 75 hospitals throughout China between October 2007 and August 2010. The intervention included clinical pathway implementation and performance measurement using standardized indicators with 6 monthly audit-feedback cycles. Eight key performance indicators reflecting in-hospital management of ACS were measured. After adjustment for differences in patient characteristics and comorbidities at presentation, women were significantly less likely to undergo coronary angiography when indicated (RR 0.88 [0.85 to 0.92], P<0.001), less likely to receive guideline recommended medical therapies at discharge (RR 0.94 [0.91 to 0.98], P=0.003) and more likely to be hospitalized for a shorter time (mean difference -0.42 [-0.73 to -0.12] days, P=0.007). However, in-hospital clinical outcomes did not differ by sex. There was no evidence of heterogeneity in the relative effects of the quality improvement initiative by sex. Conclusions: Sex disparities were apparent in some key quality of care indicators for patients with suspected ACS presenting to hospitals in China. The beneficial effect of the quality improvement program was consistent in women and men. Clinical Trial Registration: http://www.anzctr.org.au/default.aspx. Unique identifier: ACTRN12609000491268. abstract_id: PUBMED:31271731 Implementation of a Perinatal Depression Care Bundle in a Nurse-Managed Midwifery Practice. Objective: To implement a perinatal depression care bundle at a midwifery practice to help certified nurse-midwives (CNMs) educate women about perinatal depression and direct those affected to mental health services. Design: Quality improvement project to implement a perinatal depression care bundle for care of pregnant women between 24 and 29 weeks gestation.
Setting/local Problem: CNMs practicing in a nurse-managed midwifery practice systematically screen all women for perinatal depression during pregnancy and the postpartum period but do not have a consistent method of providing anticipatory guidance about perinatal depression. Participants: All CNMs in the midwifery practice providing prenatal care (n = 16) participated in implementation. Intervention/measurements: The perinatal depression care bundle included three elements: (a) an educational handout; (b) a brief, provider-initiated discussion about perinatal depression; and (c) lists of local and online mental health resources. Four weeks after the care bundle was implemented, we conducted a retrospective chart review to assess CNMs' adherence to the new bundle. Results: Over 4 weeks, 51 prenatal visits met eligibility criteria for participation. CNMs implemented the perinatal depression care bundle for 22 (43.1%) eligible visits. CNM feedback indicated that the care bundle was brief, easy to incorporate into routine care, and well received by women. Conclusion: This project incorporated the use of a perinatal depression care bundle for women seen during routine prenatal care. Using a systematic approach to deliver perinatal depression education and resources reduces process variability and may destigmatize the illness, allowing women to feel empowered to seek help before depression symptoms become severe. abstract_id: PUBMED:28811733 Community Partners in Care: 6-Month Outcomes of Two Quality Improvement Depression Care Interventions in Male Participants. Objective: Limited data exist on approaches to improve depression services for men in under-resourced communities. This article explores this issue using a sub-analysis of male participants in Community Partners in Care (CPIC). Design: Community partnered, cluster, randomized trial. Setting: Hollywood-Metropolitan and South Los Angeles, California. Participants: 423 adult male clients with modified depression (PHQ-8 score≥10). Interventions: Depression collaborative care implementation using community engagement and planning (CEP) across programs compared with the more-traditional individual program, technical assistance (Resources for Services, RS). Main Outcome Measures: Depressive symptoms (PHQ-8 score), mental health-related quality of life (MHRQL), mental wellness, services utilization and settings. Results: At screening, levels of probable depression were moderate to high (17.5%-47.1%) among men across services sectors. Intervention effects on primary outcomes (PHQ-8 score and MHRQL) did not differ. Men in CEP compared with RS had improved mental wellness (OR 1.85, 95% CI 1.00-3.42) and reduced hospitalizations (OR .40, 95% CI .16-.98), with fewer mental health specialty medication visits (IRR 0.33, 95% CI .15-.69), and a trend toward greater faith-based depression visits (IRR 2.89, 95% CI .99-8.45). Conclusions: Exploratory sub-analyses suggest that high rates of mainly minority men in under-resourced communities have high prevalence of depression. A multi-sector coalition approach may hold promise for improving community-prioritized outcomes, such as mental wellness and reduced hospitalizations for men, meriting further development of this approach for future research and program design. abstract_id: PUBMED:30865875 Improving Postpartum Depression Screening in Pediatric Primary Care: A Quality Improvement Project. 
Background: Despite recommendations for standardized postpartum depression screening in primary care pediatrics, few pediatric healthcare providers are adequately screening mothers for postpartum depression. Aims: To improve standardized screening for postpartum depression in the pediatric primary care setting. Secondary aims were to determine if infant and family characteristics (gender of infant, feeding method, insurance type, income level, ethnicity of infant) were associated with positive postpartum depression screening. Methods: This quality improvement project involved implementing a standardized postpartum depression screening tool into pediatric primary care practice. Independent samples t-test and logistic regression were used for data analysis. Results: Postpartum depression screening practices improved from 83% to 88% (p = 0.096). Although not statistically significant, infant characteristics of male gender, Medicaid or sliding-scale payment for services, and Hispanic ethnicity were associated with higher rates of positive postpartum depression screens. Conclusions: Pediatric health care providers can effectively screen for postpartum depression. Certain infant and family characteristics may alert the provider to higher risks for mothers. abstract_id: PUBMED:28168592 Improving Perinatal Mental Health Care for Women Veterans: Description of a Quality Improvement Program. Purpose: We describe results from a quality improvement project undertaken to address perinatal mental healthcare for women veterans. Description: This quality improvement project was conducted in a single VA healthcare system between 2012 and 2015 and included screening for depressive symptoms with the Edinburgh Postnatal Depression Scale (EPDS) three times during the perinatal period, a dedicated maternity care coordinator (MCC), an on-site clinical social worker, and an on-site obstetrician/gynecologist (Ob/gyn). Information on prior mental health diagnosis was collected by the MCC or Ob/gyn. The prevalence of perinatal depressive symptoms and receipt of mental healthcare among those with such symptoms are reported by presence of a pre-pregnancy mental health diagnosis. Assessment: Of the 199 women who used VA maternity benefits between 2012 and 2015, 56% (n = 111) had at least one pre-pregnancy mental health diagnosis. Compared to those without a pre-pregnancy mental health diagnosis, those with such a diagnosis were more likely to be screened for perinatal depressive symptoms at least once (61.5% vs. 46.8%, p = 0.04). Prevalence of depressive symptoms was 46.7% among those with a pre-pregnancy mental health diagnosis and 19.2% among those without. Among those with a pre-pregnancy mental health diagnosis and depressive symptoms (n = 35), 88% received outpatient mental healthcare and 77% met with the clinical social worker. Among those without a pre-pregnancy mental health diagnosis and depressive symptoms (n = 8), none received outpatient mental healthcare, but 77.8% met with the clinical social worker. Conclusion: Improving perinatal mental healthcare for women veterans requires a multidisciplinary approach, including on-site integrated mental healthcare. Answer: Yes, the effects of quality improvement (QI) for depression care do appear to differ for men and women. According to a group-level randomized controlled trial, women were more likely to receive depression care than men over time, regardless of intervention status.
However, the effect of QI-Meds on probable unmet need was delayed for men, and the magnitude of the effect was significantly greater for men than for women, which reduced differences in probable unmet need between men and women. QI reduced the likelihood of probable depression equally for men and women. QI-Therapy had a greater impact on mental health-related quality of life (HRQOL) and work status for men than for women, while QI-Meds improved these outcomes for women (PUBMED:15550798). In another study, Community Partners in Care (CPIC), which was a community partnered, cluster, randomized trial, men in the community engagement and planning (CEP) intervention compared with the more traditional individual program, technical assistance (Resources for Services, RS), had improved mental wellness and reduced hospitalizations, with fewer mental health specialty medication visits, and a trend toward greater faith-based depression visits. This suggests that a multi-sector coalition approach may be promising for improving community-prioritized outcomes for men (PUBMED:28811733). These findings indicate that QI programs may need to be tailored to address the specific needs and responses of men and women to effectively improve the quality and outcomes of care for depression while reducing gender differences.
Instruction: Repeat epididymovasostomies: are they worthwhile? Abstracts: abstract_id: PUBMED:36831049 Gene-Environment Interactions in Repeat Expansion Diseases: Mechanisms of Environmentally Induced Repeat Instability. Short tandem repeats (STRs) are units of 1-6 base pairs that occur in tandem repetition to form a repeat tract. STRs exhibit repeat instability, which generates expansions or contractions of the repeat tract. Over 50 diseases, primarily affecting the central nervous system and muscles, are characterized by repeat instability. Longer repeat tracts are typically associated with earlier age of onset and increased disease severity. Environmental exposures are suspected to play a role in the pathogenesis of repeat expansion diseases. Here, we review the current knowledge of mechanisms of environmentally induced repeat instability in repeat expansion diseases. The current evidence demonstrates that environmental factors modulate repeat instability via DNA damage and induction of DNA repair pathways, with distinct mechanisms for repeat expansion and contraction. Of particular note, oxidative stress is a key mediator of environmentally induced repeat instability. The preliminary evidence suggests epigenetic modifications as potential mediators of environmentally induced repeat instability. Future research incorporating an array of environmental exposures, new human cohorts, and improved model systems, with a continued focus on cell-types, tissues, and critical windows, will aid in identifying mechanisms of environmentally induced repeat instability. Identifying environmental modulators of repeat instability and their mechanisms of action will inform preventions, therapies, and public health measures. abstract_id: PUBMED:29951815 Repeat Expansion Disease Models. Repeat expansion disorders are a group of inherited neuromuscular diseases, which are caused by expansion mutations of repeat sequences in the disease-causing genes. Repeat expansion disorders include a class of diseases caused by repeat expansions in the coding region of the genes, producing mutant proteins with amino acid repeats, mostly the polyglutamine (polyQ) diseases, and another class of diseases caused by repeat expansions in the noncoding regions, producing aberrant RNA with expanded repeats, which are called noncoding repeat expansion diseases. A variety of Drosophila disease models have been established for both types of diseases, and they have made significant contributions toward elucidating the molecular mechanisms of and developing therapies for these neuromuscular diseases. abstract_id: PUBMED:25653910 Tandem-repeat protein domains across the tree of life. Tandem-repeat protein domains, composed of repeated units of conserved stretches of 20-40 amino acids, are required for a wide array of biological functions. Despite their diverse and fundamental functions, there has been no comprehensive assessment of their taxonomic distribution, incidence, and associations with organismal lifestyle and phylogeny. In this study, we assess for the first time the abundance of armadillo (ARM) and tetratricopeptide (TPR) repeat domains across all three domains in the tree of life and compare the results to our previous analysis on ankyrin (ANK) repeat domains in this journal. All eukaryotes and a majority of the bacterial and archaeal genomes analyzed have a minimum of one TPR and ARM repeat. 
In eukaryotes, the fraction of ARM-containing proteins is approximately double that of TPR and ANK-containing proteins, whereas bacteria and archaea are enriched in TPR-containing proteins relative to ARM- and ANK-containing proteins. We show in bacteria that phylogenetic history, rather than lifestyle or pathogenicity, is a predictor of TPR repeat domain abundance, while neither phylogenetic history nor lifestyle predicts ARM repeat domain abundance. Surprisingly, pathogenic bacteria were not enriched in TPR-containing proteins, which have been associated with virulence factors in certain species. Taken together, this comparative analysis provides a newly appreciated view of the prevalence and diversity of multiple types of tandem-repeat protein domains across the tree of life. A central finding of this analysis is that tandem repeat domain-containing proteins are prevalent not just in eukaryotes, but also in bacterial and archaeal species. abstract_id: PUBMED:31072311 Resolving repeat families with long reads. Background: Draft-quality genomes for a multitude of organisms have become common due to the advancement of genome assemblers using long-read technologies with high error rates. Although current assemblies are substantially more contiguous than assemblies based on short reads, complete chromosomal assemblies are still challenging. Interspersed repeat families with multiple copy versions dominate the contig and scaffold ends of current long-read assemblies for complex genomes. These repeat families generally remain unresolved, as existing algorithmic solutions either do not scale to large copy numbers or cannot handle the current high read error rates. Results: We propose novel repeat resolution methods for large interspersed repeat families and assess their accuracy on simulated data sets with various distinct repeat structures and on Drosophila melanogaster transposons. Additionally, we compare our methods to an existing long-read repeat resolution tool and show the improved accuracy of our method. Conclusions: Our results demonstrate the applicability of our methods for the improvement of the contiguity of genome assemblies. abstract_id: PUBMED:37861103 Genetic modifiers of repeat expansion disorders. Repeat expansion disorders (REDs) are monogenic diseases caused by a sequence of repetitive DNA expanding above a pathogenic threshold. A common feature of the REDs is a strong genotype-phenotype correlation in which a major determinant of age at onset (AAO) and disease progression is the length of the inherited repeat tract. Over a disease-gene carrier's life, the length of the repeat can expand in somatic cells, through the process of somatic expansion which is hypothesised to drive disease progression. Despite being monogenic, individual REDs are phenotypically variable, and exploring what genetic modifying factors drive this phenotypic variability has illuminated key pathogenic mechanisms that are common to this group of diseases. Disease phenotypes are affected by the cognate gene in which the expansion is found, the location of the repeat sequence in coding or non-coding regions and by the presence of repeat sequence interruptions. Human genetic data, mouse models and in vitro models have implicated the disease-modifying effect of DNA repair pathways via the mechanisms of somatic mutation of the repeat tract. As such, developing an understanding of these pathways in the context of expanded repeats could lead to future disease-modifying therapies for REDs.
abstract_id: PUBMED:32367146 CANVAS: case report on a novel repeat expansion disorder with late-onset ataxia. This article presents the case of a 74-year-old female patient who first developed a progressive disease with sensory neuropathy, cerebellar ataxia and bilateral vestibulopathy at the age of 60 years. The family history was unremarkable. Magnetic resonance imaging (MRI) showed atrophy of the cerebellum predominantly in the vermis and atrophy of the spinal cord. The patient was given the syndromic diagnosis of cerebellar ataxia, neuropathy, vestibular areflexia syndrome (CANVAS). In 2019 the underlying genetic cause of CANVAS was discovered to be an intronic repeat expansion in the RFC1 gene with autosomal recessive inheritance. The patient exhibited the full clinical picture of CANVAS and tested positive for this repeat expansion on both alleles. CANVAS is a relatively frequent cause of late-onset hereditary ataxia (estimated prevalence 5-13/100,000). In contrast to the present patient, the full clinical picture is not always present. Therefore, testing for the RFC1 gene expansion is recommended in the work-up of patients with otherwise unexplained late-onset sporadic ataxia. As intronic repeat expansions cannot be identified by next-generation sequencing methods, specific testing is necessary. abstract_id: PUBMED:38295802 CAG repeat expansions create splicing acceptor sites and produce aberrant repeat-containing RNAs. Expansions of CAG trinucleotide repeats cause several rare neurodegenerative diseases. The disease-causing repeats are translated in multiple reading frames and without an identifiable initiation codon. The molecular mechanism of this repeat-associated non-AUG (RAN) translation is not known. We find that expanded CAG repeats create new splice acceptor sites. Splicing of proximal donors to the repeats produces unexpected repeat-containing transcripts. Upon splicing, depending on the sequences surrounding the donor, CAG repeats may become embedded in AUG-initiated open reading frames. Canonical AUG-initiated translation of these aberrant RNAs may account for proteins that have been attributed to RAN translation. Disruption of the relevant splice donors or the in-frame AUG initiation codons is sufficient to abrogate RAN translation. Our findings provide a molecular explanation for the abnormal translation products observed in CAG trinucleotide repeat expansion disorders and add to the repertoire of mechanisms by which repeat expansion mutations disrupt cellular functions. abstract_id: PUBMED:34940797 The molecular pathogenesis of repeat expansion diseases. Expanded short tandem repeats in the genome cause various monogenic diseases, particularly neurological disorders. Since the discovery of a CGG repeat expansion in the FMR1 gene in 1991, more than 40 repeat expansion diseases have been identified to date. In the coding repeat expansion diseases, in which the expanded repeat sequence is located in the coding regions of genes, the toxicity of repeat polypeptides, particularly misfolding and aggregation of proteins containing an expanded polyglutamine tract, have been the focus of investigation. On the other hand, in the non-coding repeat expansion diseases, in which the expanded repeat sequence is located in introns or untranslated regions, the toxicity of repeat RNAs has been the focus of investigation.
Recently, these repeat RNAs were demonstrated to be translated into repeat polypeptides by the novel mechanism of repeat-associated non-AUG translation, which has extended the research direction of the pathological mechanisms of this disease entity to include polypeptide toxicity. Thus, a common pathogenesis has been suggested for both coding and non-coding repeat expansion diseases. In this review, we briefly outline the major pathogenic mechanisms of repeat expansion diseases, including a loss-of-function mechanism caused by repeat expansion, repeat RNA toxicity caused by RNA foci formation and protein sequestration, and toxicity by repeat polypeptides. We also discuss perturbation of the physiological liquid-liquid phase separation state caused by these repeat RNAs and repeat polypeptides, as well as potential therapeutic approaches against repeat expansion diseases. abstract_id: PUBMED:35592702 Genetic and Epigenetic Interplay Define Disease Onset and Severity in Repeat Diseases. Repeat diseases, such as fragile X syndrome, myotonic dystrophy, Friedreich ataxia, Huntington disease, spinocerebellar ataxias, and some forms of amyotrophic lateral sclerosis, are caused by repetitive DNA sequences that are expanded in affected individuals. The age at which an individual begins to experience symptoms, and the severity of disease, are partially determined by the size of the repeat. However, the epigenetic state of the area in and around the repeat also plays an important role in determining the age of disease onset and the rate of disease progression. Many repeat diseases share a common epigenetic pattern of increased methylation at CpG islands near the repeat region. CpG islands are CG-rich sequences that are tightly regulated by methylation and are often found at gene enhancer or insulator elements in the genome. Methylation of CpG islands can inhibit binding of the transcriptional regulator CTCF, resulting in a closed chromatin state and gene down regulation. The downregulation of these genes leads to some disease-specific symptoms. Additionally, a genetic and epigenetic interplay is suggested by an effect of methylation on repeat instability, a hallmark of large repeat expansions that leads to increasing disease severity in successive generations. In this review, we will discuss the common epigenetic patterns shared across repeat diseases, how the genetics and epigenetics interact, and how this could be involved in disease manifestation. We also discuss the currently available stem cell and mouse models, which frequently do not recapitulate epigenetic patterns observed in human disease, and propose alternative strategies to study the role of epigenetics in repeat diseases. abstract_id: PUBMED:34819794 Evaluation and pharmacists perspective of repeat prescribing process in refill clinics. Introduction: Repeat prescription refers to a re-prescribed medications list issued by a refill clinic, commonly for stable chronic illnesses. The issues regarding repeat prescriptions have garnered increasing important in recent years, as no general agreement about a standardized protocol exists between organizations. Due to the importance of pharmacists' involvement and intervention in the process of repeat prescription and the lack of local studies discussing this topic, the aim of this study was to assess pharmacists' perspectives toward the repeat prescription process and identify the issues related to repeat prescriptions in refill clinics at tertiary hospitals. 
Methodology: A self-developed questionnaire was used to assess outpatient pharmacists' perspectives toward the repeat prescription process. This was followed by a comprehensive review of the electronic health records (EHR) of patients who requested repeat prescriptions to identify related issues. The study was conducted at a tertiary teaching hospital from September 2019 to January 2020. Results: Based on the questionnaire, 34 pharmacists reported receiving less than 10 repeat prescriptions per week (82.35%); nevertheless, around 88.24% of pharmacists have faced issues with the repeat prescription process, and only 15.65% of the issues got resolved. Most of the pharmacists (88.24%) showed a proactive attitude toward modifying the work process to reduce issues. Further, the review of the patients' EHR identified 1766 prescriptions with related issues in 617 (14.02%) patients' profiles. Most of these issues were seen in the elderly (46.7%). The most common issue encountered was "Patients came too early to request," which accounted for 986 (55.8%) of the total issues, followed by the issue of "Refilling a restricted medication" reported at 247 (14%). Only 11% of these issues were completely resolved by pharmacists. Conclusion: The repeat prescription service might be associated with issues that lead to preventable adverse effects, especially among the elderly who are prone to such effects. Comprehensive reviews of patients' profiles are necessary to assess their needs and avoid such issues. Answer: The provided abstracts do not contain information specifically related to repeat epididymovasostomies or their outcomes. The abstracts focus on repeat expansion diseases, their genetic and environmental factors, molecular pathogenesis, and the role of tandem-repeat protein domains across the tree of life. They also discuss the development of disease models, the genetic and epigenetic interplay in disease onset and severity, and the evaluation of repeat prescribing processes in refill clinics. However, none of the abstracts address the clinical practice or success rates of repeat epididymovasostomies, which are surgical procedures typically performed to correct male reproductive tract obstructions. Therefore, based on the provided abstracts, it is not possible to determine whether repeat epididymovasostomies are worthwhile. Additional literature specific to the surgical outcomes and success rates of repeat epididymovasostomies would be required to answer this question.
Instruction: Are there biological differences between screen-detected and interval colorectal cancers in the English Bowel Cancer Screening Programme? Abstracts: abstract_id: PUBMED:27219017 Are there biological differences between screen-detected and interval colorectal cancers in the English Bowel Cancer Screening Programme? Background: We measured biomarkers of tumour growth and vascularity in interval and screen-detected colorectal cancers (CRCs) in the English Bowel Cancer Screening Programme in order to determine whether rapid tumour growth might contribute to interval CRC (a CRC diagnosed between a negative guaiac stool test and the next scheduled screening episode). Methods: Formalin-fixed, paraffin-embedded sections from 71 CRCs (screen-detected 43, interval 28) underwent immunohistochemistry for CD31 and Ki-67, in order to measure the microvessel density (MVD) and proliferation index (PI), respectively, as well as microsatellite instability (MSI) testing. Results: Interval CRCs were larger (P=0.02) and were more likely to exhibit venous invasion (P=0.005) than screen-detected tumours. There was no significant difference in MVD or PI between interval and screen-detected CRCs. More interval CRCs displayed MSI-high (14%) compared with screen-detected tumours (5%). A significantly (P=0.005) higher proportion (51%) of screen-detected CRC resection specimens contained at least one polyp compared with interval CRC (18%) resections. Conclusions: We found no evidence of biological differences between interval and screen-detected CRCs, consistent with the low sensitivity of guaiac stool testing as the main driver of interval CRC. The contribution of synchronous adenomas to occult blood loss for screening requires further investigation. abstract_id: PUBMED:22782347 Comparison of screen-detected and interval colorectal cancers in the Bowel Cancer Screening Programme. Background: The NHS Bowel Cancer Screening Programme (BCSP) offers biennial faecal occult blood testing (FOBt) followed by colonoscopy after positive results. Colorectal cancers (CRCs) registered with the Northern Colorectal Cancer Audit Group database were cross-referenced with the BCSP database to analyse their screening history. Methods: The CRCs in the screening population between April 2007 and March 2010 were identified and classified into four groups: control (diagnosed before first screening invite), screen-detected, interval (diagnosed between screening rounds after a negative FOBt), and non-uptake (declined screening). Patient demographics, tumour characteristics and survival were compared between groups. Results: In all, 511 out of 1336 (38.2%) CRCs were controls; 825 (61.8%) were in individuals invited for screening, of which 322 (39.0%) were screen detected, 311 (37.7%) were in the non-uptake group, and 192 (23.3%) were interval cancers. Compared with the control and interval cancer group, the screen-detected group had a higher proportion of men (P=0.002, P=0.003 respectively), left colon tumours (P=0.007, P=0.003), and superior survival (both P<0.001). There was no difference in demographics, tumour location/stage, or survival between control and interval groups. Conclusion: The FOBt is better at detecting cancers in the left colon and in men. The significant numbers of interval cancers were not found to have an improved outcome compared with the non-screened population.
abstract_id: PUBMED:30694563 Screen-detected and interval colorectal cancers in England: Associations with lifestyle and other factors in women in a large UK prospective cohort. Faecal occult blood (FOB)-based screening programmes for colorectal cancer detect about half of all cancers. Little is known about individual health behavioural characteristics which may be associated with screen-detected and interval cancers. Electronic linkage between the UK National Health Service Bowel Cancer Screening Programme (BCSP) in England, cancer registration and other national health records, and a large on-going UK cohort, the Million Women Study, provided data on 628,976 women screened using a guaiac-FOB test (gFOBt) between 2006 and 2012. Relative risks (RRs) and 95% confidence intervals (CIs) were estimated by logistic and Cox regression for associations between individual lifestyle factors and risk of colorectal tumours. Among screened women, 766 were diagnosed with screen-detected colorectal cancer registered within 2 years after a positive gFOBt result, and 749 with interval colorectal cancers registered within 2 years after a negative gFOBt result. Current smoking was significantly associated with risk of interval cancer (RR 1.64, 95% CI 1.35-1.99) but not with risk of screen-detected cancer (RR 1.03, 0.84-1.28), and was the only factor of eight examined to show a significant difference in risk between interval and screen-detected cancers (p for difference, 0.003). Compared to screen-detected cancers, interval cancers tended to be sited in the proximal colon or rectum, to be of non-adenocarcinoma morphology, and to be of higher stage. abstract_id: PUBMED:25247322 Screen-detected colorectal cancers are associated with an improved outcome compared with stage-matched interval cancers. Background: Colorectal cancers (CRCs) detected through the NHS Bowel Cancer Screening Programme (BCSP) have been shown to have a more favourable outcome compared to non-screen-detected cancers. The aim was to identify whether this was solely due to the earlier stage shift of these cancers, or whether other factors were involved. Methods: A combination of a regional CRC registry (Northern Colorectal Cancer Audit Group) and the BCSP database were used to identify screen-detected and interval cancers (diagnosed after a negative faecal occult blood test, before the next screening round), diagnosed between April 2007 and March 2010, within the North East of England. For each Dukes' stage, patient demographics, tumour characteristics, and survival rates were compared between these two groups. Results: Overall, 322 screen-detected cancers were compared against 192 interval cancers. Screen-detected Dukes' C and D CRCs had a superior survival rate compared with interval cancers (P=0.014 and P=0.04, respectively). Cox proportional hazards regression showed that Dukes' stage, tumour location, and diagnostic group (HR 0.45, 95% CI 0.29-0.69, P<0.001 for screen-detected CRCs) were all found to have a significant impact on the survival of patients. Conclusions: The improved survival of screen-detected over interval cancers for stages C and D suggests that there may be a biological difference in the cancers in each group. Although lead-time bias may have a role, this may be related to a tumour's propensity to bleed and therefore may reflect detection through current screening tests.
abstract_id: PUBMED:27992095 Faecal occult blood testing screening for colorectal cancer and 'missed' interval cancers: are we ignoring the elephant in the room? Results of a multicentre study. Aim: Biennial faecal occult blood testing (FOBT) is used to screen for colorectal cancer throughout the UK. Interval cancers are tumours that develop in patients between screening rounds who have had a negative FOBT. Through a multicentre study, we compared the demographics of patients with interval cancers, FOBT screen detected cancers and cancers that developed in patients who chose not to participate in the screening programme. Method: Five hundred and sixteen colorectal cancers were detected in the screening age group (60-74 years) population in three UK National Health Service hospitals over 2 years. One hundred and twenty-seven (25%) were interval cancers, 161 (31%) were screen detected and 228 (44%) were cancers that developed in patients who had declined FOBT. The interval cancer group had a higher incidence of right-sided cancers (38% vs 29% and 24%), a higher proportion of high tumour stages (Dukes C and D) (70% vs 53% and 33%) and a shorter time from diagnosis to death (10 months vs 13 months and 24 months) compared to patients who had declined the FOBT and the FOBT screen detected cancers. Of all the patients studied, those with right-sided interval cancers had the worst outcome. Conclusion: A quarter of the colorectal cancers diagnosed in our study were interval cancers. Patients with right-sided interval cancers had the highest proportion of Dukes C and D tumours coupled with the shortest survival time after diagnosis compared with the other groups. abstract_id: PUBMED:29368418 Ileocolonic neuroendocrine tumours identified in the English bowel cancer screening programme. Aim: Ileocolonic neuroendocrine tumours (NETs) are diagnosed as part of bowel cancer screening programmes (BCSPs). The aim of this study was to identify and characterize NETs diagnosed within the English BCSP, a double-screen programme that uses guaiac faecal occult blood test (gFOBT) screening and colonoscopy, by interrogating the national colorectal screening database and validating the findings with individual BCSP centres. Method: The Exeter database was interrogated by running queries to identify participants with coded NETs (from the start of the programme in July 2006 - 1 December 2014). A written proforma was sent to the responsible BCSP clinician for validation and characterization. Results: During this period, 13 061 716 participants were adequately screened using gFOBTs, and 259 765 participants had definitively abnormal results. There were 146 unique participants with NET-related codes from 216 707 BCSP colonoscopies. The diagnosis rates per 100 000 colonoscopies were 29 rectal, 18 colonic and 11 ileal NETs. The majority of rectal NETs had Grade 1 (80%) and Stage T1 (85.1%) disease. Over half of ileal NETs (53.6%) in this study had invasive disease, with 85.2% having nodal and 36.1% having metastatic disease. Conclusion: The current study highlights the rate of colorectal NETs diagnosed in the English BCSP. These data highlight a higher-than-anticipated incidence, and the potential additional benefit of BCSPs in identifying occult NETs. abstract_id: PUBMED:18690636 Local impact of the English arm of the UK Bowel Cancer Screening Pilot study. Background: The English arm of the UK Bowel Cancer Screening Pilot study recently concluded its third round.
The primary aim was to assess the impact of faecal occult blood test (FOBT) screening on the detection of symptomatic (non-screen-detected) cancers within the target age group (50-69 years). The secondary aim was to assess differences between screened and non-screened cohorts in Dukes' classification at diagnosis. Methods: This population-based study utilized retrospective analysis of existing validated colorectal cancer (CRC) data over 5 years (April 2000 to March 2005), encompassing rounds one and two of screening. Results: There was a 23 per cent (P = 0.011) reduction in the diagnosis of CRC over the 5 years. Presentations with symptomatic cancer reduced by 49 per cent (P = 0.049), with a proportionate (2.6-fold) rise in the detection of screened (asymptomatic) malignancy. Cancers were diagnosed at an earlier stage in the screened population, with significantly more Dukes' A tumours than in the non-screen-detected cohort (P < 0.001) and an estimated odds ratio of 0.27 (95 per cent confidence interval 0.08 to 0.91) (P = 0.035) for Dukes' 'D' cancers. Conclusion: FOBT screening resulted in a significant reduction in the number of symptomatic cancers detected within the target age group. Tumours detected by screening were diagnosed at an earlier pathological stage. abstract_id: PUBMED:22486166 Inter-observer variability in the histological assessment of colorectal polyps detected through the NHS Bowel Cancer Screening Programme. Aims: Although effective clinical management of colorectal polyps detected through the National Health Service (NHS) Bowel Cancer Screening Programme (BCSP) is dependent on the quality of pathological diagnosis, there have been few attempts to formally evaluate inter-observer variability in histological assessment. The aim of this study was to examine the impact of inter-observer variability on the reported prevalence of prognostic features in a large series of screen-detected colorectal polyps. Methods And Results: A retrospective series of 1329 screen-detected polyps (2008-10) was identified from computerized records at two histopathology departments participating in the NHS BCSP. Slides from a sample of 239 polyps were exchanged between centres for independent review and measurement of inter-observer (kappa) agreement. There were significant between-centre differences in the prevalence of polyps with high-risk histological features. Diagnostic review demonstrated good reliability with respect to the assessment of adenomatous change (κ = 0.83), excision margin status (κ = 0.74), high-grade dysplasia (κ = 0.61) and invasive malignancy (κ = 0.84). By contrast, there were significant inter-observer differences in the classification of villous lesions (κ = 0.18) despite recent efforts to standardize reporting practice. Conclusions: Inter-observer variability in the assessment of screen-detected colorectal polyps limits the prognostic value of histological subtyping and highlights the need for clarification of existing diagnostic criteria. abstract_id: PUBMED:38433121 Adenoma characteristics in the English Bowel Cancer Screening Programme. Aim: The English Bowel Cancer Screening Programme detects colorectal cancers and premalignant polyps in a faecal occult blood test-positive population.
The aim of this work is to describe the detection rates and characteristics of adenomas within the programme, identify predictive factors influencing the presence or absence of carcinoma within adenomas and identify the factors predicting the presence of advanced colonic neoplasia in different colon segments. Method: The Bowel Cancer Screening System was retrospectively searched for polyps detected during colonoscopies between June 2006 and June 2012, at which time a guaiac test was being used. Data on size, location and histological features were collected, and described. Univariate and multivariate analyses were used to determine the significant factors influencing the development of carcinoma within an adenoma. Results: A total of 229 419 polyps were identified; after exclusions 136 973 adenomas from 58 334 patients were evaluated. Over half were in the rectum or sigmoid colon. Subcentimetre adenomas accounted for 69.8% of the total. The proportion of adenomas containing advanced histological features increased with increasing adenoma size up to 35 mm, then plateaued. A focus of carcinoma was found in 2282 (1.7%) adenomas, of which 95.6% were located distally. Carcinoma was identified even in diminutive adenomas (0.1%). The proportion of adenomas containing cancer was significantly higher in women than men (2.0% vs. 1.5%, p < 0.001). Conclusion: This national, prospectively captured dataset adds robust information about histological features of adenomas that convey an increased risk for colorectal cancer, and identifies caecal adenomas, high-grade dysplasia, increasing adenoma size, distal location and female sex as independent risk factors associated with carcinoma. abstract_id: PUBMED:35578552 Why are most colorectal cancers diagnosed outside of screening? A retrospective analysis of data from the English bowel screening programme. Objective: Despite several interventions to increase participation in England, most colorectal cancers (CRCs) are diagnosed outside of the screening programme. The aims of this study were to better understand why most CRCs are diagnosed externally, the extent to which this is due to suboptimal uptake of screening, and the extent to which it is due to other factors, such as false-negative test results. Setting / Methods: We performed a clinical audit of 1011 patients diagnosed with CRC at St Mark's Hospital (Harrow, UK) between January 2017 and December 2020. Data on the diagnostic pathway and screening history of individuals were extracted from the bowel cancer screening system and assessed using descriptive statistics. Results: 446/1011 (44.1%) patients diagnosed with CRC were eligible for screening at the time of diagnosis. Of these, only 115/446 (25.8%) were diagnosed through screening. Among those diagnosed via non-screening pathways, 210/331 (63.4%) had never taken part in screening, 31/331 (9.4%) had taken part but were not up to date, and 89/331 (26.9%) had taken part and were up-to-date (of these, 82/89 [92.2%] had received a normal or weak positive test result, and 5/89 [5.6%] had received a positive result and declined colonoscopy). Conclusion: Nearly two-thirds of screening-eligible patients diagnosed through a non-screening pathway had never taken part in screening. This represents the single largest source of inefficiency within the screening programme, followed by missed findings and inconsistent participation.
Given the improved outcomes associated with screen-detected cancers, there is a strong public health mandate to encourage participation. Answer: The investigation into biological differences between screen-detected and interval colorectal cancers in the English Bowel Cancer Screening Programme has yielded mixed results. One study found that interval colorectal cancers (CRCs) were larger and more likely to exhibit venous invasion than screen-detected tumors, but there was no significant difference in microvessel density (MVD) or proliferation index (PI) between the two groups. However, more interval CRCs displayed microsatellite instability-high (MSI-high) compared with screen-detected tumors. Additionally, a significantly higher proportion of screen-detected CRC resection specimens contained at least one polyp compared with interval CRC resections (PUBMED:27219017). Another study suggested that screen-detected CRCs had an improved outcome compared with stage-matched interval cancers, particularly for Dukes' C and D stages, indicating that there may be a biological difference in the cancers in each group. This could be related to a tumor's propensity to bleed and therefore its detectability through current screening tests (PUBMED:25247322). However, another study found no difference in demographics, tumor location/stage, or survival between control and interval groups, suggesting that the significant numbers of interval cancers did not have an improved outcome compared with the non-screened population (PUBMED:22782347). In summary, while some studies suggest there may be biological differences between screen-detected and interval CRCs, such as MSI-high status and the presence of polyps, other studies indicate that the differences in outcomes may not be due to biological factors but rather to the sensitivity of the screening tests and the stage at which the cancer is detected.
Instruction: Disciplined doctors: does the sex of a doctor matter? Abstracts: abstract_id: PUBMED:37546465 Investigation into psychological contract in ethically disciplined group: a case study of academics in Chinese higher education. Ethical values and beliefs are increasingly recognized as important factors in the operation of the psychological contract for their potential role in determining individuals' attitudes toward employment relationships by valuing mutual exchange. However, incorporating ethical terms into psychological contract analysis is challenging because they are often confused with relational contract terms, and the ethics of professions can be difficult to summarize and interpret. This study has demonstrated how the psychological contract operates within academics in Chinese higher education, an occupational group that is typically considered ethically disciplined and culturally bonded to their identity. Here, we designed a questionnaire survey focusing on the transactional/relational psychological contract, ethical framework, and job performance, and statistically analyzed the responses to this survey from 230 Chinese higher education academics. It finds that the sample population perceived the psychological contract with a relatively low contribution from monetary terms, while a strong correlation was observed between ethics and relational terms. In addition, the influence of emotional and ethical terms on job performance was clearly differentiated statistically. Analysis with a mediation model suggested a mediating role of ethics between psychological contract and job performance. Findings in this study have demonstrated that ethically disciplined groups exhibit unique features in both their perceptions of the psychological contract and their correlation with ethics and job performance, a pattern that is anomalous relative to other occupations. This study provides a protocol demonstrating the role of an ethical framework in the operation of the psychological contract, particularly within occupational groups bonded strongly to their identity/profession and constrained by ethics imposed by society. abstract_id: PUBMED:21534900 Doctors disciplined for professional misconduct in Australia and New Zealand, 2000-2009. Objectives: To describe professional discipline cases in Australia and New Zealand in which doctors were found guilty of professional misconduct, and to develop a typology for describing the misconduct. Design And Setting: A retrospective analysis of disciplinary cases adjudicated in five jurisdictions (New South Wales, Victoria, Queensland, Western Australia and New Zealand) in 2000-2009. Main Outcome Measures: Characteristics of the cases (setting, misconduct type, patient outcomes, disciplinary measure imposed), characteristics of the doctors involved (sex, specialty, years since qualification) and population-level case rates (by doctor characteristics). Results: The tribunals studied disciplined 485 doctors. Male doctors were disciplined for misconduct at four times the rate of their female colleagues (91 versus 22 cases per 100 000 doctor-years). Obstetrics and gynaecology and psychiatry were the specialties with the highest rates (224 and 178 cases per 100 000 doctor-years). The mean age of disciplined doctors did not differ from that of the general doctor population. The most common types of offences considered as the primary issue were sexual misconduct (24% of cases), illegal or unethical prescribing (21%) and inappropriate medical care (20%).
In 78% of cases, the tribunal made no mention of any patient having experienced physical or mental harm as a result of the misconduct. Penalties were severe, with 43% of cases resulting in removal from practice and 37% in restrictions on practice. Conclusions: Disciplinary cases in Australia and New Zealand have features distinct from those studied internationally. The recent nationalisation of Australia's medical boards offers new possibilities for tracking and analysing disciplinary cases to improve the safety and quality of health care. abstract_id: PUBMED:24121699 Disciplined care for disciplined patients: experience of hospitalized blind patients. Blindness is a permanent condition that alters daily life of blind people. Interpretive phenomenology was used to understand lived experiences of the hospitalized blind people. "Disciplined care for disciplined patients" was one of the themes that emerged from the data. Provision of disciplined care can help health care professionals provide a holistic and comprehensive competent care for blind patients. abstract_id: PUBMED:34248397 Evaluating refactorings for disciplining #ifdef annotations: An eye tracking study with novices. The C preprocessor is widely used in practice. Conditional compilation with #ifdef annotations allows developers to flexibly introduce variability in their programs. Developers can use disciplined annotations, entirely enclosing full statements with preprocessor directives, or undisciplined ones, enclosing only parts of the statements. Despite some debate, there is no consensus on whether a developer should use exclusively disciplined annotations. While one prior study found undisciplined annotations more time-consuming and error-prone, another study found no difference between disciplined and undisciplined annotations regarding task completion time and accuracy. In this article, we evaluate whether three fine-grained refactorings to discipline #ifdef annotations correlate with improvements in code comprehension and visual effort with an eye tracker. We conduct a controlled experiment with 64 human subjects who were majoritarily novices in the C programming language. We observed statistically significant differences for two refactorings to discipline annotations with respect to the analyzed metrics (time, fixation duration, fixation count, and regressions count) in the code regions changed by each refactoring. abstract_id: PUBMED:32558629 The doctor in a minority. Suppose that a doctor carrying out a treatment or advising on a treatment or acting as an expert in litigation or writing or lecturing about a treatment is in a minority so far as contemporary medical opinion is concerned. It may be a matter of choice for the doctor between treatment A (the majority practice) or treatment B (the minority practice), and the minority treatment may be of an innovative character. Unfortunately, things went badly wrong, the patient suffered harm and the doctor finds him/herself a defendant in a case for clinical negligence. What is the legal duty of the doctor? Is it sufficient that he/she acted in good faith? Or that he/she was a competent doctor? Or that he/she was a doctor following the practice of a substantial number of doctors, albeit a minority? Or that he/she was in effect acting 'on his/her own'? The legal test is: Was the doctor following the practice of a responsible body of medical clinical opinion, albeit a minority opinion? Medicine has made huge advances over the years - one of the great achievements. 
But many advances have come about because of the initiative of one individual or a small group of individuals, often in the face of strong disbelief or opposition. The medical profession is a conservative profession, understandably so in view of the obvious inherent risks. Original ideas may not be well received. Therefore, the minority innovative doctor must proceed carefully because he/she runs the risk of a medical mishap, criticism and litigation. abstract_id: PUBMED:11229991 Psychiatrists disciplined by a state medical board. Objective: This study determined the risk of discipline by a medical board for psychiatrists relative to other physicians and assessed the contributions to such risk. Method: Physicians disciplined by the California Medical Board in a 30-month period were compared with matched groups of nondisciplined physicians. Results: Among 584 disciplined physicians, there were 75 (12.8%) psychiatrists, nearly twice the number of psychiatrists among nondisciplined physicians. Female psychiatrists were underrepresented in the disciplined group. Psychiatrists were significantly more likely than nonpsychiatrist physicians to be disciplined for sexual relationships with patients and about as likely to be charged with negligence or incompetence. The disciplined and nondisciplined psychiatrists did not differ significantly from a group of 75 nondisciplined psychiatrists on years since medical school graduation, international medical graduate status, or board certification. The disciplined group included significantly more psychiatrists who claimed child psychiatry as their first or second specialty and significantly fewer psychoanalysts. Conclusions: Organized psychiatry has an obligation to address sexual contact with patients and other causes for medical board discipline. This obligation may be addressable through enhanced residency training, recertification exams, and other means of education. abstract_id: PUBMED:9634259 Physicians disciplined for sex-related offenses. Context: Physicians who abuse their patients sexually cause immense harm, and, therefore, the discipline of physicians who commit any sex-related offenses is an important public health issue that should be examined. Objectives: To determine the frequency and severity of discipline against physicians who commit sex-related offenses and to describe the characteristics of these physicians. Design And Setting: Analysis of sex-related orders from a national database of disciplinary orders taken by state medical boards and federal agencies. Subjects: A total of 761 physicians disciplined for sex-related offenses from 1981 through 1996. Main Outcome Measures: Rate and severity of discipline over time for sex-related offenses and specialty, age, and board certification status of disciplined physicians. Results: The number of physicians disciplined per year for sex-related offenses increased from 42 in 1989 to 147 in 1996, and the proportion of all disciplinary orders that were sex related increased from 2.1% in 1989 to 4.4% in 1996 (P<.001 for trend). Discipline for sex-related offenses was significantly more severe (P<.001) than for non-sex-related offenses, with 71.9% of sex-related orders involving revocation, surrender, or suspension of medical license. Of 761 physicians disciplined, the offenses committed by 567 (75%) involved patients, including sexual intercourse, rape, sexual molestation, and sexual favors for drugs.
As of March 1997, 216 physicians (39.9%) disciplined for sex-related offenses between 1981 and 1994 were licensed to practice. Compared with all physicians, physicians disciplined for sex-related offenses were more likely to practice in the specialties of psychiatry, child psychiatry, obstetrics and gynecology, and family and general practice (all P<.001) than in other specialties and were older than the national physician population, but were no different in terms of board certification status. Conclusions: Discipline against physicians for sex-related offenses is increasing over time and is relatively severe, although few physicians are disciplined for sexual offenses each year. In addition, a substantial proportion of physicians disciplined for these offenses are allowed to either continue to practice or return to practice. abstract_id: PUBMED:9634260 Physicians disciplined by a state medical board. Context: State medical boards discipline several thousand physicians each year. Although certain subgroups, such as those disciplined for malpractice, substance use, or sexual abuse, have been studied, little is known about disciplined physicians as a group. Objective: To assess the offenses, contributing factors, and type of discipline of a consecutive series of disciplined physicians. Design: Case-control study on publicly available data matching 375 disciplined physicians with 2 groups of control physicians, one matched solely by locale, and a second matched for sex, type of practice, and locale. Subjects: All disciplined physicians publicly reported by the Medical Board of California from October 1995 through April 1997. Main Outcome Measures: Characteristics of disciplined physicians, offenses leading to discipline, and type of discipline. Results: A total of 375 physicians licensed by the Medical Board of California (approximately 0.24% per year) were disciplined for 465 offenses. The most frequent causes for discipline were negligence or incompetence (34%), abuse of alcohol or other drugs (14%), inappropriate prescribing practices (11%), inappropriate contact with patients (10%), and fraud (9%). Discipline imposed was revocation of medical license (21%), actual suspension of license (13%), stayed suspension of license (45%), and reprimand (21%). Type of offense was significantly associated with severity of discipline (P=.03). In logistic regression models comparing disciplined physicians with controls matched by locale, board discipline was significantly associated with physicians' sex (odds ratio [OR] for women, 0.44; 95% confidence interval [CI], 0.28-0.70) and involvement in direct patient care (OR, 2.56; 95% CI, 1.75-3.75). In the regression model with additional matching criteria, disciplinary action was negatively associated with specialty board certification (OR, 0.42; 95% CI, 0.29-0.60) and positively associated with being in practice more than 20 years (OR, 2.02; 95% CI, 1.39-2.92). Conclusions: A small but substantial proportion of physicians is disciplined each year for a variety of offenses. Further study of disciplined physicians is necessary to identify physicians at high risk for offenses leading to disciplinary action and to develop effective interventions to prevent these offenses.
These individuals are often subject to disciplinary action by professional licensing authorities. To date, no national data exist for Canadian physicians disciplined for professional misconduct. We sought to describe the characteristics of physicians disciplined by Canadian professional licensing authorities. Methods: We constructed a database of physicians disciplined by provincial licensing authorities during the years 2000 to 2009. Comparisons were made with the general population of physicians licensed in Canada. Data on demographic characteristics, type of misconduct and penalty imposed were collected for each disciplined physician. Results: A total of 606 identifiable physicians were disciplined by their professional college during the years 2000 to 2009. The proportion of licensed physicians who were disciplined in a given year ranged from 0.06% to 0.11%. Fifty-one of the disciplined physicians committed 64 repeat offences, accounting for a total of 113 (19%) offences. Most of the disciplined physicians were independent practitioners (99%), male (92%) and trained in Canada (67%). The most common specialties of physicians subject to disciplinary action were family medicine (62%), psychiatry (14%) and surgery (9%). For disciplined physicians, the average number of years from medical school graduation to disciplinary action was 28.9 (standard deviation [SD] = 11.3). The 3 most frequent violations were sexual misconduct (20%), failure to meet a standard of care (19%) and unprofessional conduct (16%). The 3 most frequently imposed penalties were fines (27%), suspensions (19%) and formal reprimands (18%). Interpretation: A small proportion of registered physicians in Canada were disciplined by their medical licensing authorities. Sexual misconduct was the most common disciplined offence. The standardization of provincial reporting along with the creation of a national database of physician offenders would facilitate more comparable public reporting as well as further research and educational initiatives. abstract_id: PUBMED:34434049 Health Science Students' Perspective on Quality-of-Care-Relating Medical Professionalism. Purpose: Health science students need to be professional to improve quality of care (QOC) in the current Vietnamese healthcare system. Therefore, we aim to evaluate medicine and traditional medicine (TM) students' perception of the professional attributes relating to QOC for improving inter-disciplinary professionalism training that promotes QOC in Vietnam. Methods: The cross-sectional study was carried out on 2039 students of 6 years at the University of Medicine and Pharmacy at Ho Chi Minh City (HUMP) from the medical and TM faculty in March, 2021. The Vietnamese American Board of Internal Medicine (ABIM) questionnaire (2011) was used as the survey instrument. The confirmatory factor analysis (CFA) was performed to confirm the validity of the scale in TM students. Means, minimum-maximum ranges, standard deviations and paired-sample t-tests were computed for the Likert-scale items. The one-way ANOVA was used for inferential statistics. Results: The CFA demonstrated the validity of the Vietnamese questionnaire in measuring 4 QOC-relating professional attributes, previously identified in medical students, among TM students. In both faculties and across academic years, students perceived self-awareness and ensuring QOC as the leading important attribute, while social duty and professional habit were rated as the least important attributes.
In contrast to the preclinical phase, students' perception did not differ significantly between the two faculties in their clinical years (p > 0.05). Conclusion: TM students share universal QOC-relating professional traits with medical students. Moreover, exposure to the clinical environment might increase inter-disciplinary agreement on the importance of these attributes. However, health sciences students' underestimation of social duty and professional habit persists throughout 6 academic years. A hidden curriculum in clinical training, such as specialist-centeredness, might hinder the students' improvement in their perception of these traits. Therefore, these traits should be strongly emphasized in professionalism training to decrease the effects of the hidden curriculum on them. Answer: Yes, the sex of a doctor does appear to matter when it comes to disciplinary actions. Studies have shown that male doctors are disciplined for misconduct at a higher rate than their female colleagues. Specifically, in Australia and New Zealand, male doctors were disciplined at four times the rate of female doctors (91 versus 22 cases per 100,000 doctor-years) (PUBMED:21534900). Additionally, in a study of psychiatrists disciplined by the California Medical Board, female psychiatrists were underrepresented in the disciplined group (PUBMED:11229991). These findings suggest that there is a significant gender disparity in the rates at which doctors are disciplined for professional misconduct.
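The headline figures behind this answer (91 versus 22 cases per 100,000 doctor-years, roughly a four-fold difference) are simple incidence-rate calculations. The sketch below shows how such rates and their ratio are derived from case counts and person-years of exposure; the counts and person-years used here are hypothetical values chosen only to reproduce the published rates, which the abstract does not break down.

```python
# Minimal sketch of the incidence-rate arithmetic behind "91 vs. 22 cases per
# 100,000 doctor-years". The counts and person-years are hypothetical values
# chosen to reproduce the published rates; the tribunals' raw denominators are
# not given in the abstract.
def rate_per_100k(cases, person_years):
    return cases / person_years * 100_000

male_rate = rate_per_100k(cases=409, person_years=450_000)    # ~91 per 100,000 doctor-years
female_rate = rate_per_100k(cases=66, person_years=300_000)   # ~22 per 100,000 doctor-years

print(f"male rate   = {male_rate:.0f} per 100,000 doctor-years")
print(f"female rate = {female_rate:.0f} per 100,000 doctor-years")
print(f"rate ratio  = {male_rate / female_rate:.1f}")  # roughly 4, matching the answer
```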
Instruction: Two-year course trajectories of anxiety disorders: do DSM classifications matter? Abstracts: abstract_id: PUBMED:24912140 Two-year course trajectories of anxiety disorders: do DSM classifications matter? Objective: Anxiety disorders have been shown to differ in their course, but it is unknown whether DSM-categories represent clinically relevant course trajectories. We aim to identify anxiety course trajectories using a data-driven method and to examine whether these course trajectories correspond to DSM-categories or whether other clinical indicators better differentiate them. Method: 907 patients with panic disorder with agoraphobia, panic disorder without agoraphobia , agoraphobia, social phobia, or generalized anxiety disorder according to DSM-IV criteria were derived from a prospective cohort study (Netherlands Study of Depression and Anxiety). Baseline data were collected between September 2004 and February 2007; follow-up data, between October 2006 and March 2009. Latent class growth analysis was conducted, based on symptoms of anxiety and avoidance assessed with the Life Chart Interview covering a 2-year time period. Identified course trajectories were compared with DSM-IV diagnoses and a wider set of predictors. Results: We identified a class with minimal symptoms over time (41.7%), a moderately severe chronic class (42.8%), and a severe chronic class (15.4%). Panic disorder with agoraphobia (OR = 2.14; 95% CI, 1.48-3.09) and social phobia (OR = 1.97; 95% CI, 1.46-2.68) predicted moderately severe chronicity; panic disorder with agoraphobia (OR = 2.70; 95% CI, 1.66-4.40), social phobia (OR = 2.46; 95% CI, 1.62-3.74), and generalized anxiety disorder (OR = 1.86; 95% CI, 1.23-2.82) predicted a severe chronic course. However, baseline severity, duration of anxiety, and disability better predicted severe chronic course trajectories than DSM-categories. Additionally, partner status, age at onset, childhood trauma, and comorbid depressive disorder predicted chronic courses. Conclusions: Course of anxiety was pleomorphic with over 40% having a favorable course, thereby questioning the common notion of chronicity of anxiety disorders. Severity, duration of anxiety, and disability were able to better identify severe chronic course trajectories as compared with DSM-IV categories. These findings facilitate the identification of chronic course trajectories of anxiety disorders in clinical care and support current debates on staging and profiling of mental disorders. abstract_id: PUBMED:34706441 The 9-year clinical course of depressive and anxiety disorders: New NESDA findings. Background: In longitudinal research, switching between diagnoses should be considered when examining patients with depression and anxiety. We investigated course trajectories of affective disorders over a nine-year period, comparing a categorical approach using diagnoses to a dimensional approach using symptom severity. Method: Patients with a current depressive and/or anxiety disorder at baseline (N = 1701) were selected from the Netherlands Study of Depression and Anxiety (NESDA). Using psychiatric diagnoses, we described 'consistently recovered,' 'intermittently recovered,' 'intermittently recurrent', and 'consistently chronic' at two-, four-, six-, and nine-year follow-up. Additionally, latent class growth analysis (LCGA) using depressive, anxiety, fear, and worry symptom severity scores was used to identify distinct classes. 
Results: Considering the categorical approach, 8.5% were chronic, 32.9% were intermittently recurrent, 37.6% were intermittently recovered, and 21.0% remained consistently recovered from any affective disorder at nine-year follow-up. In the dimensional approach, 66.6% were chronic, 25.9% showed partial recovery, and 7.6% had recovered. Limitations: 30.6% of patients were lost to follow-up. Diagnoses were rated by the interviewer and questionnaires were completed by the participant. Conclusions: Using diagnoses alone as discrete categories to describe clinical course fails to fully capture the persistence of affective symptoms that were observed when using a dimensional approach. The enduring, fluctuating presence of subthreshold affective symptoms likely predisposes patients to frequent relapse. The commonness of subthreshold symptoms and their adverse impact on long-term prognoses deserve continuous clinical attention in mental health care as well further research. abstract_id: PUBMED:32147155 Assessment of the effect of communication skills training on communication apprehension in first year pharmacy students - A two-year study. Introduction: The purpose of the study is to assess the impact of a communication skills course on communication apprehension (CA) in two cohorts of first-year (P1), first quarter pharmacy students over a consecutive two-year span. Methods: The personal report of CA (PCRA-24) was administered at the beginning and completion (pre-post) of a skills-centered communication course to two cohorts of P1, first quarter pharmacy students over a consecutive two-year period. The delivery of the communications course was redesigned during this timeframe based on post-course analysis data and student feedback to incorporate opportunities for students to engage in active learning activities throughout the course. Results: Results of the study revealed a statistically significant reduction of total CA in both cohorts. Cohort 1 had significant reduction of CA in all four measured domains: group discussion meetings, interpersonal communication, and public speaking. Cohort 2 had significant reduction in two of the domains (group and meeting). Conclusions: Overall, this study indicated that the format of this P1, first quarter communications course had a positive effect on student CA. In addition to the data collected for this research project, post-course evaluations and student comments indicated an overall positive reaction to the design and delivery of the course material, active learning assignments, and assessments. abstract_id: PUBMED:26544613 Low stability of diagnostic classifications of anxiety disorders over time: A six-year follow-up of the NESDA study. Background: Stability of diagnosis was listed as an important predictive validator for maintaining separate diagnostic classifications in DSM-5. The aim of this study is to examine the longitudinal stability of anxiety disorder diagnoses, and the difference in stability between subjects with a chronic versus a non-chronic course. Methods: Longitudinal data of 447 subjects with a current pure anxiety disorder diagnosis at baseline from the Netherlands Study of Depression and Anxiety were used. At baseline, 2-, 4-, and 6-year follow-up mental disorders were assessed and numbers (and percentages) of transitions from one anxiety disorder diagnosis to another were determined for each anxiety disorder diagnosis separately and for subjects with a chronic (i.e. 
one or more anxiety disorder at every follow-up assessment) and a non-chronic course. Results: Transition percentages were high in all anxiety disorder diagnoses, ranging from 21.1% for social anxiety disorder to 46.3% for panic disorder with agoraphobia at six years of follow-up. Transition numbers were higher in the chronic than in the non-chronic course group (p=0.01). Limitations: Due to the 2 year sample frequency, the number of subjects with a chronic course may have been overestimated as intermittent recovery periods may have been missed. Conclusions: These data indicate that anxiety disorder diagnoses are not stable over time. The validity of the different anxiety disorder categories is not supported by these longitudinal patterns, which may be interpreted as support for a more pronounced dimensional approach to the classification of anxiety disorders. abstract_id: PUBMED:33424750 Comparison of Three Motor Subtype Classifications in de novo Parkinson's Disease Patients. Objective: The aims of this study were to compare the characteristics of three motor subtype classifications in patients with de novo Parkinson's disease (PD) and to find the most suitable motor subtype classification for identifying non-motor symptoms (NMSs). Methods: According to previous studies, a total of 256 patients with de novo PD were classified using the tremor-dominant/mixed/akinetic-rigid (TD/mixed/AR), TD/indeterminate/postural instability and gait disturbance (PIGD), and predominantly TD/predominantly PIGD (p-TD/p-PIGD) classification systems. Results: Among the TD/mixed/AR subgroups, the patients with the AR subtype obtained more severe motor scores than the patients with the TD subtype. Among the TD/indeterminate/PIGD subgroups and between the p-TD and p-PIGD subgroups, the patients with the PIGD/p-PIGD subtype obtained more severe scores related to activities of daily living (ADL), motor and non-motor symptoms, including depression, anxiety, and sleep impairment, than the patients with the TD/p-TD subtype. Furthermore, symptoms in the cardiovascular, gastrointestinal, and miscellaneous domains of the Non-motor Questionnaire (NMSQuest) were more prevalent in the patients with the PIGD/p-PIGD subtypes than the patients with the TD/p-TD subtypes. Conclusions: The PIGD/p-PIGD subtypes had more severe ADL, motor and non-motor symptoms than the TD/p-TD subtypes. We disclosed for the first time that the TD/indeterminate/PIGD classification seems to be the most suitable classification among the three motor subtype classifications for identifying NMSs in PD. abstract_id: PUBMED:30660019 Prevalence and course of subthreshold anxiety disorder in the general population: A three-year follow-up study. Background: This study examined the prevalence, course and risk indicators of subthreshold anxiety disorder to determine the necessity and possible risk indicators for interventions. Methods: Data were derived from the 'Netherlands Mental Health Survey and Incidence Study-2' (NEMESIS-2), a psychiatric epidemiological cohort study among the general population (n = 4528). This study assessed prevalence, characteristics, and three-year course of subthreshold anxiety disorder (n = 521) in adults, and compared them to a no anxiety group (n = 3832) and an anxiety disorder group (n = 175). Risk indicators for persistent and progressive subthreshold anxiety disorder were also explored, including socio-demographics, vulnerability factors, psychopathology, physical health and functioning. 
Results: The three-year prevalence of subthreshold anxiety disorder was 11.4%. At three-year follow-up, 57.3% had improved, 29.0% had persistent subthreshold anxiety disorder and 13.8% had progressed to a full-blown anxiety disorder. The prevalence, characteristics and course of subthreshold anxiety disorder fell between those of the two comparison groups. Risk indicators for persistent course partly overlapped with those for progressive course and included vulnerability and psychopathological factors, and diminished functioning. Limitations: Course analyses were restricted to the development of anxiety disorders; other mental disorders were not assessed. Moreover, due to the naturalistic design of the study, the impact of treatment on course cannot be assessed. Conclusions: Subthreshold anxiety disorder is relatively prevalent and at three-year follow-up a substantial part of respondents experienced persistent symptoms or had progressed into an anxiety disorder. Risk indicators like reduced functioning may help to identify these persons for (preventative) treatment and hence reduce functional limitations and disease burden. abstract_id: PUBMED:23106669 Two-year course of anxiety disorders: different across disorders or dimensions? Objective: This study compares diagnostic and symptom course trajectories across different anxiety disorders, and examines the role of anxiety arousal vs. avoidance behaviour symptoms in course prediction. Method: Data were from 834 subjects with a current anxiety disorder from the Netherlands Study of Depression and Anxiety (NESDA) who were re-interviewed after 2 years. DSM-IV-based diagnostic interviews and Life Chart Interviews (LCI) were used to assess the diagnostic and symptom course trajectory over 2 years. Anxiety arousal and avoidance behaviour symptoms were measured with LCI, Beck Anxiety Inventory and Fear Questionnaire. Results: Prognosis varied across disorders, with favourable remittance rates of 72.5% for panic disorder without agoraphobia and 69.7% for generalized anxiety disorder; gradually declining to 53.5% for social phobia and 52.7% for panic disorder with agoraphobia. Only 42.9% of those with multiple anxiety disorders remitted, and this group showed a more chronic course than pure anxiety disorders. Both baseline duration and severity were course predictors. Avoidance behaviour symptoms predicted the outcome better than anxiety arousal symptoms. Conclusions: These data suggest that the specific anxiety disorders as recognized by DSM-IV are useful in predicting the outcome and that this may be determined largely by the relative severity of avoidance behaviour that patients have developed. abstract_id: PUBMED:24359850 Mixed states: evolution of classifications. The nosological position of mixed states has followed the evolution of classification methods in psychiatry, the stages in the development of clinical practice and progress in the organization of care, including the discoveries of psychopharmacology. The clinical observation of a mixture of symptoms emerging from usually opposite clinical conditions is classical. In the 1970s, a syndromic specification fixed the main symptom combinations, but this incongruous assortment failed to stabilize the nosological concept. Stricter diagnostic criteria were then proposed. Because these proved too restrictive, consensus has moved towards a dimensional opening that attempts to meet the pragmatic requirement that a nosological class system be clinically useful.
This alternation between rigor of categorization and return to a more flexible criteriological option reflects the search for the right balance between nosology and diagnosis. The definition of mixed states is best determined by their clinical and prognostic severity, related to the risk of suicide, their lower therapeutic response, and the importance of their psychiatric comorbidities, anxiety, emotional lability and alcohol abuse. To compensate for the shortcomings of categorical definitions and to better reflect clinical reality, new definitions complement the criteria with dimensional aspects, particularly by taking temperaments into account. abstract_id: PUBMED:28910662 Effect of a clinical skills refresher course on the clinical performance, anxiety and self-efficacy of the final year undergraduate nursing students. Although the final year of nursing undergraduate programs that focus on clinical education is planned to prepare nursing students to better transition to the real world of health care service, evidence has shown that this program is not sufficient to reach this end goal. This controlled trial aimed to investigate the effectiveness of a basic clinical skills refresher course for nursing students before entering the internship program. The sample consisted of 160 undergraduate nursing students assigned into two groups. The intervention was a three-day refresher course directed by nurse instructors for the intervention group, focused on 10 basic nursing procedures in the clinical skills lab. The control group did not receive any intervention. The students' anxiety, clinical self-efficacy and clinical skills practice were measured before and after intervention in both groups. The results indicated that the students who took part in the refresher course experienced lower anxiety levels, higher levels of clinical self-efficacy, and had better clinical skills during their internships. The undergraduate nursing curriculum can be strengthened by the basic clinical skills refresher course. This refresher course can bridge the theory-practice gap and provide a better transition from the student to nurse role. abstract_id: PUBMED:25086766 Posttraumatic stress disorder in African Americans: a two year follow-up study. The present study was a prospective, naturalistic, longitudinal investigation of the two year course of posttraumatic stress disorder (PTSD) in a sample of African Americans with anxiety disorders. The study objectives were to examine the two year course of PTSD and to evaluate differences between African Americans with PTSD and anxiety disorders and African Americans with anxiety disorders but no PTSD with regard to comorbidity, psychosocial impairment, physical and emotional functioning, and treatment participation. The participants were 67 African Americans with PTSD and 98 African Americans without PTSD (mean age 41.5 years, 67.3% female). Individuals with PTSD were more likely to have higher comorbidity and lower functioning, and they were less likely to seek treatment than those with other anxiety disorders but no PTSD. The rate of recovery from PTSD over two years was 0.10 and recovery from comorbid Major Depressive Disorder was 0.55. PTSD appears to be persistent over time in this population. The rates of recovery were lower than what has been reported in previous longitudinal studies with predominantly non-Latino Whites. It is imperative to examine barriers to treatment and factors related to treatment engagement for this population.
Answer: The two-year course trajectories of anxiety disorders have been examined to determine whether DSM classifications are clinically relevant for predicting the course of these disorders. A study using data from the Netherlands Study of Depression and Anxiety (NESDA) identified three classes of anxiety course trajectories: minimal symptoms over time, moderately severe chronic, and severe chronic. The study found that DSM-IV categories such as panic disorder with agoraphobia, social phobia, and generalized anxiety disorder were predictors of chronicity. However, baseline severity, duration of anxiety, and disability were better predictors of severe chronic course trajectories than DSM-IV categories. Additionally, partner status, age at onset, childhood trauma, and comorbid depressive disorder also predicted chronic courses (PUBMED:24912140). Another study from the NESDA cohort found that diagnostic classifications of anxiety disorders were not stable over time, with high transition percentages between different anxiety disorder diagnoses. The study concluded that the validity of different anxiety disorder categories is not supported by these longitudinal patterns, suggesting a more dimensional approach to the classification of anxiety disorders might be more appropriate (PUBMED:26544613). Furthermore, a study comparing diagnostic and symptom course trajectories across different anxiety disorders found that prognosis varied across disorders, with some having favorable remittance rates and others showing more chronic courses. The study also indicated that avoidance behavior symptoms predicted outcomes better than anxiety arousal symptoms, suggesting that DSM-IV recognized specific anxiety disorders are useful in predicting outcomes largely due to the severity of avoidance behavior (PUBMED:23106669). In conclusion, while DSM classifications do provide some predictive value for the course of anxiety disorders, other clinical indicators such as baseline severity, duration of anxiety, disability, and avoidance behavior symptoms may offer better differentiation of chronic course trajectories. These findings support current debates on staging and profiling of mental disorders and suggest that a more nuanced, dimensional approach to classification may be beneficial in clinical care (PUBMED:24912140; PUBMED:26544613; PUBMED:23106669).
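Several of the abstracts above contrast a categorical description of course (consistently recovered, intermittently recovered, intermittently recurrent, consistently chronic) with dimensional, data-driven approaches such as latent class growth analysis. As a minimal sketch of the categorical side only, the snippet below classifies per-patient diagnosis indicators across follow-up waves into those four groups and tallies their shares; the patient records and the classification rule are simplified illustrations, and the cited studies rely on structured diagnostic interviews and mixture-model software rather than rules this crude.

```python
# Simplified sketch of a categorical course classification of the kind used in
# long-term follow-up studies: each patient has a True/False diagnosis indicator
# at each follow-up wave (e.g. 2, 4, 6 and 9 years). The records are synthetic.
def classify_course(waves):
    """waves: list of bools, True = anxiety/affective disorder present at that wave."""
    if not any(waves):
        return "consistently recovered"
    if all(waves):
        return "consistently chronic"
    # Mixed pattern: call it 'recurrent' if the disorder is still present at the
    # final wave, otherwise 'intermittently recovered' (a simplifying rule).
    return "intermittently recurrent" if waves[-1] else "intermittently recovered"

patients = {
    "p01": [False, False, False, False],
    "p02": [True,  False, False, False],
    "p03": [True,  False, True,  True],
    "p04": [True,  True,  True,  True],
}

counts = {}
for waves in patients.values():
    label = classify_course(waves)
    counts[label] = counts.get(label, 0) + 1

for label, n in counts.items():
    print(f"{label}: {n / len(patients):.0%}")
```

A dimensional analysis would instead fit trajectory classes to repeated symptom-severity scores, which is why the two approaches can give such different impressions of chronicity.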
Instruction: Does the National Health Insurance Scheme provide financial protection to households in Ghana? Abstracts: abstract_id: PUBMED:26275412 Does the National Health Insurance Scheme provide financial protection to households in Ghana? Background: Excessive healthcare payments can impede access to health services and also disrupt the welfare of households with no financial protection. Health insurance is expected to offer financial protection against health shocks. Ghana began the implementation of its National Health Insurance Scheme (NHIS) in 2004. The NHIS is aimed at removing the financial barrier to healthcare by limiting direct out-of-pocket health expenditures (OOPHE). The study examines the effect of the NHIS on OOPHE and how it protects households against catastrophic health expenditures. Methods: Data was obtained from a cross-sectional representative household survey involving 2,430 households from three districts across Ghana. All OOPHE associated with treatment seeking for reported illness in the household in the last 4 weeks preceding the survey were analysed and compared between insured and uninsured persons. The incidence and intensity of catastrophic health expenditures (CHE) among households were measured by the catastrophic health payment method. The relative effect of NHIS on the incidence of CHE in the household was estimated by multiple logistic regression analysis. Results: About 36% of households reported at least one illness during the 4 weeks period. Insured patients had significantly lower direct OOPHE for out-patient and in-patient care compared to the uninsured. On financial protection, the incidence of CHE was lower among insured households (2.9%) compared to the partially insured (3.7%) and the uninsured (4.0%) at the 40% threshold. The incidence of CHE was however significantly lower among fully insured households (6.0%) which sought healthcare from NHIS accredited health facilities compared to the partially insured (10.1%) and the uninsured households (23.2%). The likelihood of a household incurring CHE was 4.2 times less likely for fully insured and 2.9 times less likely for partially insured households relative to being uninsured. The NHIS has however not completely eliminated OOPHE for the insured and their households. Conclusion: The NHIS has significant effect in reducing OOPHE and offers financial protection against CHE for insured individuals and their households though they still made some out-of-pocket payments. Efforts should aim at eliminating OOPHE for the insured if the objective for establishing the NHIS is to be achieved. abstract_id: PUBMED:33848718 Does Ghana's National Health Insurance Scheme provide financial protection to tuberculosis patients and their households? Financial barriers are a key limitation to accessing health services, such as tuberculosis (TB) care in resource-poor settings. In Ghana, the National Health Insurance Scheme (NHIS), established in 2003, officially offers free TB care to those enrolled. Using data from the first Ghana's national TB patient cost survey, we address two key questions 1) what are the key determinants of costs and affordability for TB-affected households, and 2) what would be the impact on costs for TB-affected households of expanding NHIS to all TB patients? 
We reported the level of direct and indirect costs, the proportion of TB-affected households experiencing catastrophic costs (defined as total TB-related costs, i.e., direct and indirect, exceeding 20% of their estimated pre-diagnosis annual household income), and potential determinants of costs, stratified by insurance status. Regression models were used to determine drivers of costs and affordability. The effect of enrolment into NHIS on costs was investigated through Inverse Probability of Treatment Weighting Analysis. Higher levels of education and income, a bigger household size and a multidrug-resistant TB diagnosis were associated with higher direct costs. Being in a low wealth quintile, living in an urban setting, losing one's job and having MDR-TB increased the odds of experiencing catastrophic costs. There was no evidence to suggest that enrolment in NHIS defrayed medical, non-medical, or total costs, nor mitigated income loss. Even if NHIS were expanded to all TB patients, the analyses suggest there would be no impact of insurance on medical cost, income loss, or total cost. An expansion of the NHIS programme will not relieve the financial burden for TB-affected households. Social protection schemes require enhancement if they are to protect TB patients from financial catastrophe. abstract_id: PUBMED:31539034 Inequalities in the benefits of national health insurance on financial protection from out-of-pocket payments and access to health services: cross-sectional evidence from Ghana. A central pillar of universal health coverage (UHC) is to achieve financial protection from catastrophic health expenditure. There are concerns, however, that national health insurance programmes with premiums may not benefit impoverished groups. In 2003, Ghana became the first sub-Saharan African country to introduce a National Health Insurance Scheme (NHIS) with progressively structured premium charges. In this study, we test the impact of being insured on utilization and financial risk protection compared with no enrolment, using the 2012-13 Ghana Living Standards Survey (n = 72 372). Consistent with previous studies, we observed that participating in health insurance significantly decreased the probability of unmet medical needs by 15 percentage points (p.p.) and that of incurring catastrophic out-of-pocket (OOP) health payments by 7 p.p. relative to no enrolment in the NHIS. Households living outside a 1-h radius of the nearest hospital had lower reductions in financial risk from excess OOP medical spending relative to households living closer (-5 p.p. vs -9 p.p.). We also find evidence that in Ghana, the scheme was highly pro-poor. Once insured, the poorest 40% of households experienced significantly larger improvements in medical utilization (18 p.p. vs. 8 p.p.) and substantively larger reductions in catastrophic OOP health expenditure (-10 p.p. vs. -6 p.p.) compared with the richest households. However, health insurance did not protect vulnerable persons equally from financial risk. Once insured, poor, low-educated and self-employed households living far from hospitals had significantly lower reductions in catastrophic OOP medical spending compared with their counterparts living closer. Taken together, we show that enrolment in the NHIS is associated with improved financial protection but less so among geographically remote vulnerable groups.
Efforts to boost not just insurance uptake but also health service delivery may be needed as a supplement for insurance schemes to accelerate progress towards UHC. abstract_id: PUBMED:31885056 Health insurance coverage, type of payment for health insurance, and reasons for not being insured under the National Health Insurance Scheme in Ghana. Background: Ghana's National Health Insurance Scheme has improved access to care, although equity and sustainability issues remain. This study examined health insurance coverage, type of payment for health insurance and reasons for being uninsured under the National Health Insurance Scheme in Ghana. Methods: The 2014 Ghana Demographic Health Survey datasets with information for 9396 women and 3855 men were analyzed. The study employed cross-sectional, nationally representative data. The frequency distribution of socio-demographics and health insurance coverage differentials among men and women is first presented. Further statistical analysis applies a two-stage probit Heckman selection model to determine socio-demographic factors associated with type of payment for insurance and reasons for being uninsured among men and women under the National Health Insurance Scheme in Ghana. The selection equation in the Heckman selection model also shows the association between insurance status and socio-demographic factors. Results: About 66.0% of women and 52.6% of men were covered by health insurance. Wealth status determined insurance status, with poorest, poorer and middle-income groups being less likely to pay themselves for insurance. Women never in union and widowed women were less likely to be covered relative to married women although this group was more likely to pay NHIS premiums themselves. Wealth status (poorest, poorer and middle-income) was associated with non-affordability as a reason for being uninsured. Geographic disparities were also found. Rural men and nulliparous women were also more likely to mention no need of insurance as a reason for being uninsured. Conclusion: Tailored policies that reduce delays in membership enrolment, improve perceptions and awareness of the National Health Insurance Scheme's role in reducing catastrophic spending, and address financial barriers to enrolment among some groups can be positive precursors to improving trust and enrolment and addressing broad equity concerns regarding the National Health Insurance Scheme. abstract_id: PUBMED:28532403 Assessing the impoverishment effects of out-of-pocket healthcare payments prior to the uptake of the national health insurance scheme in Ghana. Background: There is a global concern regarding how households could be protected from relatively large healthcare payments which are a major limitation to accessing healthcare. Such payments also endanger the welfare of households with the potential of moving households into extreme impoverishment. This paper examines the impoverishing effects of out-of-pocket (OOP) healthcare payments in Ghana prior to the introduction of Ghana's national health insurance scheme. Methods: Data come from the Ghana Living Standard Survey 5 (2005/2006). Two poverty lines ($1.25 and $2.50 per capita per day at the 2005 purchasing power parity) are used in assessing the impoverishing effects of OOP healthcare payments. We computed the poverty headcount, poverty gap, normalized poverty gap and normalized mean poverty gap indices using both poverty lines.
We examine these indicators at a national level and disaggregated by urban/rural locations, across the three geographical zones, and across the ten administrative regions in Ghana. Also the Pen's parade of "dwarfs and a few giants" is used to illustrate the decreasing welfare effects of OOP healthcare payments in Ghana. Results: There was a high incidence and intensity of impoverishment due to OOP healthcare payments in Ghana. These payments contributed to a relative increase in poverty headcount by 9.4 and 3.8% using the $1.25/day and $2.5/day poverty lines, respectively. The relative poverty gap index was estimated at 42.7 and 10.5% respectively for the lower and upper poverty lines. Relative normalized mean poverty gap was estimated at 30.5 and 6.4%, respectively, for the lower and upper poverty lines. The percentage increase in poverty associated with OOP healthcare payments in Ghana is highest among households in the middle zone with an absolute increase estimated at 2.3% compared to the coastal and northern zones. Conclusion: It is clear from the findings that without financial risk protection, households can be pushed into poverty due to OOP healthcare payments. Even relatively richer households are impoverished by OOP healthcare payments. This paper presents baseline indicators for evaluating the impact of Ghana's national health insurance scheme on impoverishment due to OOP healthcare payments. abstract_id: PUBMED:29914497 Examining equity in health insurance coverage: an analysis of Ghana's National Health Insurance Scheme. Background: Following years of out-of-pocket payment for healthcare, some countries in Africa including Ghana, Kenya and Rwanda have instituted social health protection programs through health insurance to provide access to quality and affordable healthcare especially for the poor. This paper examines equity in coverage under Ghana's National Health Insurance Scheme (NHIS). Methods: Secondary data from the 2008 Ghana Demographic and Health Survey based on an analytical sample of 4821 females (15-49 years) and 4568 males (15-59 years) were analysed using descriptive, bivariate and multivariate methods. Concentration curves and indices were used to examine equity in coverage on the NHIS. Results: As at 2008, more than 60% of Ghanaians aged 15-59 years were not covered under the NHIS with slightly more females (38.9%) than males (29.7%) covered. Coverage was highest among the highly educated, professionals, those from households in the richest wealth quintile and urban residents. Lack of coverage was most concentrated among the poor. Conclusions: Universal coverage under the NHIS is far from being achieved with marked exclusion of the poor. There is the need for deliberate action to enrol the poor under the NHIS. abstract_id: PUBMED:37264458 Evaluating the effectiveness of the National Health Insurance Fund in providing financial protection to households with hypertension and diabetes patients in Kenya. Background: Non-communicable diseases (NCDs) can impose a substantial financial burden to households in the absence of an effective financial risk protection mechanism. The national health insurance fund (NHIF) has included NCD services in its national scheme. We evaluated the effectiveness of NHIF in providing financial risk protection to households with persons living with hypertension and/or diabetes in Kenya. 
Methods: We carried out a prospective cohort study, following 888 households with at least one individual living with hypertension and/or diabetes for 12 months. The exposure arm comprised households that were enrolled in the NHIF national scheme, while the control arm comprised households that were not enrolled in the NHIF. Study participants were drawn from two counties in Kenya. We used the incidence of catastrophic health expenditure (CHE) as the outcome of interest. We used coarsened exact matching and a conditional logistic regression model to analyse the odds of CHE among households enrolled in the NHIF compared with unenrolled households. Socioeconomic inequality in CHE was examined using concentration curves and indices. Results: We found strong evidence that NHIF-enrolled households spent a lower share (12.4%) of their household budget on healthcare compared with unenrolled households (23.2%) (p = 0.004). While households that were enrolled in NHIF were less likely to incur CHE, we did not find strong evidence that they are better protected from CHE compared with households without NHIF (OR = 0.67; p = 0.47). The concentration index (CI) for CHE showed a pro-poor distribution (CI: -0.190, p < 0.001). Almost half (46.9%) of households reported active NHIF enrolment at baseline but this reduced to 10.9% after one year, indicating an NHIF attrition rate of 76.7%. The depth of NHIF cover (i.e., the share of out-of-pocket healthcare costs paid by NHIF) among households with active NHIF was 29.6%. Conclusion: We did not find strong evidence that the NHIF national scheme is effective in providing financial risk protection to households with individuals living with hypertension and/or diabetes in Kenya. This could partly be explained by the low depth of cover of the NHIF national scheme, and the high attrition rate. To enhance NHIF effectiveness, there is a need to revise the NHIF benefit package to include essential hypertension and/or diabetes services, review existing provider payment mechanisms to explicitly reimburse these services, and extend the existing insurance subsidy programme to include individuals in the informal labour market. abstract_id: PUBMED:27449349 Can health insurance protect against out-of-pocket and catastrophic expenditures and also support poverty reduction? Evidence from Ghana's National Health Insurance Scheme. Background: Since 2004, Ghana has been implementing a National Health Insurance Scheme (NHIS) to minimize financial barriers to health care at the point of use. Usually health insurance is expected to offer financial protection to households. This study aims to analyze the effect of health insurance on household out-of-pocket expenditure (OOPE), catastrophic expenditure (CE) and poverty. Methods: We conducted two repeated household surveys in two regions of Ghana in 2009 and 2011. We first analyzed the effect of OOPE on poverty by estimating poverty headcount before and after OOPE were incurred. We also employed probit models and use of instrumental variables to analyze the effect of health insurance on OOPE, CE and poverty. Results: Our findings showed that between 7-18% of insured households incurred CE as a result of OOPE whereas this was between 29-36% for uninsured households. In addition, between 3-5% of both insured and uninsured households fell into poverty due to OOPE. Our regression analyses revealed that health insurance enrolment reduced OOPE by 86% and protected households against CE and poverty by 3.0% and 7.5% respectively.
Conclusion: This study provides evidence that high OOPE leads to CE and poverty in Ghana but enrolment into the NHIS reduces OOPE, provides financial protection against CE and reduces poverty. These findings support the pro-poor policy objective of Ghana's National Health Insurance Scheme and hold relevance for other low and middle income countries implementing or aiming to implement insurance schemes. abstract_id: PUBMED:25595036 Refusal to enrol in Ghana's National Health Insurance Scheme: is affordability the problem? Background: Access to health insurance is expected to have a positive effect in improving access to healthcare and to offer financial risk protection to households. Ghana began the implementation of a National Health Insurance Scheme (NHIS) in 2004 as a way to ensure equitable access to basic healthcare for all residents. After a decade of its implementation, national coverage is just about 34% of the national population. Affordability of the NHIS contribution is often cited by households as a major barrier to enrolment in the NHIS without any rigorous analysis of this claim. In light of the global interest in achieving universal health insurance coverage, this study seeks to examine the extent to which affordability of the NHIS contribution is a barrier to full insurance for households and a burden on their resources. Methods: The study uses data from a cross-sectional household survey involving 2,430 households from three districts in Ghana conducted between January and April 2011. Affordability of the NHIS contribution is analysed using the household budget-based approach based on the normative definition of affordability. The burden of the NHIS contributions on households is assessed by relating the expected annual NHIS contribution to household non-food expenditure and total consumption expenditure. Households which cannot afford full insurance were identified. Results: The results show that 66% of uninsured households and 70% of partially insured households could afford full insurance for their members. Enrolling all household members in the NHIS would account for 5.9% of household non-food expenditure or 2.0% of total expenditure but higher for households in the first (11.4%) and second (7.0%) socio-economic quintiles. All the households (29%) identified as unable to afford full insurance were in the two lower socio-economic quintiles and had large household sizes. Non-financial factors relating to attributes of the insurer and health system problems also affect enrolment in the NHIS. Conclusion: Affordability of full insurance would be a burden on households with low socio-economic status and large household size. Innovative measures are needed to encourage households that can afford it to enrol. Policy should aim at abolishing the registration fee for children, pricing insurance according to socio-economic status of households and addressing the inimical non-financial factors to increase NHIS coverage. abstract_id: PUBMED:22791557 Universal financial protection through National Health Insurance: a stakeholder analysis of the proposed one-time premium payment policy in Ghana. Extending coverage to the informal sector is a key challenge to achieving universal coverage through contributory health insurance schemes. Ghana introduced a mandatory National Health Insurance system in 2004 to provide financial protection for both the formal and informal sectors through a combination of taxes and annual premium payments.
As part of its election promise in 2008, the current government (then in opposition) promised to make the payment of premiums 'one-time'. This has been a very controversial policy issue in Ghana. This study sought to contribute to assessing the feasibility of the proposed policy by exploring the understandings of various stakeholders on the policy, their interests or concerns, potential positions, power and influences on it, as well as the general prospects and challenges for its implementation. Data were gathered from a review of relevant documents in the public domain, 28 key informant interviews and six focus group discussions with key stakeholders in Accra and two other districts. The results show that there is a lot of confusion in stakeholders' understanding of the policy issue, and, because of the uncertainties surrounding it, most powerful stakeholders are yet to take clear positions on it. However, stakeholders raised concerns that revolved around issues such as: the meaning of a one-time premium within an insurance scheme context, the affordability of the one-time premium, financing sources and sustainability of the policy, as well as the likely impact of the policy on equity in access to health care. Policy-makers need to clearly explain the meaning of the one-time premium policy and how it will be funded, and critically consider the concerns raised by stakeholders before proceeding with further attempts to implement it. For other countries planning universal coverage reforms, it is important that the terminology of their reforms clearly reflects policy objectives. Answer: The National Health Insurance Scheme (NHIS) in Ghana has been found to have a significant effect in reducing out-of-pocket healthcare expenditures (OOPHE) and offering financial protection against catastrophic health expenditures (CHE) for insured individuals and their households. Insured patients have significantly lower direct OOPHE for both outpatient and inpatient care compared to the uninsured, and the incidence of CHE is lower among insured households. Fully insured households that sought healthcare from NHIS accredited health facilities had a significantly lower incidence of CHE compared to partially insured and uninsured households. The likelihood of a household incurring CHE was 4.2 times less likely for fully insured and 2.9 times less likely for partially insured households relative to being uninsured. However, the NHIS has not completely eliminated OOPHE for the insured and their households, indicating that efforts should aim at eliminating OOPHE for the insured if the objective for establishing the NHIS is to be achieved (PUBMED:26275412). Despite these findings, there are still challenges in achieving universal financial protection through the NHIS. For instance, in the case of tuberculosis (TB) patients, there was no evidence to suggest that enrolment in NHIS defrayed medical, non-medical, or total costs, nor mitigated income loss. An expansion of the NHIS programme was not found to relieve the financial burden for TB-affected households, indicating that social protection schemes require enhancement if they are to protect TB patients from financial catastrophe (PUBMED:33848718). Moreover, while the NHIS has been associated with improved financial protection, it has been less effective among geographically remote vulnerable groups. 
Once insured, poor, low-educated, and self-employed households living far from hospitals had significantly lower reductions in catastrophic OOP medical spending compared with their counterparts living closer (PUBMED:31539034). In summary, the NHIS in Ghana does provide some level of financial protection to households, particularly in reducing OOPHE and protecting against CHE. However, there are still gaps in coverage and effectiveness, especially for certain diseases like TB and among vulnerable populations in remote areas, which need to be addressed to achieve the goal of universal health coverage and comprehensive financial protection (PUBMED:26275412, PUBMED:33848718, PUBMED:31539034).
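The financial-protection analyses summarised above turn on two calculations that are easy to misread: the incidence of catastrophic health expenditure (CHE), i.e. the share of households whose out-of-pocket spending exceeds a threshold fraction of the household budget, and the concentration index, whose sign indicates whether CHE falls mainly on poorer (negative) or richer (positive) households. A minimal Python sketch of both is given below under simplifying assumptions; the 10% threshold, variable names and data layout are illustrative and are not taken from the studies cited above.

    import numpy as np

    def che_incidence(oop, budget, threshold=0.10):
        # Share of households whose out-of-pocket (OOP) health spending
        # exceeds `threshold` of the total household budget (illustrative 10%).
        share = np.asarray(oop, dtype=float) / np.asarray(budget, dtype=float)
        return float(np.mean(share > threshold))

    def concentration_index(che_flag, ses_score):
        # Concentration index of a binary CHE indicator over a socioeconomic
        # ranking, via the covariance formula CI = 2 * cov(y, r) / mean(y),
        # where r is the fractional rank by socioeconomic status.
        y = np.asarray(che_flag, dtype=float)[np.argsort(ses_score)]
        n = len(y)
        r = (np.arange(1, n + 1) - 0.5) / n
        return float(2.0 * np.cov(y, r, bias=True)[0, 1] / y.mean())

A negative index (for example the -0.190 reported above) indicates a pro-poor distribution, i.e. catastrophic spending concentrated among poorer households.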
Instruction: Does it matter which exercise? Abstracts: abstract_id: PUBMED:29029478 Effects of exercise on capillaries in the white matter of transgenic AD mice. Previous studies have shown that exercise can prevent white matter atrophy in APP/PS1 transgenic Alzheimer's disease (AD) mice. However, the mechanism of this protective effect remains unknown. To further understand this issue, we investigated the effects of exercise on the blood supply of white matter in transgenic AD mice. Six-month-old male APP/PS1 mice were randomly divided into a control group and a running group, and age-matched non-transgenic littermates were used as a wild-type control group. Mice in the running group ran on a treadmill at low intensity for four months. Then, spatial learning and memory abilities, white matter and white matter capillaries were examined in all mice. The 10-month-old AD mice exhibited deficits in cognitive function, and 4 months of exercise improved these deficits. The white matter volume and the total length, total volume and total surface area of the white matter capillaries were decreased in the 10-month-old AD mice, and 4 months of exercise dramatically delayed the changes in these parameters in the AD mice. Our results demonstrate that even low-intensity running exercise can improve spatial learning and memory abilities, delay white matter atrophy and protect white matter capillaries in early-stage AD mice. Protecting capillaries might be an important structural basis for the exercise-induced protection of the structural integrity of white matter in AD. abstract_id: PUBMED:33192486 Long-Term Running Exercise Delays Age-Related Changes in White Matter in Rats. Running exercise, one of the strategies to protect brain function, has positive effects on neurons and synapses in the cortex and hippocampus. However, white matter, as an important structure of the brain, is often overlooked, and the effects of long-term running exercise on white matter are unknown. Here, 14-month-old male Sprague-Dawley (SD) rats were divided into a middle-aged control group (18-month-old control group), an old control group (28-month-old control group), and a long-term runner group (28-month-old runner group). The rats in the runner group underwent a 14-month running exercise regime. Spatial learning ability was tested using the Morris water maze, and white matter volume, myelinated fiber parameters, total mature oligodendrocyte number, and white matter capillary parameters were investigated using stereological methods. The levels of growth factors related to nerve growth and vascular growth in peripheral blood and the level of neurite outgrowth inhibitor-A (Nogo-A) in white matter were measured using an enzyme-linked immunosorbent assay (ELISA). The present results indicated that long-term running exercise effectively delayed the age-related decline in spatial learning ability and the atrophy of white matter by protecting against age-related changes in myelinated fibers and oligodendrocytes in the white matter. Moreover, long-term running exercise prevented age-related changes in capillaries within white matter, which might be related to the protective effects of long-term exercise on aged white matter. abstract_id: PUBMED:29098693 Exercise protects myelinated fibers of white matter in a rat model of depression. The antidepressive effects of exercise have been a focus of research and are hypothesized to remodel the brain networks constructed by myelinated fibers. 
However, whether the antidepressant effects of exercise are dependent on changes in white matter myelination is unknown. Therefore, we chose chronic unpredictable stress (CUS) as a model of depression and designed an experiment. After a 4-week CUS period, 40 animals were tested using the sucrose preference test (SPT) and the open field test (OFT). The depressed rats then underwent a 4-week running exercise. Next, electron microscopy and unbiased stereological methods were used to investigate white matter changes in the rats. After the 4-week CUS stimulation, body weight, sucrose preference and scores on the OFT were significantly lower in the depression rats than in the unstressed rats (p < .05). After undergoing a 4-week running exercise, the depression rats showed a significantly greater sucrose preference than the depression control rats without running exercise (p < .05). Furthermore, the white matter parameters of the depression rats (including the white matter volumes, the length and volumes of myelinated fibers, and the volumes and thickness of the myelin sheaths) were significantly reduced after the CUS period (p < .05). However, these white matter parameters were significantly increased after running exercise (p < .05). The present study is the first to provide evidence that running exercise has positive effects on white matter and the myelinated fibers of white matter in depressed rats, and this evidence might provide an important theoretical basis for the exercise-mediated treatment of depression. abstract_id: PUBMED:24797659 An 8-month exercise intervention alters frontotemporal white matter integrity in overweight children. In childhood, excess adiposity and low fitness are linked to poor academic performance, lower cognitive function, and differences in brain structure. Identifying ways to mitigate obesity-related alterations is of current clinical importance. This study examined the effects of an 8-month exercise intervention on the uncinate fasciculus, a white matter fiber tract connecting frontal and temporal lobes. Participants consisted of 18 unfit, overweight 8- to 11-year-old children (94% Black) who were randomly assigned to either an aerobic exercise (n = 10) or a sedentary control group (n = 8). Before and after the intervention, all subjects participated in a diffusion tensor MRI scan. Tractography was conducted to isolate the uncinate fasciculus. The exercise group showed improved white matter integrity as compared to the control group. These findings are consistent with an emerging literature suggesting beneficial effects of exercise on white matter integrity. abstract_id: PUBMED:34174392 White matter plasticity in healthy older adults: The effects of aerobic exercise. White matter deterioration is associated with cognitive impairment in healthy aging and Alzheimer's disease. It is critical to identify interventions that can slow down white matter deterioration. So far, clinical trials have failed to demonstrate the benefits of aerobic exercise on the adult white matter using diffusion Magnetic Resonance Imaging. Here, we report the effects of 6-month aerobic walking and dance interventions (clinical trial NCT01472744) on white matter integrity in healthy older adults (n = 180, 60-79 years), measured by changes in the ratio of calibrated T1- to T2-weighted images (T1w/T2w).
Specifically, the aerobic walking and social dance interventions resulted in positive changes in the T1w/T2w signal in late-myelinating regions, as compared to widespread decreases in the T1w/T2w signal in the active control. Notably, in the aerobic walking group, positive change in the T1w/T2w signal correlated with improved episodic memory performance. Lastly, intervention-induced increases in cardiorespiratory fitness did not correlate with change in the T1w/T2w signal. Together, our findings suggest that white matter regions that are vulnerable to aging retain some degree of plasticity that can be induced by aerobic exercise training. In addition, we provided evidence that the T1w/T2w signal may be a useful and broadly accessible measure for studying short-term within-person plasticity and deterioration in the adult human white matter. abstract_id: PUBMED:31279793 Effect of aerobic exercise on white matter microstructure in the aging brain. Aging is associated with decline in white matter (WM) microstructure, decreased cognitive functioning, and increased risk of Alzheimer's disease and related dementias. Recent research has identified aerobic physical exercise as a promising intervention for increasing white matter microstructure in aging, with the aim of increasing cognitive abilities, and protecting against neurodegenerative processes. However, the degree to which white matter microstructure can be protected or improved with exercise remains incompletely understood. Here, a sub-group of 25 healthy, sedentary participants (aged 57 to 86 years; M = 67.1; SD = 7.9; 11 female, 14 male) from the larger Brain in Motion Study (Tyndall et al., 2013) underwent diffusion tensor imaging (DTI) before and after a six-month aerobic exercise intervention. DTI data were analysed with FSL's Tract-Based Spatial Statistics (TBSS) to determine whether WM microstructure improved, as defined by increased fractional anisotropy (FA) and/or decreased mean diffusivity (MD), after the aerobic exercise intervention. Neither FA nor MD of the cerebral WM were significantly correlated with either age or cardiovascular fitness at baseline. Whole-brain WM mean FA decreased over the intervention while mean MD showed no significant change. Longitudinal TBSS analyses revealed decreased FA in the left uncinate fasciculus, left anterior corona radiata, left inferior fronto-occipital fasciculus, and left anterior thalamic radiation. MD increased in the left forceps major, left inferior longitudinal fasciculus, and left superior longitudinal fasciculus. Results indicate that six months of aerobic exercise in healthy, sedentary older adults was not associated with improvements in FA or MD measures of cerebral WM microstructure. abstract_id: PUBMED:27978791 Exercise Prevents Cognitive Function Decline and Demyelination in the White Matter of APP/PS1 Transgenic AD Mice. Background: Whether exercise could delay the cognitive function decline and structural changes in Alzheimer's disease (AD) is not fully understood. Methods: 6-month-old male APP/PS1 double transgenic mice ran for four months, and then the effects of exercise on the cognitive function and the white matter of AD were investigated. Results: The mean escape latency of the exercised group was significantly shortened when compared to that of the sedentary group. The percentage of time in target quadrant and the target zone frequency of the exercised group were significantly increased when compared to the sedentary group.
The white matter volume, the myelinated fiber volume and axon volume in the white matter of the exercised group were significantly increased when compared to the sedentary group. Conclusion: Exercise could improve the cognitive function in AD, and the effects of exercise on the white matter of AD might be one of the structural bases for the protective effect of exercise on the cognitive function of AD. The exercise-induced protection of the white matter in AD might be due to the fact that the exercise prevented the demyelination of the myelinated fibers in the white matter of AD. abstract_id: PUBMED:37092215 Can exercise-based interventions reverse gray and white matter abnormalities in patients with chronic musculoskeletal pain? A systematic review. Background: Recent evidence has suggested that reversal of gray or white matter abnormalities could be a criterion of recovery in patients with chronic pain. Objective: To determine the effectiveness of exercise-based interventions in reversing gray and white matter abnormalities in patients with chronic musculoskeletal pain. Methods: An electronic search was performed in the MEDLINE (Via PubMed), EMBASE, Web of Science, LILACS, SPORTDiscus, CINAHL, PEDro, and CENTRAL databases. Randomized clinical trials (RCTs) including patients with chronic musculoskeletal pain, which assessed the change in gray and white matter abnormalities after exercise-based interventions were selected. The risk of bias was assessed using the Risk of Bias II tool. Results: Four RCTs were included (n= 386). Three studies showed reversal of abnormalities with exercise-based interventions compared to control groups. The reversal was observed in the gray matter volume in the medial orbital prefrontal cortex and in the supplementary motor area of patients with osteoarthritis, in the hippocampus, insula, amygdala and thalamus in fibromyalgia patients. Furthermore, in patients with chronic spinal pain, reversal was observed in the gray matter thickness of the frontal middle caudal cortex and in the caudate, putamen and thalamus gray matter volume. Conclusions: There is insufficient evidence to determine the effectiveness of exercise-based interventions for reversing gray and white matter abnormalities in patients with chronic pain. Further studies are still needed in this field. abstract_id: PUBMED:27075416 Running exercise protects the capillaries in white matter in a rat model of depression. Running has been shown to improve depressive symptoms when used as an adjunct to medication. However, the mechanisms underlying the antidepressant effects of running are not fully understood. Changes of capillaries in white matter have been discovered in clinical patients and depression model rats. Considering the important part of white matter in depression, running may cause capillary structural changes in white matter. Chronic unpredictable stress (CUS) rats were provided with a 4-week running exercise (from the fifth week to the eighth week) for 20 minutes each day for 5 consecutive days each week. Anhedonia was measured by a behavior test. Furthermore, capillary changes were investigated in the control group, the CUS/Standard group, and the CUS/Running group using stereological methods. The 4-week running increased sucrose consumption significantly in the CUS/Running group and had significant effects on the total volume, total length, and total surface area of the capillaries in the white matter of depression rats. 
These results demonstrated that exercise-induced protection of the capillaries in white matter might be one of the structural bases for the exercise-induced treatment of depression. It might provide important parameters for further study of the vascular mechanisms of depression and a new research direction for the development of clinical antidepressant means. J. Comp. Neurol. 524:3577-3586, 2016. © 2016 Wiley Periodicals, Inc. abstract_id: PUBMED:30850255 Determining the organic matter burden of exercise-tracking devices using adenosine triphosphate surveillance. The aim of this project is to determine the amount of organic matter burden focused on exercise-tracking devices worn by heath care professionals in the clinical setting. It was completed by using adenosine triphosphate surveillance, which measures the residual organic matter that remains on a device. Because all but 1 of the devices sampled were considered contaminated, health care professionals should be aware of the potential risk associated with wearing exercise-tracking devices in the clinical setting. Answer: The abstracts provided do not specify that a particular type of exercise is necessary to achieve the benefits discussed; rather, they generally refer to "exercise," "running exercise," or "aerobic exercise" without distinguishing between specific forms of these activities. For instance, the studies mention the use of "running" as a form of exercise in transgenic AD mice (PUBMED:29029478), in a rat model of depression (PUBMED:29098693), and in healthy older adults (PUBMED:34174392). Additionally, "long-term running exercise" is mentioned in relation to delaying age-related changes in white matter in rats (PUBMED:33192486), and "aerobic exercise" is discussed in the context of white matter microstructure in the aging brain (PUBMED:31279793) and in overweight children (PUBMED:24797659). One study specifically mentions "aerobic walking and dance interventions" (PUBMED:34174392), suggesting that both walking and dancing as forms of aerobic exercise could have positive effects on white matter integrity in older adults. Another study refers to an "exercise intervention" without specifying the type, but notes improvements in white matter integrity in overweight children (PUBMED:24797659). In summary, while the abstracts suggest that aerobic and running exercises are beneficial for white matter integrity and cognitive function, they do not provide a direct comparison of different types of exercise to determine if one is superior to another. The key point seems to be that regular physical activity, particularly aerobic in nature, is associated with positive outcomes for brain health.
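Among the measures mentioned in the abstracts above, the T1w/T2w ratio (PUBMED:34174392) is the most directly computable: it is a voxel-wise ratio of calibrated T1-weighted to T2-weighted images, averaged within a region of interest. A minimal sketch follows, assuming the images are already calibrated and co-registered; the file names and the white-matter mask are hypothetical, and the study's full calibration pipeline is not reproduced.

    import numpy as np
    import nibabel as nib

    def mean_t1w_t2w_ratio(t1w_path, t2w_path, mask_path, eps=1e-6):
        # Voxel-wise T1w/T2w ratio averaged within a white-matter mask.
        t1w = nib.load(t1w_path).get_fdata()
        t2w = nib.load(t2w_path).get_fdata()
        mask = nib.load(mask_path).get_fdata() > 0
        ratio = t1w / (t2w + eps)   # eps guards against division by zero
        return float(ratio[mask].mean())

    # Within-person change over an intervention (hypothetical file names):
    # delta = (mean_t1w_t2w_ratio("post_t1w.nii.gz", "post_t2w.nii.gz", "wm_mask.nii.gz")
    #          - mean_t1w_t2w_ratio("pre_t1w.nii.gz", "pre_t2w.nii.gz", "wm_mask.nii.gz"))

A positive change within late-myelinating regions is the kind of signal the walking and dance groups showed in that trial.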
Instruction: Does nonpayment for hospital-acquired catheter-associated urinary tract infections lead to overtesting and increased antimicrobial prescribing? Abstracts: abstract_id: PUBMED:22700826 Does nonpayment for hospital-acquired catheter-associated urinary tract infections lead to overtesting and increased antimicrobial prescribing? Background: On 1 October 2008, in an effort to stimulate efforts to prevent catheter-associated urinary tract infection (CAUTI), the Centers for Medicare & Medicaid Services (CMS) implemented a policy of not reimbursing hospitals for hospital-acquired CAUTI. Since any urinary tract infection present on admission would not fall under this initiative, concerns have been raised that the policy may encourage more testing for and treatment of asymptomatic bacteriuria. Methods: We conducted a retrospective multicenter cohort study with time series analysis of all adults admitted to the hospital 16 months before and 16 months after policy implementation among participating Society for Healthcare Epidemiology of America Research Network hospitals. Our outcomes were frequency of urine culture on admission and antimicrobial use. Results: A total of 39 hospitals from 22 states submitted data on 2 362 742 admissions. In 35 hospitals affected by the CMS policy, the median frequency of urine culture performance did not change after CMS policy implementation (19.2% during the prepolicy period vs 19.3% during the postpolicy period). The rate of change in urine culture performance increased minimally during the prepolicy period (0.5% per month) and decreased slightly during the postpolicy period (-0.25% per month; P < .001). In the subset of 10 hospitals providing antimicrobial use data, the median frequency of fluoroquinolone antimicrobial use did not change substantially (14.6% during the prepolicy period vs 14.0% during the postpolicy period). The rate of change in fluoroquinolone use increased during the prepolicy period (1.26% per month) and decreased during the postpolicy period (-0.60% per month; P < .001). Conclusions: We found no evidence that CMS nonpayment policy resulted in overtesting to screen for and document a diagnosis of urinary tract infection as present on admission. abstract_id: PUBMED:24361201 Initial impact of Medicare's nonpayment policy on catheter-associated urinary tract infections by hospital characteristics. Aims And Objectives: The goal of this study was to evaluate the trend in urinary tract infections (UTIs) from 2005 to 2009 and determine the initial impact of Medicare's nonpayment policy on the rate of UTIs in acute care hospitals. Background: In October 2008, Medicare commenced its nonpayment policy for the additional care required as a result of hospital-acquired conditions, including catheter-associated urinary tract infections (CAUTIs). CAUTIs are the most common form of hospital-acquired infections. Methods: Rates of CAUTIs were analyzed by patient and hospital characteristics at the hospital level on a quarterly basis, yielding 20 observation points. October 2008 was used as the intervention point. A time series analysis was conducted using the 2005-2009 Nationwide Inpatient Sample datasets. A repeated measures Poisson regression growth curve model was used to analyze the rate of CAUTIs by hospital characteristics. Results: The annual rate of CAUTIs continues to rise; however, the annual rate of change is starting to decline.
The change in rate of CAUTIs was not significantly different before and after the policy's payment change. The results of the adjusted time series analysis show that various hospital characteristics were associated with a significant decline in rate of CAUTIs in quarters 16-20 (after the policy implementation) compared to the rate in time 1-15 (before the policy implementation), while other characteristics were associated with a significant increase in CAUTIs. Conclusions: Medicare's nonpayment policy was not associated with a reduction in hospitals' CAUTI rates. The use of administrative data, improper coding of CAUTIs at the hospital level, and the short time period post-policy implementation were all limitations in this study. abstract_id: PUBMED:22944872 Effect of nonpayment for hospital-acquired, catheter-associated urinary tract infection: a statewide analysis. Background: Most (59% to 86%) hospital-acquired urinary tract infections (UTIs) are catheter-associated urinary tract infections (CAUTIs). As of 2008, claims data are used to deny payment for certain hospital-acquired conditions, including CAUTIs, and publicly report hospital performance. Objective: To examine rates of UTIs in adults that are coded in claims data as hospital-acquired and catheter-associated events and evaluate how often nonpayment for CAUTI lowers hospital payment. Design: Before-and-after study of all-payer cross-sectional claims data. Setting: 96 nonfederal acute care Michigan hospitals. Patients: Nonobstetric adults discharged in 2007 (n = 767 531) and 2009 (n = 781 343). Measurements: Hospital rates of UTIs (categorized as catheter-associated or hospital-acquired) and frequency of reduced payment for hospital-acquired CAUTIs. Results: Hospitals frequently requested payment for non-CAUTIs as secondary diagnoses: 10.0% (95% CI, 9.5% to 10.5%) of discharges in 2007 and 10.3% (CI, 9.8% to 10.9%) in 2009. Hospital rates of CAUTI were very low: 0.09% (CI, 0.06% to 0.12%) in 2007 and 0.14% (CI, 0.11% to 0.17%) in 2009. In 2009, 2.6% (CI, 1.6% to 3.6%) of hospital-acquired UTIs were described as CAUTIs. Nonpayment for hospital-acquired CAUTIs reduced payment for 25 of 781 343 (0.003%) hospitalizations in 2009. Limitations: Data are from only 1 state and involved only 1 year before and after nonpayment for complications. Hospital prevention practices were not examined. Conclusion: Catheter-associated UTI rates determined by claims data seem to be inaccurate and are much lower than expected from epidemiologic surveillance data. The financial impact of current nonpayment policy for hospital-acquired CAUTI is low. Claims data are currently not valid data sets for comparing hospital-acquired CAUTI rates for the purpose of public reporting or imposing financial incentives or penalties. Primary Funding Source: Blue Cross Blue Shield of Michigan Foundation. abstract_id: PUBMED:28057985 The efficacy of noble metal alloy urinary catheters in reducing catheter-associated urinary tract infection. Background: Catheter-associated urinary tract infection (CAUTI) is the most common device-related healthcare-acquired infection. CAUTI can be severe and lead to bacteremia, significant morbidity, prolonged hospital stay, and high antibiotic consumption. Patients And Methods: In this study, we evaluated the CAUTI-reducing efficacy of noble metal alloy catheters in sixty patients (thirty per group) in the Intensive Care Unit (ICU) at the King Fahad Hospital in Saudi Arabia. 
The study was a single-blinded, randomized, single-centered, prospective investigation that included patients using urinary catheters for 3 days. Results: A 90% relative risk reduction in the rate of CAUTI was observed with the noble metal alloy catheter compared to the standard catheter (10 vs. 1 case, P = 0.006). When considering both catheter-associated asymptomatic bacteriuria and CAUTI, the relative risk reduction was 83% (12 vs. 2 cases, P = 0.005). In addition to CAUTI, the risk of acquiring secondary bacteremia was lower (100%) for the patients using noble metal alloy catheters (3 cases in the standard group vs. 0 cases in the noble metal alloy catheter group, P = 0.24). No adverse events related to any of the used catheters were recorded. Conclusion: Results from this study revealed that noble metal alloy catheters are safe to use and significantly reduce the CAUTI rate in ICU patients after 3 days of use. abstract_id: PUBMED:36407206 Catheter-Associated Urinary Tract Infection (CAUTI). One of the most prevalent health-related illnesses globally is catheter-associated urinary tract infection (CAUTI). CAUTIs account for almost half of all hospital-acquired diseases. Most healthcare-acquired urinary tract infections result from catheter tube implantation. These tubes connect a collecting system and the urinary bladder via the urethra. These are known as indwelling urinary catheters. The length of catheterization has a key role in starting bacteriuria since biofilm eventually forms on all of these devices. Despite the low percentage of people with bacteriuria who start showing symptoms, there is nevertheless a significant burden associated with this contamination due to the repeated use of indwelling urinary devices. Minimizing indwelling device usage and stopping the catheter as soon as medically possible are the two most crucial preventative measures for bacteriuria and infection when device use is required. Efforts to avoid catheter-acquired urinary infections must be implemented and monitored by infection control guidelines in healthcare institutions. These approaches include monitoring device use, the suitability of device justifications, and problems. Ultimately, technological advancements in device substances that inhibit colony generation will be necessary to avoid these infections. There is still some way by which we can bring down the increased phenomenon of catheter-associated urinary tract contamination by maintaining hygiene while handling the catheter and patients and keeping the infected patients away or isolated from unaffected patients as a precaution. This article mainly focuses on an overview that helps with discussing prevention, risk factors, diagnosis, control and management of CAUTI. abstract_id: PUBMED:37554377 Hospital, Catheter, Peritoneal Dialysis Acquired Infections: Visible Light as a New Solution to Reduce Risk and Incidence. Healthcare-associated infections, often identified as hospital-acquired infections (HAIs), are typically not present during patient contact or admission. Healthcare-associated infections cause longer lengths of stay, increasing costs and mortality. HAI occurring in trauma patients increases the risk for length of stay and higher inpatient costs. Many HAIs are preventable. Antibiotic resistance has increased to a high level, making proper treatment increasingly difficult due to organisms resistant to common antibiotics. Therefore, there is a need for alternate forms of attack against these pathogens.
Currently, light is being applied for the treatment of topical infections. Ultraviolet (UV) light has well-documented antimicrobial properties. UV is damaging to DNA and causes the degradation of plastics, etc., so its use for medical purposes is limited. Using visible light may be more promising. 405-nm light sterilization has been shown to be highly efficacious in reducing bacteria. Light Line Medical, Inc.'s (LLM) patented visible-light platform technology for infection prevention may create a global shift in the prevention of healthcare-associated infections. LLM has developed a proprietary method of delivering light to prevent catheter-associated infections. This technology uses non-UV visible light and can both kill bacteria and prevent biofilm inside and outside a luminal catheter. This is significant as prevention is key. Independent analysis of the prototype system showed the application of the device met the acceptance criterion of 4 x 10^9-10 reduction in Candida albicans, Staphylococcus aureus, Pseudomonas aeruginosa, and other bacteria and fungal species. Further design evolution for this technology continues, and the FDA submission process is underway. abstract_id: PUBMED:20426577 Hospital-acquired catheter-associated urinary tract infection: documentation and coding issues may reduce financial impact of Medicare's new payment policy. Objective: To evaluate whether hospital-acquired catheter-associated urinary tract infections (CA-UTIs) are accurately documented in discharge records with the use of International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes so that nonpayment is triggered, as mandated by the Centers for Medicare and Medicaid Services (CMS) Hospital-Acquired Conditions Initiative. Methods: We conducted a retrospective medical record review of 80 randomly selected adult discharges from May 2006 through September 2007 from the University of Michigan Health System (UMHS) with secondary-diagnosis urinary tract infections (UTIs). One physician-abstractor reviewed each record to categorize UTIs as catheter associated and/or hospital acquired; these results (considered "gold standard") were compared with diagnosis codes assigned by hospital coders. Annual use of the catheter association code (996.64) by UMHS coders was compared with state and US rates by using Healthcare Cost and Utilization Project data. Results: Patient mean age was 58 years; 56 (70%) were women; median length of hospital stay was 6 days; 50 patients (62%) used urinary catheters during hospitalization. Hospital coders had listed 20 secondary-diagnosis UTIs (25%) as hospital acquired, whereas physician-abstractors indicated that 37 (46%) were hospital acquired. Hospital coders had identified no CA-UTIs (code 996.64 was never used), whereas physician-abstractors identified 36 CA-UTIs (45%; 28 hospital acquired and 8 present on admission). Catheter use often was evident only from nursing notes, which, unlike physician notes, cannot be used by coders to assign discharge codes. State and US annual rates of 996.64 coding (approximately 1% of secondary-diagnosis UTIs) were similar to those at UMHS. Conclusions: Hospital coders rarely use the catheter association code needed to identify CA-UTI among secondary-diagnosis UTIs. Coders often listed a UTI as present on admission, although the medical record indicated that it was hospital acquired.
Because coding of hospital-acquired CA-UTI seems to be fraught with error, nonpayment according to CMS policy may not reliably occur. abstract_id: PUBMED:29779689 Prevalence of infections and antimicrobial prescribing in Australian aged care facilities: Evaluation of modifiable and nonmodifiable determinants. Background: Infections in aged care residents are associated with poor outcomes, and inappropriate antimicrobial prescribing contributes to adverse events, such as the emergence of antimicrobial resistance. The objective of this study was to identify resident- and facility-level factors associated with infection and antimicrobial prescribing in Australian aged care residents. Methods: Using data captured by a national point-prevalence survey (the Aged Care National Antimicrobial Prescribing Survey), risk and protective factors were determined by multivariate Poisson regression. Results: In 2017, 292 facilities were surveyed. Infection prevalence was 2.9% (95% confidence interval [CI], 2.6%-3.2%), and antimicrobial use prevalence was 8.9% (95% CI, 8.4%-9.4%). Resident-level factors associated with infection prevalence included urinary catheterization and hospital admission within the last 30 days; facility-level factors included state and multipurpose service provision. Resident-level factors associated with antimicrobial prescribing included infection signs and symptoms; facility-level factors included state, nonmetropolitan locality, and not-for-profit status. Availability of guidelines for urinary tract infection (UTI) management was associated with reduced antimicrobial prescribing. Conclusions: Looking ahead, reports should be peer grouped by significant facility-level factors. Priority should be given to implementing UTI management guidelines and prevention of infection in residents with indwelling urinary catheters. Enhanced monitoring and prevention strategies are required for residents recently admitted to hospital. abstract_id: PUBMED:30191157 Impact of the 2012 Medicaid Health Care-Acquired Conditions Policy on Catheter-Associated Urinary Tract Infection and Vascular Catheter-Associated Infection Billing Rates. In July 2012, the Centers for Medicare & Medicaid Services ceased hospital Medicaid reimbursements for certain health care-acquired conditions. Using billing data from 2008-2014, we found no impact of this policy on rates of 2 targeted conditions, vascular catheter-associated infections and catheter-associated urinary tract infections, among Medicaid or non-Medicaid patients. abstract_id: PUBMED:31110876 The Impact of Catheter-Associated Urinary Tract Infection (CA-UTI) in Critically Ill Children in the Pediatric Intensive Care Unit. Objective Catheter-associated urinary tract infections (CA-UTIs) comprise a significant proportion of hospital-acquired infections. However, the impact of CA-UTIs on important outcome measures, such as length of stay (LOS) and hospital charges, has not been examined in the pediatric intensive care unit (PICU) setting. Design Single-center, retrospective, case-matched, cohort study and financial analysis. Setting PICU in a tertiary-care children's medical center. Patients A total of 41 critically ill children with CA-UTIs and 73 critically ill children without CA-UTI, matched for age, gender, severity of illness, and primary admission diagnosis. Interventions None.
Measurements and Main Results We compared the length of hospital stay (LOS in PICU and in hospital), mortality, and hospital costs in critically ill children with CA-UTIs and their matched controls. Critically ill children experiencing CA-UTI had significantly longer PICU LOS, hospital LOS, duration of mechanical ventilation, and mortality compared with matched controls without CA-UTI. The longer LOS resulted in higher PICU and hospital charges in this group. Conclusion Critically ill children with CA-UTI experience worse outcomes in the PICU compared with those without CA-UTI. Further studies on the impact of CA-UTI in the PICU are warranted. Answer: Based on the provided abstracts, there is no evidence that the Centers for Medicare & Medicaid Services (CMS) nonpayment policy for hospital-acquired catheter-associated urinary tract infections (CAUTIs) has led to overtesting and increased antimicrobial prescribing. A retrospective multicenter cohort study with time series analysis found that the median frequency of urine culture performance did not change after CMS policy implementation, and the median frequency of fluoroquinolone antimicrobial use did not change substantially either (PUBMED:22700826). Additionally, another study indicated that the financial impact of the current nonpayment policy for hospital-acquired CAUTI is low, and claims data are not valid for comparing hospital-acquired CAUTI rates for the purpose of public reporting or imposing financial incentives or penalties (PUBMED:22944872). Furthermore, a study evaluating the trend in urinary tract infections from 2005 to 2009 determined that Medicare's nonpayment policy was not associated with a reduction in hospitals' CAUTI rates (PUBMED:24361201). Another study found that hospital coders rarely use the catheter association code needed to identify CA-UTI among secondary-diagnosis UTIs, suggesting that nonpayment according to CMS policy may not reliably occur due to coding errors (PUBMED:20426577). In summary, the abstracts suggest that the CMS nonpayment policy has not resulted in increased testing or antimicrobial prescribing for CAUTIs, and the policy's financial impact appears to be minimal due to issues with accurate documentation and coding.
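Several of the studies in this entry are interrupted time-series (segmented regression) analyses: a monthly or quarterly rate is modelled with a level term and a slope term that are allowed to change at the October 2008 policy date, and the policy effect is read from the change in slope. The sketch below is a minimal ordinary-least-squares version of that design, assuming a simple monthly series; the actual studies used richer models (one used a repeated-measures Poisson growth curve), so this is illustrative only.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def segmented_regression(rates, policy_start):
        # `rates`: 1-D array of monthly rates; `policy_start`: index of the
        # first post-policy month. The coefficient on `time_after` is the
        # change in monthly slope associated with the post-policy period.
        n = len(rates)
        t = np.arange(n)
        df = pd.DataFrame({
            "rate": rates,
            "time": t,                                      # pre-existing trend
            "post": (t >= policy_start).astype(int),        # level shift
            "time_after": np.maximum(0, t - policy_start),  # slope change
        })
        return smf.ols("rate ~ time + post + time_after", data=df).fit()

With fabricated numbers, a urine-culture rate drifting upward by about 0.5% per month that flattens after month 16 would yield a negative time_after coefficient, which is the qualitative pattern reported in PUBMED:22700826.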
Instruction: Left hepatic vein: can be sutured and ligated blindly in left hepatectomy? Abstracts: abstract_id: PUBMED:14599942 Left hepatic vein: can be sutured and ligated blindly in left hepatectomy? Objective: To determine whether the anatomic characteristics of the left hepatic vein, middle hepatic vein and common trunk could influence the operation procedures of left hepatectomy. Method: Fifteen fresh human liver specimens were dissected and their anatomic characteristics were recorded. Results: The left hepatic vein and middle hepatic vein formed the common trunk of 1.2+/-0.4 cm in length in the 15 liver specimens. The angle between the left hepatic vein and middle hepatic vein was 91+/-18.3 degree. Conclusion: The left hepatic vein should not be sutured and ligated blindly in left hepatectomy because there might be a potential damage to the middle hepatic vein. abstract_id: PUBMED:34782262 The feasibility of combined resection and subsequent reconstruction of the right hepatic artery in left hepatectomy for cholangiocarcinoma. Background: Combined resection of the right hepatic artery (RHA) is sometimes required to achieve complete resection of hilar cholangiocarcinoma. The present study aimed to evaluate the feasibility of combined resection and subsequent reconstruction by continuous suture of the RHA during left hepatectomy for cholangiocarcinoma. Materials And Methods: We retrospectively compared the outcomes after left hepatectomy with biliary reconstruction for cholangiocarcinoma between patients with and without RHA resection and reconstruction. Results: Of the 25 patients who underwent left hepatectomy combined with biliary reconstruction, eight patients (32%) underwent combined resection and reconstruction of the RHA (AR group). The demographic characteristics were not different between the AR and non-AR groups. The amount of intraoperative bleeding was significantly greater in patients with AR (2350 mL vs. 900 mL, p = 0.017). The prevalence of early complications above grade III in Clavien-Dindo classification and late complications were not significantly different between the AR and non-AR groups. In the AR group, complications directly associated with AR, such as thrombosis or reanastomosis, were not observed. On Kaplan-Meier analysis, recurrence-free survival (p = 0.618) and overall survival (p = 0.803) were comparable between the two groups despite the advanced T stages in the AR group. Conclusions: Combined resection and subsequent reconstruction of the RHA during left-sided hepatectomy is a feasible treatment alternative for cholangiocarcinoma. abstract_id: PUBMED:35767184 Extrahepatic approach for taping the common trunk of the middle and left hepatic veins or the left hepatic vein alone in laparoscopic hepatectomy (with videos). Background: Outflow control is difficult, and techniques required for effectively handling intraoperative hemorrhage during laparoscopic hepatectomy have not previously been adequately reported. Methods: Sixteen patients underwent surgery, of which 15 underwent laparoscopic left hepatectomy and one underwent laparoscopic partial hepatectomy of the caudate lobe. Encircling and taping of the common trunk of the middle (MHV) and left hepatic veins (LHV) was performed in 12 patients, and that of the LHV alone in four patients. Surgical techniques based on anatomical landmarks and histological findings are presented with videos. 
Histological confirmation of the anatomical landmarks for these procedures was performed in fresh cadavers to understand the anatomical structures and layers involved. Results: The median procedure duration was 15 (6-25) minutes. All procedures were performed safely with no major bleeding. Histological findings showed fibrous connective tissue between the tunica adventitia of the inferior vena cava (IVC) and the Laennec's capsule of the liver. The layer of dissection was along the tunica adventitia of the IVC. Conclusions: The surgical techniques for encircling and taping of the common trunk of the MHV and LHV and the LHV alone based on anatomical landmarks were feasible and could allow for efficient outflow control in laparoscopic hepatectomy. abstract_id: PUBMED:30406886 Technique of robotic left hepatectomy: how we approach it. Minimally invasive technique has been adopted as the standard of care in many surgical fields within general surgery. Hepatobiliary surgery, however, is lagging behind due to the complex nature of the operation and concerns of major bleeding. Several centers suggested that inherent limitations of conventional laparoscopy preclude its wide adoption. Robotic technique provides solutions to these limitations. In this study, we report our standardized technique of robotic left hepatectomy. We discuss aspects of robotic hepatectomy and describe our standardized approach for robotic left hepatectomy. A video is attached to this article. A 76-year-old man with a 4.5 cm biopsy-proven hepatocellular carcinoma was taken to the operating room for a robotic left hepatectomy. His past medical and surgical history was only consistent with hypertension and diabetes. Robotic extrahepatic glissonian pedicle approach was applied to gain inflow control. Left hepatic artery and portal vein were individually dissected and isolated prior to division. An intraoperative robotic ultrasound was utilized to ensure negative resection margins. Left hepatic vein was transected intrahepatically using a laparoscopic Endo GIA stapler. Segments 2, 3, and part of 4 were removed. Operative time was 180 min without intraoperative complications. Estimated blood loss was less than 50 cc. The patient was discharged home on postoperative day 3. The use of robotic technology during complex hepatic resections such as left hepatectomy is safe and feasible. This approach provides an alternative technique in minimally invasive liver surgery. abstract_id: PUBMED:28667439 Left hepatectomy after right paramedian sectoriectomy. Repeat hepatectomy is beneficial for selected patients with recurrence of liver malignancies. However, the operative procedure becomes technically demanding when the previous hepatectomy was complex, with hepatic veins and stump of portal pedicles exposed on the liver transection surface. We performed left hepatectomy after right paramedian sectoriectomy (RPMS) for three patients. Here, we describe our surgical technique and the postoperative outcomes achieved. This procedure allowed for safe adhesiolysis between the middle and right hepatic veins by following a fibrous plane. The mean operative time was 8.7 h, including 4.9 h of adhesiolysis. The mean remnant liver volume (right lateral sector and the caudate lobe) was calculated as 704 ml, being 62% of total liver volume. There was no postoperative liver failure or mortality.
In conclusion, left hepatectomy after RPMS is a feasible procedure for patients with sufficient remnant liver volume, even though the middle and right hepatic veins run side by side after liver regeneration. abstract_id: PUBMED:31692300 Modeling the hepatic arterial flow in living liver donor after left hepatectomy and postoperative boundary condition exploration. Preoperative and postoperative hepatic perfusion is modeled with one-dimensional (1-D) Navier-Stokes equations. Flow rates obtained from ultrasound (US) data and impedance resulting from structured trees are the inflow and outflow boundary conditions (BC), respectively. Structured trees terminate at the size of the arterioles, which can enlarge their size after hepatectomy. In clinical studies, the resistance to pulsatile arterial flow caused by the microvascular bed can be reflected by the resistive index (RI), a frequently used index in assessing arterial resistance. This study uses the RI in a novel manner to conveniently obtain the postoperative outflow impedance from the preoperative impedance. The major emphasis of this study is to devise a model to capture the postoperative hepatic hemodynamics after left hepatectomy. To study this, we build a hepatic network model and analyze its behavior under four different outflow impedances: (a) the same as preoperative impedance; (b) evaluated using the RI and preoperative impedance; (c) computed from structured tree BC with increased radius of terminal vessels; and (d) evaluated using structured tree with both increased radius of root vessel, i.e., the outlets of the postoperative hepatic artery, and increased radius of terminal vessels. Our results show that the impedances from both (b) and (d) give a physiologically reasonable postoperative hepatic pressure range, while the RI in (b) allows for a fast approximation of postoperative impedance. Since hemodynamics after hepatectomy are not fully understood, the methods used in this study to explore postoperative outflow BC are informative for future models exploring hemodynamic effects of partial hepatectomy. abstract_id: PUBMED:28529988 Left hepatectomy with simultaneous hepatic artery and portal vein reconstructions in the operation for cholangiocarcinoma: the surgical techniques comprised of step-by-step established procedures. Hepatectomy needing simultaneous reconstruction of the hepatic artery and the portal vein in the operation for cholangiocarcinoma is a challenging procedure. We experienced three cases of left hepatectomy with simultaneous reconstructions of the right hepatic artery (RHA) and the right portal vein (RPV) in all of which the surgical procedures were performed in the same manner. At the initial step of the procedure, we confirmed that the RHA and the RPV at the porta hepatis as well as the proper hepatic artery and the main portal vein (MPV) proximal to the cancer involvement could be controlled by tapes, which meant the cancer could be resected by means of vascular reconstructions. All the vascular reconstructions were performed under loupe magnification. The mean periods of portal and arterial ischemic time of the remnant liver were 14 min. 32 sec. and 35 min. 58 sec., respectively. The mean operative time and the intraoperative blood loss were 627 min. and 804 mL, respectively. No serious postoperative complication occurred. By performing step-by-step well-established procedures, this complicated and challenging operation could be safely completed.
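The hemodynamic modelling abstract above (PUBMED:31692300) leans on the Doppler resistive index, RI = (peak systolic velocity - end-diastolic velocity) / peak systolic velocity, to carry a preoperative outflow impedance over to the postoperative state. The sketch below shows only that bookkeeping step; the proportional scaling rule is an assumption made for illustration and is not the structured-tree impedance formulation actually used in the paper.

    def resistive_index(psv, edv):
        # Doppler resistive index from peak systolic (psv) and
        # end-diastolic (edv) velocities, given in the same units.
        return (psv - edv) / psv

    def scaled_outflow_impedance(z_pre, ri_pre, ri_post):
        # Illustrative, assumed rule: scale the preoperative lumped outflow
        # impedance by the ratio of postoperative to preoperative RI.
        return z_pre * (ri_post / ri_pre)

    ri_pre = resistive_index(psv=60.0, edv=18.0)    # 0.70, made-up velocities
    ri_post = resistive_index(psv=62.0, edv=25.0)   # ~0.60 as the distal bed dilates
    z_post = scaled_outflow_impedance(z_pre=1.0e9, ri_pre=ri_pre, ri_post=ri_post)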
abstract_id: PUBMED:29611092 Usefulness of the Ligamentum Venosum as an Anatomical Landmark for Safe Laparoscopic Left Hepatectomy (How I Do It). Anatomical landmarks are commonly utilized in surgical practice to help surgeons to maintain an anatomical orientation. The ligamentum venosum (LV) is an anatomical landmark that is utilized during left hepatectomy via both the open and the laparoscopic approaches. We describe the usefulness of the LV as an anatomical landmark in performing a safe laparoscopic left hepatectomy. The key characteristic of our technique is that the LV is divided at the end of the surgery. Our technique involves identification and dissection of the LV, but we do not divide it during liver mobilization. The LV marks the boundary for safe vascular inflow control of the left hemiliver. Following exposure of the middle hepatic vein, hepatic parenchymal transection is curved toward the LV, which serves as a landmark to guide surgeons to achieve an optimal plane of transection in the late stages. A suitable transection point of the left bile duct is determined based on the location of the LV. Between February 2013 and September 2017, 21 consecutive patients underwent pure laparoscopic left hepatectomy. The median operation time was 240 min (range 180-350 min), and the median intraoperative estimated blood loss was 200 ml (range 80-600 ml). Major postoperative complications occurred in one patient (4.8%). The median postoperative hospital stay was 8 days (range 5-15 days). This systematic approach using the LV as an anatomical landmark may serve as a safe and effective technique to perform a laparoscopic left hepatectomy. abstract_id: PUBMED:8356502 A direct approach to the left and middle hepatic veins during left-sided hepatectomy. A direct surgical approach to the MHV during left-sided hepatectomy is anatomically feasible. The procedure should be applied with appropriate safety measures. Prior verification of the precise anatomic relationship of the hepatic veins is necessary and the MHV should be isolated and secured by the described stepwise technique. abstract_id: PUBMED:34869557 Comparison of the Safety and Efficacy of Laparoscopic Left Lateral Hepatectomy and Open Left Lateral Hepatectomy for Hepatolithiasis: A Meta-Analysis. Background: Intrahepatic duct (IHD) stones, also known as hepatolithiasis, refers to any intrahepatic stones of the left and right hepatic ducts. It is a benign biliary tract disease with a high recurrence rate, with many complications, and difficulty in radical cure. The aim of this review and meta-analysis is to compare the safety and efficacy of the laparoscopic left lateral hepatectomy (LLLH) and open left lateral hepatectomy (OLLH) for IHD stones. Methods: Pubmed, Embase, Cochrane, WangFang Data, and China National Knowledge Infrastructure were searched for randomized controlled trials (RCTs) regarding the comparison of LLLH and OLLH in the treatment of hepatolithiasis. Standard mean difference (SMD), odds ratio (OR), and 95% CI were calculated using the random-effects model or fixed-effects model according to the heterogeneity between studies. Results: From January 01, 2001 to May 30, 2021, 1,056 articles were retrieved, but only 13 articles were finally included for the meta-analysis. 
The results showed that compared to the OLLH group, LLLH resulted in smaller surgical incision, less intraoperative blood loss, faster postoperative recovery, and fewer postoperative complications (surgical incision: SMD = -3.76, 95% CI: -5.40, -2.12; intraoperative blood loss: SMD = -0.95, 95% CI: -1.69, -0.21; length of hospital stay: SMD = -1.56, 95% CI: -2.37, -0.75; postoperative complications: OR = 0.45, 95% CI: 0.26, 0.78). Conclusions: In the treatment of hepatolithiasis, compared with OLLH, LLLH has the advantages of less intraoperative blood loss, smaller incisions, less postoperative complications, shorter hospital stay, shorter time to first postoperative exhaust, and postoperative ambulation, and rapid postoperative recovery. Answer: No, the left hepatic vein should not be sutured and ligated blindly in left hepatectomy. An anatomical study of human liver specimens indicated that the left hepatic vein and middle hepatic vein often form a common trunk, and there is a risk of potential damage to the middle hepatic vein if the left hepatic vein is blindly sutured and ligated (PUBMED:14599942).
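The hepatolithiasis meta-analysis cited above (PUBMED:34869557) pools continuous outcomes as standardised mean differences (SMD) and binary outcomes as odds ratios under fixed- or random-effects models. The core of such pooling is an inverse-variance weighted average; a minimal fixed-effect version is sketched below with made-up per-study numbers, and the random-effects (DerSimonian-Laird) extension and the review's actual effect sizes are not reproduced.

    import numpy as np

    def fixed_effect_pool(effects, variances):
        # Inverse-variance fixed-effect pooled estimate and its variance.
        w = 1.0 / np.asarray(variances, dtype=float)
        pooled = float(np.sum(w * np.asarray(effects, dtype=float)) / np.sum(w))
        return pooled, float(1.0 / np.sum(w))

    # Hypothetical SMDs for intraoperative blood loss from three studies.
    pooled_smd, pooled_var = fixed_effect_pool([-0.8, -1.1, -0.9],
                                               [0.05, 0.08, 0.06])
    se = np.sqrt(pooled_var)
    ci_95 = (pooled_smd - 1.96 * se, pooled_smd + 1.96 * se)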
Instruction: Cortical mastoidectomy in quiescent, tubotympanic, chronic otitis media: is it routinely necessary? Abstracts: abstract_id: PUBMED:25187749 Myringoplasty with and without Cortical Mastoidectomy in Treatment of Non-cholesteatomatous Chronic Otitis Media: A Comparative Study. Objective: To compare the outcome and success of repair of uncomplicated tympanic membrane perforations with myringoplasty alone and when combined with mastoidectomy. Methods: A prospective study where 40 patients with non-cholesteatomatous chronic suppurative otitis media (CSOM) were recruited during the period of June 2013 to December 2013 from the outpatient clinic of Otorhinolaryngology department, Faculty of medicine, Cairo University. Patients were managed medically and after dryness of their perforations they were operated upon. Twenty patients underwent simple myringoplasty alone and 20 patients underwent myringoplasty with cortical mastoidectomy. Underlay technique with temporalis fascia was done for all patients. Follow-up period was at least 3 months. Results: Hearing improvement was comparable in both groups. There was no significant difference in graft uptake between the myringoplasty alone group (70%) and cortical mastoidectomy group (80%) (P = 0.7). There was no significant difference in ear dryness between the myringoplasty alone group (75%) and cortical mastoidectomy group (90%) (P = 0.4). Conclusion: Mastoidectomy performed in non-cholesteatomatous CSOM in this study gives no statistically significant benefit over simple myringoplasty as regards graft success rate and dryness of the middle ear with comparable hearing outcome. abstract_id: PUBMED:18845036 Cortical mastoidectomy in quiescent, tubotympanic, chronic otitis media: is it routinely necessary? Objective: This study aimed to compare outcomes for mastoidotympanoplasty and for tympanoplasty alone in cases of quiescent, tubotympanic, chronic, suppurative otitis media. Study Design: Single-blinded, randomised, controlled study within a tertiary referral hospital. Methods: Sixty-eight cases were randomly allocated into two groups. In group one, 35 ears underwent type one tympanoplasty along with cortical mastoidectomy. In group two, 33 ears underwent type one tympanoplasty alone. Outcome measures were as follows: perforation closure and graft uptake, hearing improvement, disease eradication, and post-operative complications. Results: There were no statistically significant differences in hearing improvement, tympanic perforation closure, graft uptake or disease eradication, comparing the two groups at three and six months post-operatively. Conclusion: Mastoidotympanoplasty was not found to be superior to tympanoplasty alone over a short term follow-up period. Hence, it may not be necessary to undertake routine mastoid exploration at this stage of disease. abstract_id: PUBMED:37636614 Interlay Type-1 Tympanoplasty with or Without Cortical Mastoidectomy in an Inactive Mucosal Chronic Otitis Media with Large Central Perforation: A Retrospective Comparative Study. Background: Chronic otitis media (COM) is a pathology involving the middle ear cleft characterized by discharging ear and a non-healing perforation in tympanic membrane. Different techniques have been used for closing the perforation but interlay myringoplasty has become popular among surgeons since the past few decades. 
Objectives: To evaluate and compare the success rate of Type-1 interlay tympanoplasty in large tympanic membrane perforation with or without cortical mastoidectomy in terms of graft take-up rate and improvement in hearing outcomes. Materials and methods: A retrospective study over a period of eighteen months with a total of 90 patients subdivided into two groups. Group I of 45 patients underwent Type-1 interlay tympanoplasty alone, and 45 patients in Group II underwent Type-1 interlay tympanoplasty with cortical mastoidectomy. Results: In group I, the mean pre-operative and post-operative pure tone average and air-bone gap were found to be 36.49 ± 4.49 and 29.24 ± 4.39, and 25.11 ± 3.15 and 14.76 ± 3.12, respectively. In group II, the mean pre-operative and post-operative pure tone average and air-bone gap were found to be 35.60 ± 5.27 and 25.96 ± 5.29, and 23.96 ± 3.76 and 13.33 ± 3.38. An independent-sample t-test was performed for the intergroup comparison and was found to be statistically significant (p < 0.005). The graft uptake was 95.5% in group II and 82.2% in group I. Conclusion: Interlay Type-1 tympanoplasty coupled with cortical mastoidectomy gives better results in terms of air-bone gap closure and graft uptake in inactive mucosal COM than interlay Type-1 tympanoplasty alone. Supplementary Information: The online version contains supplementary material available at 10.1007/s12070-023-03781-7. abstract_id: PUBMED:23998025 Myringosclerosis: an indication of a blocked aditus. Tympanoplasty has been the mainstay of treatment in chronic otitis media. In non-cholesteatomatous chronic otitis media, there has been much debate over whether a cortical mastoidectomy is required or not. Creating an aerating mastoidectomy in cases of a blocked aditus ad antrum helps in reducing recurrence. However, the status of the aditus is not always known unless a mastoidectomy is performed. In this study we tried to find out whether there is any clinical clue to a blocked aditus ad antrum by looking at the tympanic membrane. Forty-three cases of cortical mastoidectomy were retrospectively studied in this series. Patency of the aditus ad antrum was analyzed with respect to the presence of myringosclerosis and the status of the middle ear mucosa. In this study, myringosclerosis was found to be significantly associated with a blocked aditus, while no such association was found with the status of the middle ear mucosa. The presence of myringosclerosis may indicate a blocked aditus ad antrum, and performing a cortical mastoidectomy in such cases may help in creating an aerated mastoid, thereby possibly reducing the recurrence rate. abstract_id: PUBMED:37206730 Surgical Efficacy of Mastoidectomy in Chronic Otitis Media: Squamosal Type. Chronic otitis media of the squamosal type is an erosive process which, when confined to the ossicular chain, causes varying degrees of hearing impairment. As the disease progresses to involve surrounding vital structures, it causes various complications such as facial palsy, vertigo and mastoid abscess, which are more common than the other intracranial complications, and requires definitive surgical intervention, i.e., mastoidectomy, at the earliest. In this retrospective study, 60 patients who had been operated on for the squamosal type were analysed for demographics, symptomatology, intraoperative extent of cholesteatoma, type of mastoidectomy done and the graft materials used for reconstruction, and post-operatively for graft uptake and hearing improvement; the results were analysed using the ChOLE classification of cholesteatoma.
Although intact canal wall mastoidectomy had improved postoperative PTA values, there was no significant difference in air-bone gap closure when intact canal wall mastoidectomy was compared to canal wall down mastoidectomy. abstract_id: PUBMED:33487180 Endoscopic epitympanic exploration in mucosal chronic otitis media: is canal wall up mastoidectomy really needed? Objective: To compare endoscopic epitympanic exploration with conventional canal wall up (cortical) mastoidectomy for mucosal chronic otitis media in terms of post-operative outcomes. Methods: Seventy-six patients diagnosed with chronic otitis media (mucosal variety) were randomly assigned to two treatment groups: endoscopic epitympanic exploration and conventional canal wall up (cortical) mastoidectomy. The groups were compared in terms of post-operative anatomical outcomes (graft uptake), middle-ear physiological outcomes (post-operative tympanometry), audiological outcomes (air-bone gap), surgical time, post-operative pain, vertigo, and long-term complications such as retraction pocket and re-perforation. Results: There was a statistically significant difference between the groups in terms of mean air-bone gap at 12 months, surgical time, and median post-operative pain measured at 6 hours (p < 0.05). No statistically significant differences were noted in terms of graft uptake at 1, 3 and 6 months, mean air-bone gap at 3 and 6 months, tympanometry at 3, 6 and 12 months, vertigo at 1 week, or long-term complications. Conclusion: Endoscopic epitympanic exploration resulted in significantly better long-term audiological outcomes, shorter operating time and less pain compared with conventional canal wall up (cortical) mastoidectomy. abstract_id: PUBMED:21195327 Cortical mastoidectomy in surgery of tubotympanic disease. Are we overdoing it? Background And Purpose Of The Study: The role of cortical mastoidectomy in the surgical treatment of the tubotympanic type of otitis media has remained controversial, especially when there is no evidence of active infection. Though the literature is replete with studies for and against the requirement of mastoidectomy, there is a paucity of prospective studies in this regard. A randomized controlled trial was conducted to assess the impact of mastoidectomy in the management of mucosal chronic otitis media. Methods: 62 patients with uncomplicated mucosal chronic otitis media were randomly allotted to two groups of 31 each. Patients in group A underwent tympanoplasty with mastoidectomy and group B underwent tympanoplasty without mastoidectomy. All the patients were followed up for a minimum of three months and results were assessed in terms of graft uptake, hearing improvement and the need for a repeat procedure. Results: No significant difference in outcome was observed between the two groups in any of the parameters compared. The residual air-bone gap was 12.55 ± 12.98 in group A and 12.71 ± 11.54 in group B. The graft uptake rate was 93.55% in group A and 96.775% in group B. Two patients in group A and one patient in group B underwent repeat procedures. Conclusions: There is little evidence in favour of cortical mastoidectomy in surgery of tubotympanic disease. abstract_id: PUBMED:35177155 The effect of post-auricular canal wall down mastoidectomy on the position of the auricle. Objective: This study aimed to investigate the effect of the surgical incision on the auricle position in patients undergoing canal wall down mastoidectomy to treat chronic otitis media.
Methods: Thirty-four patients who had undergone canal wall down mastoidectomy with a post-auricular incision approach were included in the study. Patients who had a previous auricle deformity, who underwent limited mastoidectomy surgery or mastoid obliteration, or who were younger than 18 years of age were excluded. The distances of the upper and middle parts of the auricle to the mastoid were measured. Results: Measurements in the first post-operative year were found to be 13.15 ± 3.59 mm in the upper region and 16.29 ± 5.00 mm in the middle region. It was observed that the auricle approached the mastoid area in both regions. Conclusion: In patients undergoing radical mastoidectomy, the distance between the auricle and the mastoid may decrease, leading to narrowing of the auriculo-cephalic angle. abstract_id: PUBMED:33542746 Surgical Management of Retraction Pockets: Does Mastoidectomy have a Role? Introduction A retraction pocket is a condition in which the eardrum lies deeper within the middle ear. There is no consensus in the literature on its management. Objective To assess the role of mastoidectomy, added to a tympanoplasty, in the management of retraction pockets. Methods A prospective study of patients with retraction pockets referred for surgery. The patients were randomly assigned to two groups: one managed with tympanoplasty and mastoidectomy and the other with tympanoplasty only. The minimum follow-up considered was 12 months. The outcomes were integrity of the eardrum, recurrence, and hearing status. Results This study included 43 patients. In 24 cases retraction occurred in the posterior half of the eardrum, and in 19 patients there was clinical evidence of ossicular interruption. The two treatment groups comprised 21 patients who underwent tympanoplasty with mastoidectomy and 22 patients who had tympanoplasty only. One case in the first group had a recurrence. In 32 cases, patient follow-up was longer than 48 months. The average air-bone gap changed from 22.1 dB to 5 dB. The percentage of air-bone gap improvement was 60% in patients treated with mastoidectomy and 64.3% in those without it (p > 0.5). Conclusion Tympanoplasty and ossiculoplasty should be considered to treat the atelectatic middle ear and ossicular chain interruption. Mastoidectomy as a way to increase air volume in the ear seems to be a paradox; it does not add a favorable prognostic factor to the management of retraction pockets. abstract_id: PUBMED:28924294 Meatal obstruction following canal wall down mastoidectomy. Chronic otitis media (COM) is completely treated with a single procedure in most patients but may require multiple surgeries in some cases. The main goals of open cavity mastoidectomy are to completely eradicate the disease, create a large enough meatus for examination of the mastoid air cells, provide a self-cleaning epithelized dry cavity, and achieve maximum hearing. In this report, we present a very rare case of a patient with COM who underwent revision mastoidectomy in our clinic due to meatal obstruction in the right ear, total sensorineural hearing loss, and pain and tenderness in the postauricular region. Answer: The necessity of cortical mastoidectomy in quiescent, tubotympanic, chronic otitis media (CSOM) is debated in the literature. Several studies have investigated the outcomes of myringoplasty with and without cortical mastoidectomy to determine if the additional procedure offers any significant benefit.
One study found that there was no significant difference in graft uptake or ear dryness between the myringoplasty alone group and the cortical mastoidectomy group, suggesting that mastoidectomy did not provide a statistically significant benefit over simple myringoplasty in non-cholesteatomatous CSOM (PUBMED:25187749). Another study supported this finding, showing no statistically significant differences in hearing improvement, tympanic perforation closure, graft uptake, or disease eradication between mastoidotympanoplasty and tympanoplasty alone (PUBMED:18845036). However, a retrospective study indicated that interlay type-1 tympanoplasty with cortical mastoidectomy resulted in better outcomes in terms of air bone gap closure and graft uptake in inactive mucosal COM compared to tympanoplasty alone (PUBMED:37636614). This suggests that in certain cases, such as those with large central perforations, the addition of cortical mastoidectomy might be beneficial. Other studies have explored the role of mastoidectomy in different contexts. For instance, one study suggested that the presence of myringosclerosis may indicate a blocked aditus ad antrum, and performing a cortical mastoidectomy in such cases may help in creating an aerated mastoid, potentially reducing recurrence rates (PUBMED:23998025). In contrast, another study concluded that there is little evidence in favor of cortical mastoidectomy in the surgery of tubotympanic disease, as no significant difference in outcome was observed between groups with and without mastoidectomy (PUBMED:21195327). In summary, the current evidence suggests that routine cortical mastoidectomy may not be necessary in all cases of quiescent, tubotympanic CSOM. The decision to perform mastoidectomy should be based on individual patient factors, such as the presence of large central perforations or indications of a blocked aditus ad antrum. It is important for clinicians to weigh the potential benefits against the risks and costs associated with the additional surgical procedure.
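Most of the comparisons above are tests of proportions in small groups, for example the 70% vs. 80% graft uptake (20 patients per arm) reported in PUBMED:25187749. Assuming those percentages correspond to 14/20 and 16/20 successes, a minimal sketch of such a comparison is a Fisher exact test on the 2x2 table; the abstracts do not state exactly which test the original authors used, so this is illustrative only.

```python
from scipy.stats import fisher_exact

# Counts reconstructed from the reported percentages (assumption: 70% and 80% of 20 ears).
tympanoplasty_alone = [14, 6]   # graft success, failure
with_mastoidectomy  = [16, 4]   # graft success, failure

odds_ratio, p_value = fisher_exact([tympanoplasty_alone, with_mastoidectomy])
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2f}")  # small samples give wide uncertainty
```

With groups of this size the p-value is far from significance (consistent with the P = 0.7 reported in that study), which is why the answer above stresses that no benefit of the added mastoidectomy could be demonstrated rather than that equivalence was proven.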
Instruction: Can interval appendectomy be justified following conservative treatment of perforated acute appendicitis? Abstracts: abstract_id: PUBMED:36641313 Nationwide management of perforated pediatric appendicitis: Interval versus same-admission appendectomy. Purpose: Although conservative management followed by readmission for interval appendectomy is commonly used to manage perforated appendicitis, many studies are limited to individual or noncompeting pediatric hospitals. This study sought to compare national outcomes following interval or same-admission appendectomy in children with perforated appendicitis. Methods: The Nationwide Readmission Database was queried (2010-2014) for patients <18 years old with perforated appendicitis who underwent appendectomy using ICD9-CM Diagnosis codes. A propensity score-matched analysis (PSMA) utilizing 33 covariates between those with (Interval Appendectomy) and without a prior admission (Same-Admission Appendectomy) was performed to examine postoperative outcomes. Results: There were 63,627 pediatric patients with perforated appendicitis. 1014 (1%) had a prior admission for perforated appendicitis within one calendar year undergoing interval appendectomy compared to 62,613 (99%) Same-Admission appendectomy patients. The Interval Appendectomy group was more likely to receive a laparoscopic (87% vs. 78% same-admission) than open (13% vs. 22% same-admission; p < 0.001) operation. Patients receiving interval appendectomy were more likely to have their laparoscopic procedure converted to open (5% vs. 3%) and receive more concomitant procedures. PSMA demonstrated a higher rate of small bowel obstruction in those receiving Same-Admission appendectomy while all other complications were similar. Although those receiving Interval Appendectomy had a shorter index length of stay (LOS) and lower admission costs, they incurred an additional $8044 [$5341-$13,190] from their prior admission. Conclusion: Patients treated with interval appendectomy experienced more concomitant procedures and incurred higher combined hospitalization costs while still having a similar postoperative complication profile compared to those receiving same-admission appendectomy for perforated appendicitis. Level Of Evidence: III. Type Of Study: Retrospective Comparative Study. abstract_id: PUBMED:35738141 Interval laparoscopic appendectomy after laparotomy drainage for acute appendicitis with abscess: A case report. Introduction: Immediate appendectomy for acute appendicitis with abscess has a high frequency of ileocecal resection and postoperative complications compared with interval appendectomy after conservative treatment. The optimal approach to acute appendicitis with abscess remains controversial. Presentation Of Case: A 69-year-old woman was referred to our hospital for abdominal pain. A computed tomography scan revealed an enlarged abscess around the cecum. The diagnosis was perforated appendicitis with abscess, and conservative treatment was performed. Percutaneous drainage was difficult because the abscess was near the intestinal tract. Because of the persistence of symptoms on the fourth day of hospitalization, laparotomy drainage was performed, and the patient's condition improved afterwards. Colonoscopy was performed on an outpatient follow-up to rule out malignant tumors of the colon. Interval laparoscopic appendectomy was performed 3 months after discharge to prevent appendicitis. The postoperative course was uneventful.
Discussion: For this case of acute appendicitis with abscess, conservative treatment such as antibiotic therapy and laparotomy drainage was performed. Laparotomy drainage enabled us to approach the abscess directly and minimized the risk of its spread into the abdominal cavity compared to the laparoscopic approach. Interval laparoscopic appendectomy was more effective and easier than laparotomy in this case, in which adhesions to the abdominal wall were expected. Conclusion: Conservative treatment approaches, such as drainage and antibiotic therapy, can be first-line for appendicitis with abscesses. Interval laparoscopic appendectomy can be useful to resect the appendix and observe the abdominal cavity. abstract_id: PUBMED:37987979 Interval appendectomy as a safe and feasible treatment approach after conservative treatment for appendicitis with abscess: a retrospective, single-center cohort study. Emergency appendectomy (EA) is the gold standard management for acute appendicitis (AA). However, whether EA or interval appendectomy (IA) after conservative treatment is the optimal approach in AA with abscess remains controversial. This study compared IA and EA in patients presenting with AA accompanied by abscess. This was a retrospective single-center study including 446 consecutive patients undergoing appendectomy between April 2009 and March 2023. AA with abscess was defined as a pericecal abscess observed by computed tomography or abdominal ultrasonography, and patients with signs of peritoneal irritation were excluded. Perioperative outcomes were compared between the patients who directly underwent EA and those who underwent IA after conservative treatment. Among 42 patients (9.4%) with AA and abscess, 34 and 8 patients underwent IA and EA, respectively. The rates of ileocecal resection and postoperative complications were lower in the IA group than in the EA group (3% vs. 50%, P < 0.001 and 9% vs. 75%, P < 0.001, respectively). Colonoscopy before IA was performed in 16 of the 17 patients aged ≥ 40 years in the IA group, and one patient underwent ileocecal resection because of a suspicious neoplasm in the root of the appendix. IA after conservative treatment might be considered a useful therapeutic option for AA with abscess. Colonoscopy during the waiting period between the initial diagnosis and IA should be considered in patients aged ≥ 40 years who may have malignant changes. Implementing IA as a first-line treatment will be beneficial to both patients and healthcare providers. abstract_id: PUBMED:25985296 Perforated appendix with abscess: Immediate or interval appendectomy? Some examples to explain our choice. Introduction: There are no clear guidelines for the treatment of perforated appendicitis associated with a periappendiceal abscess without generalized peritonitis. Presentation Of Cases: We retrospectively studied six examples of treated children in order to discuss the reasons for our team's therapeutic approach. Some children were treated with conservative antibiotic therapy to resolve the acute abdominal pain, planning a routine interval appendectomy after some months. Others, instead, underwent an immediate appendectomy. Discussion: By examining these examples we wanted to highlight how the first approach may be associated with shorter surgery time, fewer overall hospital days, faster refeeding and minor complications.
Conclusion: Our team's therapeutic choice, in the case of perforated appendicitis with an abscess and coprolith, is initial conservative management followed by a routine interval appendectomy performed no later than 4 months after discharge. abstract_id: PUBMED:19691990 Can interval appendectomy be justified following conservative treatment of perforated acute appendicitis? Background: There continues to be controversy about the necessity of interval appendectomy for delayed presentation of acute appendicitis. While recent studies suggest that the risk of recurrent disease is small, the risk of interval appendectomy is also small and does provide histologic identification and usually definitive treatment of the right lower quadrant inflammatory process. Methods: Medical records of 986 adult patients over the age of 13 with appendicitis, gathered from 2002 to 2007 at a major teaching hospital, were retrospectively analyzed. Forty-six patients (5%) were found to have a right lower quadrant abscess or phlegmon, and were managed with intravenous antibiotics. Some patients also underwent percutaneous drainage. These patients were then readmitted 6 to 26 weeks later for an elective laparoscopic interval appendectomy. Results: There were 19 males and 27 females with an average age of 43 years. Ninety-four percent of the appendectomies were completed laparoscopically; 16% of patients were found to have a normal or obliterated appendix on pathologic evaluation and likely did not benefit from interval appendectomy. On the other hand, 84% of patients had persistent acute appendicitis, chronic appendicitis, evidence of inflammatory bowel disease, or neoplasm identified, and likely benefited from surgical appendectomy. Conclusions: Interval appendectomy provides diagnostic and therapeutic benefit to patients who present with a right lower quadrant abdominal inflammatory focus, and should be carefully considered in all adult patients. abstract_id: PUBMED:15286890 Is interval appendectomy necessary after conservative treatment of appendiceal masses? Background: This prospective study was conducted to investigate whether interval appendectomy was necessary after successful conservative treatment of appendiceal masses. Methods: Thirty-seven patients with a diagnosis of appendiceal mass by physical examination and ultrasonography were initially treated conservatively with broad-spectrum antibiotics, anti-inflammatory drugs, and, if required, intravenous fluid treatment. Interval appendectomy was ruled out in 28 patients who responded well to conservative treatment, three of whom were then lost to follow-up. The remaining 25 patients (9 females, 16 males; mean age 25 years; range 17 to 54 years) were monitored for recurrent appendicitis and other causes of appendiceal mass. The mean follow-up period was 35 months (range 6 to 66 months). Results: The mean duration of abdominal symptoms was nine days (range 3 to 20 days). The mean length of hospital stay was 14 days (range 10 to 21 days) in patients who responded to conservative treatment. Recurrent appendicitis developed in three patients (12%; 2 males, 1 female). Two patients who presented with acute appendicitis within six months after discharge and one patient who developed chronic abdominal right lower quadrant pain unresponsive to medical treatment a year after discharge underwent appendectomy. No other complications were seen with conservative treatment.
Conclusion: We do not recommend routine interval appendectomy in patients who benefit from conservative treatment for an appendiceal mass unless recurrent appendicitis develops. abstract_id: PUBMED:22590661 Interval routine appendectomy following conservative treatment of acute appendicitis: Is it really needed? Conservative management of acute appendicitis (AA) is gradually being adopted as a valuable therapeutic choice in the treatment of selected patients with AA. This approach is based on the results of many recent studies indicating that it is a valuable and effective alternative to routine emergency appendectomy. Existing data do not support routine interval appendectomy following successful conservative management of AA; indeed, the risk of recurrence is low. Moreover, recurrences usually exhibit a milder clinical course compared to the first episode of AA. The role of routine interval appendectomy has also been questioned recently, even in patients with AA complicated by plastron or localized abscess formation. Surgical judgment is required to avoid misdiagnosis when selecting a conservative approach in patients with a presumed "appendiceal" mass. abstract_id: PUBMED:35695921 Emergency appendectomy versus elective appendectomy following conservative treatment for acute appendicitis: a multicenter retrospective clinical study by the Japanese Society for Abdominal Emergency Medicine. Purpose: To establish the best treatment strategy for acute appendicitis. Methods: We collected data on 2142 appendectomies performed in 2017 and compared the backgrounds and surgical outcomes of patients who underwent early surgery (ES) (< 48 h) with those managed with non-ES (> 48 h). We performed a risk factor analysis to predict postoperative complications and subgroup analysis to propose a standard treatment strategy. Results: The incidence of postoperative complications was significantly higher in the ES group than in the non-ES group, and significantly lower in the laparoscopic surgery group than in the laparotomy group. Surgical outcomes, including the incidence of postoperative complications, were comparable after acute surgery (< 12 h) and subacute surgery (12-48 h), following antibiotic treatment. The risk factors for postoperative complications in the ES group were a higher age, history of abdominal surgery, perforation, high C-reactive protein level, histological evidence of gangrenous or perforated appendicitis, a long operation time, and intraoperative complications. The risk factors for postoperative complications in the non-ES group were perforation and unsuccessful conservative treatment. Conclusions: Non-early appendectomy is feasible for acute appendicitis but should be applied with care in patients with risk factors for postoperative complications or failure of pretreatment, including diabetes mellitus, abscess formation, and perforation. abstract_id: PUBMED:25543294 Is there truly an oncologic indication for interval appendectomy? Background: The rate of recurrent appendicitis is low following nonoperative management of complicated appendicitis. However, recent data suggest an increased rate of neoplasms in these cases. Methods: The study was a retrospective review of patients with acute appendicitis at 2 university-affiliated community hospitals over a 12-year period. The primary outcome measure was the incidence of appendiceal neoplasm following interval appendectomy. Results: Six thousand thirty-eight patients presented with acute appendicitis.
Appendectomy was performed in 5,851 (97%) patients at the index admission. Of the 188 patients treated with initial nonoperative management, 89 (47%) underwent interval appendectomy. Appendiceal neoplasms were identified in 11 of the 89 (12%) patients. These included mucinous neoplasms (n = 6), carcinoid tumors (n = 4), and adenocarcinoma (n = 1). The rate of neoplasm in patients over age 40 was 16%. Conclusions: There is a significant rate of neoplasms identified in patients over age 40 undergoing interval appendectomy. This should be considered following nonoperative management of complicated appendicitis. abstract_id: PUBMED:35911350 Laparoscopic Versus Open Appendectomy for Patients With Perforated Appendicitis. Introduction Acute appendicitis can lead to perforation, which can be lethal. The present study assessed the outcomes of laparoscopic appendectomy versus open appendectomy in patients with perforated appendicitis. Methodology A comparative study was conducted at the Department of Surgery, Liaquat University of Medical & Health Sciences (LUMHS), between March 2, 2019 and July 7, 2020. The inclusion criteria consisted of a diagnosis of perforated appendicitis. Exclusion criteria consisted of intellectual disability hindering the procurement of informed consent, pediatric patients < 15 years of age, patients with an appendicular mass or abscess unsuited for laparoscopic appendectomy, severe sepsis or septic shock on presentation, and pregnant women. Patients were allocated to either open appendectomy (Group A) or laparoscopic appendectomy (Group B). The data-gathering proforma recorded demographics, surgical findings, operating room (OR) time, length of inpatient care, requirement of analgesics, and any adverse events following surgery. All of the surgeries were conducted by an experienced surgical consultant with at least five years of experience. Results A total of 85 patients were included in the laparoscopic appendectomy group, while 101 cases were included in the open appendectomy group. The use of analgesics thrice a day to manage postoperative pain was significantly associated with open appendectomy (p < 0.0001). Moreover, the postoperative length of hospitalization was substantially greater in patients who underwent open appendectomy than in those who underwent the laparoscopic procedure (p < 0.0001). Wound-related complications were considerably lower in patients who had laparoscopic appendectomy as compared to those who had open appendectomy (23.53% versus 40.5%; p = 0.013). Conclusion The length of stay was significantly lower in patients who underwent laparoscopic appendectomy. Moreover, laparoscopic appendectomy was also associated with a lower rate of wound infection postoperatively, thus giving the former an edge over the latter. Despite the finding that postoperative pain was not considerably different between the two groups, patients in the open appendectomy group required significantly more painkillers to manage their postoperative pain. Answer: Yes, interval appendectomy can be justified following conservative treatment of perforated acute appendicitis. Several studies have provided evidence supporting the use of interval appendectomy in specific circumstances. One study found that interval appendectomy after conservative treatment for appendicitis with an abscess is a safe and feasible treatment approach.
It reported lower rates of ileocecal resection and postoperative complications in the interval appendectomy group compared to the emergency appendectomy group. Additionally, colonoscopy during the waiting period between the initial diagnosis and interval appendectomy should be considered in patients aged ≥ 40 years who may have malignant changes (PUBMED:37987979). Another study suggested that interval appendectomy provides diagnostic and therapeutic benefits to patients who present with a right lower quadrant abdominal inflammatory focus. It found that 84% of patients had persistent acute appendicitis, chronic appendicitis, evidence of inflammatory bowel disease, or neoplasm identified, and likely benefited from surgical appendectomy (PUBMED:19691990). A prospective study indicated that interval appendectomy is not routinely recommended in patients who benefit from conservative treatment for an appendiceal mass unless recurrent appendicitis develops. This study found that only 12% of patients developed recurrent appendicitis after conservative treatment (PUBMED:15286890). Furthermore, a study highlighted that the rate of neoplasms identified in patients over age 40 undergoing interval appendectomy was significant, suggesting that interval appendectomy should be considered following nonoperative management of complicated appendicitis, especially in older patients (PUBMED:25543294). However, it is important to note that the decision to perform an interval appendectomy should be individualized based on the patient's clinical situation, age, and the presence of complicating factors such as abscess or suspicion of neoplasm. Some studies argue against routine interval appendectomy following successful conservative management of acute appendicitis, as the risk of recurrence is low and recurrences usually exhibit a milder clinical course compared to the first episode (PUBMED:22590661). In summary, while interval appendectomy can be justified in certain cases following conservative treatment of perforated acute appendicitis, especially in older patients or those with complicating factors, it may not be necessary for all patients. The decision should be made on a case-by-case basis, considering the potential benefits and risks.
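The nationwide comparison in PUBMED:36641313 relies on a propensity score-matched analysis across 33 covariates. The paper's actual matching specification is not given in the abstract, so the sketch below is only a generic illustration of the idea on synthetic data: estimate each child's probability of receiving interval appendectomy from baseline covariates, then pair each treated patient with the control whose score is closest before comparing outcomes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows are patients, columns are baseline covariates.
n_patients, n_covariates = 2000, 10
X = rng.normal(size=(n_patients, n_covariates))
p_treat = 1.0 / (1.0 + np.exp(-(X[:, 0] - 0.5 * X[:, 1] - 2.0)))  # assignment depends on covariates
treated = rng.binomial(1, p_treat)

# 1) Propensity score: probability of treatment given covariates.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
logit_ps = np.log(ps / (1.0 - ps))

# 2) 1:1 nearest-neighbour matching (with replacement, no caliper) on the logit of the score.
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(logit_ps[control_idx].reshape(-1, 1))
_, pos = nn.kneighbors(logit_ps[treated_idx].reshape(-1, 1))
matched_controls = control_idx[pos.ravel()]

print(f"{len(treated_idx)} treated patients matched to {len(set(matched_controls))} unique controls")
# Post-operative outcomes would then be compared between treated_idx and matched_controls.
```

A real analysis would additionally check covariate balance after matching (e.g., standardized differences) and typically apply a caliper; this sketch omits both for brevity.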
Instruction: Does palatal muscle reconstruction affect the functional outcome of cleft palate surgery? Abstracts: abstract_id: PUBMED:17440366 Does palatal muscle reconstruction affect the functional outcome of cleft palate surgery? Background: This study was designed to compare two-layer palatoplasty (Wardill-Kilner V-Y pushback technique) without intravelar veloplasty versus three-layer palatoplasty (Kriens technique) with intravelar veloplasty with regard to postoperative functional outcome of eustachian tube and velopharyngeal competence. Methods: A prospective cohort study was conducted enrolling 70 patients with nonsyndromic cleft palate (except submucous type of cleft) over a period of 2 years. They were divided into two main groups according to the type of cleft palate: group A (Veau class II) included 32 patients and group B (Veau class I) included 38 patients. In each group, Wardill-Kilner palatoplasty (two-layer repair without intravelar veloplasty) versus Kriens palatoplasty (three-layer repair with intravelar veloplasty) was randomly selected for patients. Results: For the three-layer palatoplasty in both groups, there was a greater tendency for resolution of secretory otitis media in the early postoperative period, less time required for extrusion of the grommet tube, and a lower incidence of recurrent secretory otitis media. The incidence of postoperative velopharyngeal incompetence was greater with two-layer palatoplasty group. The incidence of palatal fistula was greater with three-layer palatoplasty. Conclusions: Palatal muscle reconstruction in cleft palate patients confers better functional results regarding velopharyngeal competence and eustachian tube function. Although the overall incidence of postoperative palatal fistula is within the accepted range, the incidence of fistula is higher in the palatal muscle reconstruction subgroup. Future studies are required that include a larger number of patients. abstract_id: PUBMED:6867168 Levator muscle reconstruction: does it make a difference? Eighty-five children from 6 to 8 months of age underwent palatal reconstruction between 1972 and 1978. Forty had palatal repair without levator reconstruction, and 45 had an intravelar veloplasty. Speech assessment was performed at 2 years after surgery. Any nasal escape or hypernasality, whether consistent or not, was included as abnormal speech. The data revealed that 70 percent (28 of 40) had abnormal speech when no muscle reconstruction was performed compared with 63 percent (24 of 45) after having had an intravelar veloplasty. We conclude that the added operative dissection adds no morbidity to the procedure and that the improved speech results probably justify performing an intravelar veloplasty when doing a palatal repair. abstract_id: PUBMED:29053517 Total Palatal Mobilization and Multilamellar Suturing Technique Improves Outcome for Palatal Fistula Repair. Backgrounds: The success rate of the surgical repair of palatal fistula after palatoplasty is often unsatisfactory. This study is a review of 15 years of single surgeon's experience with the evolution of a reliable surgical technique with high success rate. Methods: This is a retrospective chart review of consecutive cleft cases undergoing repair of palatal fistula from 2000 to 2015. The study included 37 consecutive fistula repair cases with wide elevation and mobilization of the palatal tissues and nasal and oral layer repair. 
Group 1 (n = 20) were treated earlier in the study using either midline, von Langenbeck, or 2-flap palatoplasty with 3-layer suturing. Group 2 (n = 17) were treated through a Dorrance-type incision and additional repair of the oral periosteum for a total of 4-layer suturing. Results: The overall fistula closure rate was 94.6% (90% in group 1 and 100% in group 2). The difference in outcome between the 2 groups was statistically insignificant (P > 0.05). Most patients (83.8%) had concomitant velar muscle retropositioning for treatment of velopharyngeal incompetence. Conclusions: Fistula repair using wide mobilization of the entire palate through previous repair incisions and a multilamellar suturing technique has a very low fistula recurrence rate. Addition of the fourth layer of suturing and the use of a Dorrance-type incision further improves the outcome. This approach provides wide tissue release and access to tissue layers for better repair and tension-free closure. Combining intravelar veloplasty with fistula repair is safe and allows management of the fistula and its possible consequences on palatal function in a single procedure. abstract_id: PUBMED:34261961 Changes in Quality of Life After Secondary Closure of Palatal Defects: Prosthetic Obturation Versus Surgical Reconstruction. Background: The closure of palatal defects after tumor resection or irradiation is performed with either a prosthesis or autogenous tissue; however, there are no clear criteria regarding selection of the method. Thus, this study aimed to investigate the real-world situation and problems of palatal closure using prostheses, and examined patient opinion on how palatal closure using autogenous tissue improved their postoperative quality of life (QOL). Methods: In 5 patients whose palatal defects resulted from treatment for head and neck cancer and were closed with a prosthesis, the palate was closed secondarily with autogenous tissue; a questionnaire on daily life was administered pre- and post-operatively. Results: Functional improvements in terms of speech and eating were achieved in all and in 4 of 5 cases, respectively. In all cases, the QOL was better for palatal closure with autogenous tissue than with the prosthesis. Conclusions: As postoperative QOL was considered to be better when reconstructing the palate with autogenous tissue than with the prosthesis, we recommend actively selecting autogenous tissue for palate reconstruction. abstract_id: PUBMED:37105089 Pronator quadratus musculo-osseous free flap for wide hard palatal defect reconstruction: An anatomical study. Wide hard palate defects include congenital and acquired defects that are six square centimeters or larger in size. Obturator prostheses and autologous soft tissue transfers have been used to reconstruct palatal defects. This study aims to repair wide, hard palatal defects by using a pronator quadratus musculo-osseous free flap to achieve subtotal reconstruction. Seventeen formalin-fixed cadavers were dissected. Free musculo-osseous pronator quadratus flaps were prepared after a 12 cm curvilinear volar skin incision. Standard 30 × 23 mm (690 ± 52.12 mm2) hard palate defects were made by chisels and saws. A subcutaneous tunnel was created between the mandibular edge cross point of the facial vessels and the retromolar trigone through the subcutaneous to the superficial musculoaponeurotic system by dissection.
Area measurements of the pedicle and palate defects were performed by the ImageJ program (National Institutes of Health, Bethesda, MD, USA) on drawings over an acetate layer of materials. Mandibular distances of gonion-facial vessel cross point (a), gonion-gnathion (m), and facial vessels' cross point-retromolar entrance point (h) were measured. Ratios of h/m and a/m were calculated. The mean pronator quadratus area was 2349.39 ± 444.05 mm2, and the arterial pedicle pronator quadratus diameter was 2.32 ± 0.34 mm. The mean pedicle length of the pronator quadratus was 117.13 ± 8.10 mm. Study results showed that musculo-osseous pronator quadratus flaps' bone and muscle parts perfectly fit on the defects in all cadavers. Pronator quadratus musculo-osseous flap is a feasible surgical option for wide, hard palatal defect reconstruction strategies. abstract_id: PUBMED:18586465 Reconstruction of the palatal aponeurosis with autogenous fascia lata in secondary radical intravelar veloplasty: a new method. Velopharyngeal insufficiency in cleft patients with muscular insufficiency detected by nasendoscopy is commonly treated by secondary radical intravelar veloplasty, in which the palatal muscles are reoriented and positioned backwards. The dead space between the retro-displaced musculature and the posterior borders of the palatal bone remains problematic. Postoperatively, the surgically achieved lengthening of the soft palate often diminishes due to scar tissue formation in the dead space, leading to reattachment of the reoriented muscles to the palatal bone and to decreased mobility of the soft palate. To avoid this, the dead space should be restored by a structure imitating the function of the missing palatal aponeurosis. The entire dead space was covered using a double layer of autogenous fascia lata harvested from the lateral thigh, which should allow sufficient and permanent sliding of the retro-positioned musculature. A clinical case of a 9-year-old boy who underwent the operation is reported. Postoperatively, marked functional improvements were observable in speech assessment, nasendoscopy and nasometry. The case reported here suggests that the restoration of the dead space may be beneficial for effective secondary palatal repair. Fascia lata seems to be a suitable graft for this purpose. abstract_id: PUBMED:12193889 The temporalis muscle flap in reconstruction of intraoral defects: an appraisal of the technique. Purpose: The purpose of this article is to review the experience of the authors in the use of the temporalis muscle flap for reconstruction of intraoral defects. Patients And Methods: This is a retrospective review of the use of the temporalis muscle flap for reconstruction of different types of intraoral defects in 8 patients. All patients in this series previously wore obturators as a nonsurgical treatment of their defects. Criteria used to evaluate the results of this technique included flap necrosis, facial nerve deficit, limitation of mandibular range of motion, and cosmetic deformity from scarring of the incision line or from loss of muscle volume in the temporal fossa. The patients were also evaluated for their degree of satisfaction with their speech and mastication with the obturator preoperatively and with the flap postoperatively. This article also reviews the success rates and complications with use of the temporalis muscle flap reported in the English-language literature during the past 14 years. 
Results: All 8 patients in this series had their defects successfully reconstructed, completely eliminating any further need for prosthetic obturation of the defect. There were no incidents of flap necrosis, facial nerve deficit, or long-term changes in mandibular range of motion. Slight temporal hollowing was seen in the first 3 patients. Results of the literature review also showed a high success rate and a low incidence of complications with use of this flap. Conclusions: The temporalis flap is a useful, reliable, and versatile option for reconstruction of moderate to large sized defects. The muscle can provide abundant tissue, with minimal to no functional morbidity or esthetic deformity in the donor site. abstract_id: PUBMED:24531246 Prelaminated calvarial osteofascial flap for palatal reconstruction. Reconstruction of the hard palate defects is among the most challenging problems for plastic surgeons. Prosthetic obturations and local flaps for small defects have been used, whereas numerous regional and free flaps have been described for larger defects. The search for the ideal method offering a natural palatal structure is still ongoing. Five male patients with a mean age of 30.4 years experiencing hard palate defects due to congenital cleft palate or tumor excisions were repaired by prelaminated calvarial osteofascial flap. The mean defect size was 3.14 × 2.48 cm. Both of the surfaces of the calvarial bone elevated with superficial temporal fascia were wrapped with fascia and covered with split-thickness skin graft. The interval between the 2 sessions ranged from 3 to 6 weeks. In the second session, triple layered reconstruction involving the bony layer as well as the oral and nasal mucosa was performed. In 1 case, partial skin loss on the oral surface of the flap was seen in the second session but epithelialized spontaneously. The mean follow-up period was 21.8 months, and no complication such as wound detachment, infection, flap loss, as well as fistula or nasal regurgitation was encountered. A hard palatal reconstruction was performed, offering a natural anatomy in terms of structure and shape. This reliable technique, which is convenient for the three-dimensional reconstruction of the hard palate defects offering a near-normal anatomy owing to its triple layered structure, thickness, and the compatible shape of the calvarial bone to the palate, can be a good alternative against other regional and free flaps. abstract_id: PUBMED:11577468 Palatal aponeurosis and the insertion of the tensor muscle of the soft palate. An anatomic study and clinical applications Introduction: Knowledge of the anatomy of soft palate muscles is of great interest in cleft palate surgery, in surgical correction of obstructive sleep apnea syndrome and in excision of maxillo-facial carcinomas. Some authors described the palatal aponeurosis as the expansion of the tendon of the two tensor veli palatini muscles, others stated that the palatal aponeurosis is a distinct anatomic entity. Method: Ten dissections of the soft palate have been performed to improve our knowledge of its anatomy. Results: The palatal aponeurosis is a distinct anatomic entity continuous with the periosteum of the nasal cavity. The tendon of the tensor veli palatini terminated on the inferior side of the aponeurosis. One fifth of the tensor's tendinous fibers terminated on the posterior border of the palatine bone and the others are spreading on the anterior and inferior side of the palatal aponeurosis. 
Discussion: In cleft palate patients, this aponeurosis is absent, the palatal muscles are attached to the posterior border of the palatine bones. So it seems to be logical to recommend a soft-palate "pushback" to create a new space between the posterior border of the palatine bones and the soft-palate muscles. abstract_id: PUBMED:29613839 Speech in 5-Year-Olds With Cleft Palate With or Without Cleft Lip Treated With Primary Palatal Surgery With Muscle Reconstruction According to Sommerlad. Objective: To evaluate speech in 5-year-olds with cleft palate with or without cleft lip (CP±L) treated with primary palatal surgery in 1 stage with muscle reconstruction according to Sommerlad at about 12 months of age. Design: Retrospective study. Setting: Primary care university hospital. Participants: Eight 5-year-olds with cleft soft palate (SP), 22 with cleft soft/hard palate (SHP), 33 with unilateral cleft lip and palate, and 17 with bilateral CLP (BCLP). Main Outcome Measures: Percent oral consonants correct (POCC), percent consonants correct adjusted for age (PCC-A), percent oral errors, percent nonoral errors, and variables related to velopharyngeal function were analyzed from assessments of audio recordings by 3 independent speech-language pathologists. Results: The median POCC was 75.4% (range: 22.7%-98.9%), median PCC-A 96.9% (range: 36.9%-100%), median percent oral errors 3.4% (range: 0%-40.7%), and median percent nonoral errors 0% (range: 0%-20%), with significantly poorer results in children with more extensive clefts. The SP group had significantly less occurrence of audible nasal air leakage than the SHP and the BCLP groups. Before age 5 years, 1.3% of the children underwent fistula surgery and 6.3% secondary speech improving surgery. At age 5 years, 15% of the total group was perceived as having incompetent velopharyngeal function. Conclusions: Speech was poorer in many children with more extensive clefts. Children with CP±L had poorer speech compared to normative data of peers without CP±L, but the results indicated relatively good speech compared to speech of children with CP±L in previous studies. Answer: Yes, palatal muscle reconstruction does affect the functional outcome of cleft palate surgery. Studies have shown that palatal muscle reconstruction confers better functional results regarding velopharyngeal competence and eustachian tube function. For instance, a study comparing two-layer palatoplasty without intravelar veloplasty to three-layer palatoplasty with intravelar veloplasty found that the latter resulted in a greater tendency for resolution of secretory otitis media in the early postoperative period, less time required for extrusion of the grommet tube, and a lower incidence of recurrent secretory otitis media. Additionally, the incidence of postoperative velopharyngeal incompetence was greater in the group that did not receive palatal muscle reconstruction (PUBMED:17440366). Another study reported that 70 percent of children had abnormal speech when no muscle reconstruction was performed compared with 63 percent after having had an intravelar veloplasty, suggesting that the added operative dissection does not add morbidity and that the improved speech results probably justify performing an intravelar veloplasty during palatal repair (PUBMED:6867168). 
Moreover, combining intravelar veloplasty with fistula repair has been shown to be safe and allows management of the fistula and its possible consequences on palatal function in a single procedure, with a very low fistula recurrence rate (PUBMED:29053517). In summary, the evidence suggests that palatal muscle reconstruction improves the functional outcomes of cleft palate surgery, particularly in terms of velopharyngeal competence and eustachian tube function, which are critical for speech and middle ear health.
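The speech outcomes cited for these studies (e.g., the percent-correct consonant measures POCC and PCC-A in PUBMED:29613839) are computed per child and then compared across cleft-extent groups. As a purely illustrative sketch — the consonant counts below are invented and the abstract does not state which statistical test the authors applied — such percentages can be derived from counts and compared between two groups with a nonparametric test:

```python
from scipy.stats import mannwhitneyu

def percent_consonants_correct(correct, total):
    """POCC-style score: percentage of target oral consonants produced correctly."""
    return 100.0 * correct / total

# Hypothetical (correct, total) consonant counts per child -- illustrative values only.
cleft_soft_palate = [percent_consonants_correct(c, t)
                     for c, t in [(88, 90), (83, 90), (89, 90), (79, 90), (86, 90)]]
bilateral_clp     = [percent_consonants_correct(c, t)
                     for c, t in [(68, 90), (54, 90), (74, 90), (50, 90), (63, 90)]]

stat, p = mannwhitneyu(cleft_soft_palate, bilateral_clp, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```

Because such scores are bounded percentages that are often skewed, rank-based comparisons and medians with ranges (as reported in the abstract) are the usual way to summarize them.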
Instruction: Are electromyographic patterns during gait related to abnormality level of the gait in patients with spastic cerebral palsy? Abstracts: abstract_id: PUBMED:27840431 Are electromyographic patterns during gait related to abnormality level of the gait in patients with spastic cerebral palsy? Purpose: One of the aims of treatment in ambulant cerebral palsy (CP) patients is improvement of gait. The level of gait pathology is assessed by instrumented gait analysis, including surface electromyography. The aim of this study was to investigate the relation between the abnormality level of the gait and the co-contraction of the agonist-antagonist muscles, and the relation between left/right leg symmetry in gait and the symmetry of muscular activity. Methods: Fifty-one patients with cerebral palsy underwent clinical assessment and instrumented gait analysis, including surface electromyography. Signals were bilaterally collected from the rectus femoris, medial and lateral hamstrings, tibialis anterior, lateral gastrocnemius and gluteus maximus. In older children, signals from the soleus and lateral vastus were additionally recorded. Sixteen gait variables were selected to calculate the Gillette Gait Index, separately for the left and right leg. From the EMG envelopes, a series of cross-correlation coefficients was calculated. Results: Weak correlations were found between the averaged agonist-antagonist correlation coefficient and the Gillette Gait Index. Differences between hemiparetic less-involved legs, hemiparetic spastic legs, and diplegic legs were found for co-contraction of the rectus femoris and biceps femoris and for averaged agonist-antagonist co-contraction. Differences between the hemiparetic and diplegic groups were found for some muscle correlation coefficients. Conclusions: The results obtained in this study show that the activity pattern of the leg muscles is specific to a given patient, and the dependence of the kinematic pathology on the abnormal activation pattern is not a direct one. abstract_id: PUBMED:33265919 Gait Indices for Characterization of Patients with Unilateral Cerebral Palsy. As cerebral palsy (CP) is a complex disorder, classification of gait pathologies is difficult. It is assumed that unclassified patients show less functional impairment and less gait deviation. The aim of this study was to assess the different subgroups and the unclassified patients with unilateral CP using different gait indices. The Gillette Gait Index (GGI), Gait Deviation Index (GDI), Gait Profile Score (GPS) and spatiotemporal parameters derived from instrumented 3D gait analysis (IGA) were assessed. Subgroups were defined using morphological and functional classification systems. Regarding the different gait indices, a ranking of the different gait patterns is evident. Significant differences were found between GMFCS levels I and II, between Winters et al. (Winters, Gage, Hicks; WGH) type IV and type I, and for the WGH-unclassified patients. Concerning the spatiotemporal parameters, significant differences in velocity were found between GMFCS levels I and II. The unclassified patients showed mean values for the different gait indices that were comparable to those of established subgroups. Established gait patterns cause different degrees of gait deviation and functional impairment. The unclassified patients do not differ from established gait patterns but do differ from unimpaired gait. Further evaluation using 3D-IGA is necessary to identify the underlying gait pathologies of the unclassified patients.
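The co-contraction analysis described in PUBMED:27840431 correlates agonist and antagonist EMG envelopes over the gait cycle. The abstract does not detail the envelope processing, so the following Python sketch is only a schematic illustration on synthetic envelopes: it resamples two envelopes onto a common 0-100% gait-cycle axis and reports their zero-lag Pearson correlation as a simple co-contraction index.

```python
import numpy as np

def time_normalize(envelope, n_points=101):
    """Resample an EMG envelope onto a fixed 0-100% gait-cycle axis."""
    x_old = np.linspace(0.0, 100.0, num=len(envelope))
    x_new = np.linspace(0.0, 100.0, num=n_points)
    return np.interp(x_new, x_old, envelope)

def cocontraction_coefficient(env_agonist, env_antagonist):
    """Zero-lag correlation of two time-normalized envelopes; values near +1
    indicate largely simultaneous (co-contracting) activity."""
    a = time_normalize(env_agonist)
    b = time_normalize(env_antagonist)
    return np.corrcoef(a, b)[0, 1]

# Synthetic envelopes standing in for rectus femoris and medial hamstrings activity.
t = np.linspace(0.0, 1.0, 500)
rectus_femoris = np.exp(-((t - 0.15) / 0.10) ** 2) + 0.3 * np.exp(-((t - 0.65) / 0.08) ** 2)
med_hamstrings = np.exp(-((t - 0.90) / 0.10) ** 2) + 0.2

print(f"co-contraction coefficient: {cocontraction_coefficient(rectus_femoris, med_hamstrings):.2f}")
```

Per-patient coefficients of this kind can then be related to summary gait indices such as the Gillette Gait Index, which is the type of correlation the study above reports as weak.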
abstract_id: PUBMED:31614496 Gait Classification in Unilateral Cerebral Palsy. As unilateral cerebral palsy represents a complex disorder, gait classification is difficult. Knowledge of the most frequent gait patterns and functional impairment is crucial for proper decision-making. This study analyzes the prevalence of gait patterns as well as the relation between different gait patterns and the Gross Motor Function Classification System (GMFCS). Eighty-nine patients were classified retrospectively using the GMFCS, the classification of Winters, Gage, and Hicks (WGH), and Sutherland et al. The distribution of GMFCS levels among the different gait patterns was analyzed using the Chi-squared test. The most common subtypes were GMFCS level I, WGH type I, and recurvatum knee. Seventeen percent (WGH) and 59% (Sutherland) of the patients did not match any criteria. Applying both classifications complementarily reduced the number of unclassified patients significantly. There was no significant difference concerning the distribution of GMFCS levels or age among the different gait patterns. A combined use of various classification systems is beneficial for proper decision-making. Unclassified patients seem to be a heterogeneous subgroup concerning functional impairment. There is a need for further characterization of the unclassifiable gait patterns and the functional impairment they cause. Instrumented gait analysis remains the gold standard and should be broadly used for future studies and in clinical practice. abstract_id: PUBMED:679531 The assessment of the internal rotation gait in cerebral palsy: an electromyographic gait analysis. A study of 12 cerebral palsied children with internal rotation revealed three patterns of electromyographic activity: (1) Diagnostic pattern--where a single muscle group stood out as the responsible agent--notably the medial hamstrings; (2) Nondiagnostic pattern--nonrecurring pattern; (3) Nondiagnostic pattern--recurring "mass limb reflex" pattern. In all cases, electromyography was useful for: (1) confirmation of clinical impressions. Electromyographic confirmation of phasic hamstring overactivity gives a firm basis for tendon surgery with expectancy of good results. (2) Detection of the responsible muscle group where clinical methods fail to do so. It detects the "at risk" patients, where follow-up with tendon surgery at the appropriate time could be performed with predictable results. (3) Selection of patients who are likely to respond to tendon surgery, and those unlikely to benefit from it. The adductors and internal rotators may play only a secondary role in children whose predominant problem is internal rotation during gait. The medial hamstrings stand out as the most important single muscle group causing this problem. Consequently, it is important to analyze gait problems with the patient walking, and to examine electromyographs recorded during walking in the overall assessment of a patient with a dynamic gait problem. abstract_id: PUBMED:30930827 Gait Pattern Differences Among Children With Bilateral Cerebral Palsy. Background: The positive findings from our previous studies, which revealed the link between postural and gait patterns in children with unilateral cerebral palsy (CP), were very encouraging for recognizing this relationship in children with bilateral cerebral palsy (CP).
Therefore, the objective of this study was to evaluate whether different gait patterns corresponding to postural patterns in children with bilateral CP could be statistically significant according to a cluster analysis. Methods: Fifty-eight participants with bilateral CP and 45 matched children with typical growth and development. The participants walked barefoot along a treadmill at their own pace. Three-dimensional kinematic data were collected using the Measuring System for Motion Analysis. To characterize gait patterns, the Gillette Gait Index (GGI) and its 16 distinct gait parameters were used. The participants were divided into four subgroups according to their postural patterns. Results: A cluster analysis revealed 4 gait patterns corresponding to postural patterns: (1) normal gait pattern corresponded to neutral posture; (2) balanced gait pattern corresponded to balanced posture; (3) lordotic gait pattern corresponded to lordotic postural pattern; (4) swayback gait pattern corresponded to backward-leaning posture. There were significant differences in mean GGI and various clusters in the 8 GGI gait parameters: cadence, mean pelvic tilt; mean pelvic rotation, minimum hip flexion, peak hip abduction in swing; knee flexion at initial contact, and peak dorsiflexion in stance. Conclusion: Our results showed that gait discrepancies among children with bilateral CP were not simply a result of lower limb kinematic deviations in the sagittal plane. Information on different gait patterns could improve early therapy in children with bilateral CP before abnormal gait patterns are fully established. abstract_id: PUBMED:3818706 Gait patterns in spastic hemiplegia in children and young adults. Four homogeneous patterns of gait were defined in forty-six patients who had spastic hemiplegia secondary to cerebral palsy or other neurological disorders by analyzing kinematic data in the sagittal plane and electromyographic data. In Group I (twenty patients) the primary abnormality was a drop foot in the swing phase. The thirteen patients in Group II had a tight heel cord in the stance phase as well as a drop foot in the swing phase. The five patients in Group III also had more proximal involvement (that is, restricted motion of the knee) as well as an equinus deformity of the ankle. In Group IV, the eight patients had, in addition, restricted motion of the hip. abstract_id: PUBMED:21549756 Quantification of dynamic EMG patterns during gait in children with cerebral palsy. Our goal was to simplify the representation and interpretation of surface electromyographic (EMG) activity during gait to develop a clinical method for evaluating gait disabilities in children with cerebral palsy (CP). EMG was recorded from four muscles of a lower extremity. Gait cycles were tracked from one force-sensing resistor signal that was recorded synchronously with EMG. The method is based on the comparison of a patient's dynamic EMG envelope shapes and the normative gait-related patterns (norms). Developed norms were based on EMG data obtained in 10 healthy children. Due to newly introduced techniques for time and amplitude normalization, norms were developed regardless of differences in subject age, gender, basic gait parameters and the EMG measurement process. The proposed gait metric quantifies the similarity between a patient's gait-related patterns and norms by a single global value suitable for gait analysis in general, including a detailed analysis using the 10 partial values. 
The gait metric was experimentally validated with a control group of healthy children and a group of children with CP with different degrees of motor deficits. Gait metric values obtained in children from the control group are high for all muscles, which means that gait-related patterns are close to norms, whereas in children with CP the higher the degree of motor deficit, the lower the gait metric values. The method could be a very useful clinical tool for the recognition and tracking of motor disorders of the lower extremities in children with CP as well as many other neuromotor pathologies. abstract_id: PUBMED:23948331 Categorization of gait patterns in adults with cerebral palsy: a clustering approach. Gait patterns in adults with cerebral palsy have, to our knowledge, never been assessed. This contrasts with the large number of studies which have attempted to categorize gait patterns in children with cerebral palsy. Several methodological approaches have been developed to objectively classify gait patterns in patients with central nervous system lesions. These methods enable the identification of groups of patients with common underlying clinical problems. One method is cluster analysis, a multivariate statistical method which is used to classify an entire data set into homogeneous groups or "clusters". The aim of this study was to determine, using cluster analysis, the principal gait patterns which can be found in adults with cerebral palsy. Data from 3D motion analyses of 44 adults with cerebral palsy were included. A hierarchical cluster analysis was used to subgroup the different gait patterns based on spatiotemporal and kinematic parameters in the sagittal and frontal planes. Five clusters were identified (C1-C5) among which, 3 subgroups were determined, based on spontaneous gait speed (C1/C2: slow, C3/C4: moderate and C5: almost normal). The different clusters were related to specific kinematic parameters that can be assessed in routine clinical practice. These 5 classifications can be used to follow changes in gait patterns throughout growth and aging as well to assess the effects of different treatments (physiotherapy, surgery, botulinum toxin, etc.) on gait patterns in adults with cerebral palsy. abstract_id: PUBMED:38186965 Factors associated with gait efficiency in children with cerebral palsy: association between gait abnormality and balance ability. [Purpose] Children with cerebral palsy require more gait energy than healthy children. The association between gait abnormalities and gait efficiency remains unclear. We investigated the association between gait abnormalities, balance, and maximum step length to determine contributors to gait efficiency in children with cerebral palsy. [Participants and Methods] The study included 33 patients with cerebral palsy, who could walk without the use of walking aids. All participants were instructed to walk for 6 min, and the Total Heart Beat Index was calculated as a measure of walking efficiency. The Edinburgh Visual Gait Score was used to assess gait abnormalities. Additionally, the maximum step length was recorded, and all participants performed the Berg Balance Scale. Correlation analysis and stepwise multiple regression analysis were used to confirm the association between the aforementioned parameters and the Total Heart Beat Index. [Results] The Edinburgh Visual Gait Score was correlated with the heel lift during the stance, knee position during the terminal swing of gait as factors associated with the Total Heartbeat Index. 
The Berg Balance Scale was correlated with turning 360°, standing with feet together. [Conclusion] Our findings emphasize the need for treatment strategies focused on gait abnormalities and balance. abstract_id: PUBMED:28446871 Prevalence of Joint Gait Patterns Defined by a Delphi Consensus Study Is Related to Gross Motor Function, Topographical Classification, Weakness, and Spasticity, in Children with Cerebral Palsy. During a Delphi consensus study, a new joint gait classification system was developed for children with cerebral palsy (CP). This system, whose reliability and content validity have previously been established, identified 49 distinct joint patterns. The present study aims to provide a first insight toward the construct validity and clinical relevance of this classification system. The retrospective sample of convenience consisted of 286 patients with spastic CP (3-18 years old, GMFCS levels I-III, 166 with bilateral CP). Kinematic and kinetic trials from three-dimensional gait analysis were classified according to the definitions of the Delphi study, and one classified trial was randomly selected for each included limb (n = 446). Muscle weakness and spasticity were assessed for different muscle groups acting around the hip, knee, and ankle. Subsequently, Pearson Chi square tests, Cramer's V, and adjusted standardized residuals were calculated to explore the strength and direction of the associations between the joint patterns, and the different patient-specific characteristics (i.e., age, GMFCS level, and topographical classification) or clinical symptoms (muscle weakness and spasticity). Patient-specific characteristics showed several significant associations with the patterns of different joints, but the strength of most identified associations was weak. Apart from the knee during stance phase and the pelvis in the sagittal plane, the results systematically showed that the patterns with "minor gait deviations" were the most frequently observed. These minor deviations were found significantly more often in limbs with a lower level of spasticity and good muscle strength. Several other pathological joint patterns were moderately associated with weakness or spasticity, including but not limited to "outtoeing" for weakness and "intoeing" for spasticity. For the joints in the sagittal plane, significantly stronger associations were found with muscle weakness and spasticity, possibly because most of the evaluated muscles in this study mainly perform sagittal plane motions. Remarkably, the hip patterns in the coronal plane did not associate significantly with any of the investigated variables. Although further validation is warranted, this study contributes to the construct validity of the joint patterns of the Delphi consensus study, by demonstrating their ability to distinguish between clinically relevant subgroups in CP. Answer: Yes, electromyographic (EMG) patterns during gait are related to the abnormality level of the gait in patients with spastic cerebral palsy (CP). A study investigating the relation between the abnormality level of gait and the co-contraction of agonist-antagonist muscles found weak correlations between averaged agonist-antagonist correlation coefficients and the Gillette Gait Index (GGI), which is a measure of gait pathology. 
The study also identified differences in co-contraction between hemiparetic less-involved legs, hemiparetic spastic legs, and diplegic legs, indicating that the activity pattern of leg muscles is specific to a given patient and that the relationship between kinematics pathology and abnormal activation pattern is not direct (PUBMED:27840431). Additionally, EMG has been used to confirm clinical impressions, detect responsible muscle groups where clinical methods fail, and select patients likely to respond to tendon surgery. For instance, in children with internal rotation gait, EMG can identify the most important muscle groups causing the problem, such as the medial hamstrings (PUBMED:679531). Furthermore, EMG patterns have been quantified during gait in children with CP to develop a clinical method for evaluating gait disabilities. The method compares a patient's dynamic EMG envelope shapes with normative gait-related patterns, and the degree of similarity is quantified by a gait metric. This metric has been validated with control groups and children with CP, showing that higher degrees of motor deficit correspond to lower gait metric values (PUBMED:21549756). In summary, EMG patterns during gait are indeed related to the level of gait abnormality in patients with spastic CP, and EMG analysis can provide valuable information for clinical assessment and treatment planning.
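The answer above refers to averaged agonist-antagonist correlation coefficients computed from EMG envelopes and related to the Gillette Gait Index. None of the cited studies publishes code, so the following Python sketch is only a minimal illustration of one plausible way to compute such a per-cycle co-contraction index from already-extracted EMG envelopes; the function and variable names (time_normalize, agonist_cycles, and the synthetic signals) are assumptions for illustration, not material from the abstracts.

```python
import numpy as np

def time_normalize(envelope, n_points=101):
    """Resample one gait cycle's EMG envelope onto a fixed 0-100% cycle axis."""
    envelope = np.asarray(envelope, dtype=float)
    old_x = np.linspace(0.0, 1.0, len(envelope))
    new_x = np.linspace(0.0, 1.0, n_points)
    return np.interp(new_x, old_x, envelope)

def agonist_antagonist_correlation(agonist_cycles, antagonist_cycles):
    """Average Pearson correlation between paired agonist/antagonist envelopes across gait cycles."""
    rs = []
    for ago, anta in zip(agonist_cycles, antagonist_cycles):
        a = time_normalize(ago)
        b = time_normalize(anta)
        rs.append(np.corrcoef(a, b)[0, 1])  # Pearson r for this cycle
    return float(np.mean(rs))

# Hypothetical usage with synthetic envelopes (two gait cycles per muscle pair)
rng = np.random.default_rng(0)
agonist = [rng.random(95), rng.random(110)]      # e.g., medial hamstrings envelopes
antagonist = [rng.random(95), rng.random(110)]   # e.g., quadriceps envelopes
print(agonist_antagonist_correlation(agonist, antagonist))
```

A value near 1 would indicate strongly overlapping (co-contracting) activity; whether and how such an index tracks the Gillette Gait Index would still need to be tested on real gait-laboratory data, as in the cited work.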
Instruction: Are liver transplant recipients protected against hepatitis A and B? Abstracts: abstract_id: PUBMED:23622657 Are liver transplant recipients protected against hepatitis A and B? Background: Liver transplant recipients are at an increased risk for liver failure when infected with hepatitis A virus (HAV) and hepatitis B virus (HBV). Therefore, it is important to vaccinate these individuals. The aim of the study was to evaluate how well liver transplanted patients in our unit were protected against HAV and HBV infection. Furthermore we investigated the vaccination rate and the antibody response to vaccination in these liver transplanted patients. Methods: Patients liver transplanted from January 2007 until August 2010 with a posttransplant check-up during the period March-November 2010 were included (n = 51). Information considering diagnose, date of transplantation, Child-Pugh score, and vaccination were collected from the patient records. Anti-HAV IgG and anti-HBs titers in serum samples were analyzed and protective levels were registered. Results: Of the patients 45% were protected against hepatitis A infection and 29% against hepatitis B infection after transplantation. Only 26% were vaccinated according to a complete vaccination schedule and these patients had a vaccine response for HAV and HBV of 50% and 31%, respectively. An additional 31% received ≥ 1 doses of vaccine, but not a complete vaccination and the vaccine response was much lower among these patients, stressing the importance of completing the vaccination schedule. Conclusion: Even when patients were fully vaccinated, they did not respond to the same degree as healthy individuals. Patients seemed to be more likely to respond to a vaccination if they had a lower Child-Pugh score, suggesting that patients should be vaccinated as early as possible in the course of their liver disease. abstract_id: PUBMED:37265180 High prevalence of hepatitis A and B nonimmunity in pediatric liver transplant recipients. Background: Pediatric liver transplant recipients are at increased risk of post-transplant infections. The purpose of this study was to quantify hepatitis A and B non-immunity based on antibody titers in liver transplant recipients. Methods: We conducted a retrospective chart review of 107 pediatric liver transplant recipients at a single medical center from 2000 to 2017. We compared hepatitis immune patients to non-immune patients and studied response to vaccination in patients immunized post-transplantation. Results: Eighty-one percent of patients had pre-transplant immunity to hepatitis A whereas 68% had pre-transplant immunity to hepatitis B. Post-transplant hepatitis B immunity decreased to 33% whereas post-transplant hepatitis A immunity remained high at 82%. Older age and time since transplantation were significantly associated with hepatitis B non-immunity. Most patients responded to doses post-transplantation with 78% seroconversion following hepatitis A re-immunization and 83% seroconversion following hepatitis B re-immunization. Conclusions: Pediatric liver transplant recipients are at risk of hepatitis A and B non-immunity, particularly with respect to hepatitis B. Boosters post-transplant may improve immunity to hepatitis viruses. abstract_id: PUBMED:29044729 Outcome comparison of liver transplantation for hepatitis A-related versus hepatitis B-related acute liver failure in adult recipients. Hepatitis A virus (HAV) can cause acute liver failure (ALF). 
This study compares outcomes between liver transplantation (LT) for HAV-related ALF (HAV-ALF) and LT for hepatitis B virus (HBV)-related ALF (HBV-ALF). Of 3616 adult LTs performed between January 2005 and December 2014, we performed LT for HAV-ALF recipients (n = 29) and LT for HBV-ALF recipients (n = 34). HAV-ALF group included 18 males and 11 females with mean age of 33.1 years. Graft survival rates in HAV-ALF and HBV-ALF were 65.5% and 88.0% (1 year) and 65.5% and 84.0% (5 years) (P = .048). Patient survival rates in HAV-ALF and HBV-ALF were 69.0% and 88.0% (1 year) and 69.0% and 84.0% (5 years) (P = .09). Multivariate analyses demonstrated that acute pancreatitis and HAV recurrence were independent risk factors of graft and patient survival. Post-transplant outcome was poorer in patients with HAV-ALF than in those with HBV-ALF. This weakens LT's appropriateness in HAV-ALF patients with pancreatitis. HAV recurrence after LT for HAV-ALF is common and often fatal; thus, HAV recurrence should be monitored vigilantly, beginning early post-transplant. abstract_id: PUBMED:31607643 Vaccination in adult liver transplantation candidates and recipients. In patients with chronic liver disease and liver transplant recipients, cirrhosis-associated immune dysfunction syndrome and immunosuppressant drug regimens required to prevent graft rejection lead to a high risk of severe infections, associated with acute liver decompensation, graft loss and increased mortality. In addition to maintain their global health status, vaccination represents a major preventive measure against specific infectious risks of particular concern in this population, such as invasive pneumococcal diseases, influenza or viral hepatitis A and B. However, immunization in this setting raises several issues: i) recommended vaccination schedules rely on sparse immunogenicity data without clinical efficacy and effectiveness trials designed for this specific population; ii) dynamics of immunosuppression makes timing of immunization challenging; iii) live attenuated vaccines are contraindicated after transplantation; and iv) vaccines tolerance is poorly known in cirrhotic patients. This review outlines the rational for vaccination in adult liver transplant candidates and recipients and available data regarding immunization in this specific population. abstract_id: PUBMED:38103788 Assessing vaccine-induced immunity against pneumococcus, hepatitis A and B over a 9-year follow-up in pediatric liver transplant recipients: A nationwide retrospective study. Pediatric liver transplant recipients are particularly at risk of infections. The most cost-effective way to prevent infectious complications is through vaccination, which can potentially prevent infections due to hepatitis B (HBV) virus, hepatitis A virus (HAV), and invasive pneumococcal diseases. Here, we performed a retrospective analysis of HBV, HAV, and pneumococcal immunity in pediatric liver transplant recipients between January 1, 2009, and December 31, 2020, to collect data on immunization and vaccine serology. A total of 94% (58/62) patients had available vaccination records. At transplant, 90% (45/50) were seroprotected against HBV, 63% (19/30) against HAV, and 78% (18/23) had pneumococcal immunity, but immunity against these 3 pathogens remained suboptimal during the 9-year follow-up. A booster vaccine was administered to only 20% to 40% of patients. 
Children who had received >4 doses of HBV vaccine and >2 doses of HAV vaccine pretransplant displayed a higher overall seroprotection over time post-solid organ transplant. Our findings suggest that a serology-based approach should be accompanied by a more systematic follow-up of vaccination, with special attention paid to patients with an incomplete vaccination status at time of transplant. abstract_id: PUBMED:34023475 Immunity to hepatitis A virus in liver transplant recipients: A population-based study in Iran. Background: Acute hepatitis A is usually a self-limited viral disease but can be severe and even fatal in special groups of patients including those with chronic liver disease and recipients of liver transplantation. To take appropriate preventive measures, it is important to determine the immune status against the hepatitis A virus in patients at risk of grave clinical outcomes following infection. To assess the need for immunization against hepatitis A, we aimed to determine the immune status against hepatitis A in a population of liver transplant recipients. We also investigated the association between hepatitis A immune status and demographic factors such as age and sex, underlying liver disease, source of drinking water, geographical area of residence and socioeconomic status. Methods: This cross-sectional study was performed on 242 recipients of allogenic liver transplants at Abu Ali Sina Organ Transplant Hospital in Shiraz, Iran, between January 2017 and April 2017. The level of immunity was assessed using hepatitis A antibody detection kits. Results: The rate of immunity against hepatitis A was detected as 88.8% in our study population. In the multivariable logistic regression model, younger age (OR=1.175, P<0.001) and higher education level (OR=2.142, P=0.040) were the main determinants of non-immune status. However, hepatitis A immunity was independent of gender, monthly family income, water supply source, residential area and underlying liver disorder. Conclusion: Although a significant proportion of liver transplant recipients in this study showed evidence of natural immunity to hepatitis A, a considerable proportion of younger patients and those with a higher level of education were non-immune. The results of this study signify the importance of screening for hepatitis A immunity in this at-risk population of patients and the need for vaccinating non-immune patients. abstract_id: PUBMED:25755386 Evaluation of liver transplant recipients. The outcome of liver transplantation (LT) is dependent on many factors including graft quality, surgical techniques, postoperative care, immunosuppressive regimens and most importantly, careful pre-transplant recipient evaluation and selection. Currently, the expected 1-year and 5-year survival rates after LT are 85-95% and 75-85%, respectively. The improvement in outcomes and better awareness has resulted in an increasing demand for LT around the world including India. Transplant physicians have responded to this increased demand by developing several strategies including the use of older donors, grafts from hepatitis C positive donors or those with previous hepatitis B infection (positive hepatitis B virus [HBV] core immunoglobulin G [IgG] antibody), graft from nonheart beating donors, domino transplantation (liver from patients with familial amyloid polyneuropathy transplanted into older recipients), split-liver grafts, and live donor liver transplant (LDLT).
Currently, the only treatment that prolongs survival in those with end-stage acute or chronic liver failure is transplantation of either partial or full liver donor graft. Because of the enormous disparity in supply and demand for donor organs, costs, and potential morbidity and mortality of live donors in LDLT, it has become incumbent on the transplant community to ration the available organs in a way that provides the best outcomes and in the process, serves the best interest of the population as a whole. When evaluating a potential candidate for LT, it is imperative to determine whether the recipient is going to benefit from the procedure immediately and in the long-term. In this review, we will discuss the process of selection and optimal evaluation of potential LT recipients. abstract_id: PUBMED:16271537 Acute hepatitis A and B in patients with chronic liver disease: prevention through vaccination. Retrospective and prospective studies have demonstrated that the occurrence of acute hepatitis A in patients with chronic liver disease is associated with higher rates of morbidity and mortality than in previously healthy individuals with acute hepatitis A. The mortality associated with acute hepatitis A may be particularly high in patients with preexisting chronic hepatitis C. Although acute hepatitis B in patients with preexisting chronic liver disease is less well studied, worse outcomes than in previously healthy individuals are apparent. However, numerous studies convincingly demonstrate that chronic hepatitis B virus coinfection with hepatitis C virus (or hepatitis D virus) is associated with an accelerated natural history of liver disease and worse outcomes. These observations led to studies that demonstrated the safety and efficacy of hepatitis A and hepatitis B vaccination in patients with mild-to-moderate chronic liver disease. Hepatitis A and B vaccination is less effective in patients with advanced liver disease, especially after decompensation, such as in patients awaiting liver transplantation, and in liver transplant recipients. The emerging lower rates of inherent immunity in younger individuals, higher morbidity and mortality of acute hepatitis A or B superimposed on chronic liver disease, and greater vaccine efficacy in milder forms of chronic liver disease suggest that it is a reasonable policy to recommend hepatitis A and B vaccination in patients early in the natural history of chronic liver disease. abstract_id: PUBMED:19695000 Immunization of liver and renal transplant recipients: a seroepidemiological and sociodemographic survey. Several life-threatening infections, a major risk to adult solid organ transplant (SOT) recipients on immunosuppressive therapy, can be prevented by immunization. We analyzed sociodemographic parameters and the immunization status of adult liver transplant recipients (LTX-R, n=267) and renal transplant recipients (RTX-R, n=197) SOT recipients at the Transplantation Center, Berlin, Germany. Date, number, and provider of recommended vaccines were recorded and seroprotection rates determined. The social status in both groups was similar. Most patients (89%) were not adequately informed about immunizations; and if informed, main sources were physicians (47%) and the media (40%). Vaccinations were predominantly provided by family doctors (LTX-R, 66%; RTX-R, 31%) or hemodialysis centers (RTX-R, 37%). 
Before transplantation, RTX-R had significantly more often received booster vaccinations against tetanus and diphtheria (P<0.005), and a primary hepatitis B immunization (55%); whereas in LTX-R, post-transplant vaccinations against hepatitis A (16%) and pneumococcal disease (13%) were more frequent. Seroprotection rates against tetanus were fairly high in LTX-R (85.3%) and RTX-R (86.8%), and considerably lower for diphtheria, hepatitis A, and influenza. Immunization rates are too low in SOT recipients. Improvement will depend on a more active role of health care providers. abstract_id: PUBMED:10558960 Immunogenicity and safety of hepatitis A vaccine in liver and renal transplant recipients. Organ transplant recipients with chronic hepatitis B or hepatitis C virus infection may be at increased risk of fulminant hepatitis A. Liver transplant (LTX) recipients, renal transplant (RTX) recipients, and healthy controls received 2 doses of hepatitis A vaccine 6 months apart. Anti-hepatitis A virus (anti-HAV) seroconversion after the primary dose occurred in 41% of the LTX patients, 24% of the RTX patients, and 90% of the controls. After the booster dose, the respective rates were 97%, 72%, and 100% (P<.001). RTX patients also had significantly lower geometric mean titers (GMTs) of anti-HAV than LTX patients and controls. In the RTX group, the seroconversion rate and GMT were inversely associated with the number of immunosuppressive drugs received by the patients. The vaccine was well tolerated. Hepatitis A vaccine can be recommended to LTX and RTX patients, but the patients should receive a full course of 2 doses before imminent exposure. Answer: Liver transplant recipients are at an increased risk for liver failure when infected with hepatitis A virus (HAV) and hepatitis B virus (HBV), making vaccination against these viruses important. However, the protection against HAV and HBV in liver transplant recipients varies and can be suboptimal. In one study, only 45% of liver transplant patients were protected against hepatitis A infection and 29% against hepatitis B infection after transplantation. The response to vaccination was also lower than in healthy individuals, with only 50% and 31% vaccine response for HAV and HBV, respectively, among those who were fully vaccinated. The study highlighted the importance of completing the vaccination schedule and suggested that patients with a lower Child-Pugh score were more likely to respond to vaccination, indicating that vaccination should occur as early as possible in the course of liver disease (PUBMED:23622657).
Vaccination remains a critical preventive measure, but the timing and completion of the vaccination schedule are crucial for achieving adequate protection. Additionally, the response to vaccination can be influenced by factors such as the severity of liver disease and the immunosuppressive regimens required post-transplant (PUBMED:31607643).
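The Iranian seroprevalence abstract above (PUBMED:34023475) reports odds ratios from a multivariable logistic regression of hepatitis A immune status on age and education. As a minimal, hedged sketch of that type of analysis (not the authors' code, and with entirely synthetic stand-in data and assumed column names), such a model could be fitted in Python with statsmodels as follows.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: immune = 1 if anti-HAV positive, 0 otherwise.
rng = np.random.default_rng(1)
n = 242
df = pd.DataFrame({
    "age": rng.normal(45, 12, n).round(),
    "education_years": rng.integers(0, 18, n),
})
logit_p = -2.0 + 0.08 * df["age"] - 0.10 * df["education_years"]
df["immune"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

# Multivariable logistic regression, analogous in spirit to the cited analysis.
model = smf.logit("immune ~ age + education_years", data=df).fit(disp=False)
odds_ratios = np.exp(model.params)   # OR per one-unit change in each covariate
conf_int = np.exp(model.conf_int())  # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```

The sign and size of the fitted coefficients here are artifacts of the synthetic data; only the mechanics (fit a logistic model, then exponentiate coefficients to obtain odds ratios) mirror the method described in the abstract.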
Instruction: Small dense low-density lipoprotein in renal transplant recipients: a potential target for prevention of cardiovascular complications? Abstracts: abstract_id: PUBMED:16980076 Small dense low-density lipoprotein in renal transplant recipients: a potential target for prevention of cardiovascular complications? Background: Immunosuppressive therapy is frequently associated with dyslipidemia, which is involved in cardiovascular morbidity and mortality in transplant patients. Beyond classical factors, such as low-density lipoprotein (LDL) cholesterol (LDL-C), qualitative abnormalities of lipoproteins, such as presence of the atherogenic factor, small dense LDL, may be of interest for a cardiovascular risk assessment. This study was designed to explore LDL size in renal transplant recipients in relation to quantitative lipid parameters and apolipoprotein (apo) CIII polymorphism. Methods: Total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), LDL-C, apoA1, apoB, apoCIII, and LDL size were measured in 62 patients of mean age 45 +/- 13 years including 71% men at 2 +/- 0.5 years after renal transplantation. Thirty-two patients received cyclosporine (CsA), while 30 received tacrolimus (FK). ApoCIII Sstl genotype was determined by restriction fragment length polymorphism. Results: The CsA group exhibited higher TC (P = .001), LDL-C (P = .004), non-HDL-C (P = .009), HDL-C (P = .03), apoB (P = .008), and apoCIII (P = .002) levels than the FK group. However, LDL-C (CsA: 3.7 +/- 1.2, FK: 3.0 +/- 0.6 mmol/L) and triglyceride levels (CsA: 1.55 mmol/L, FK: 1.37 mmol/L) were near the normal range in both groups. Allelic frequency of the sparse A2 allele associated with hypertriglyceridemia was 6%, similar to the general population. LDL size, which was comparable in the CsA and FK groups (25.87 +/- 0.89 vs 25.75 +/- 0.62 nm, respectively), inversely correlated with TG/HDL ratio (P = 10^(-4)). Prevalence of small dense LDL (defined as <25.5 nm) was 26% in the CsA group and 33% in the FK group. Conclusion: After LDL-C goal has been achieved, LDL size modulation may be taken into account in order to prevent cardiovascular complications. abstract_id: PUBMED:31890996 Achievement of Low-Density Lipoprotein Cholesterol Targets in CKD. Introduction: We describe the characteristics of patients with moderate/advanced chronic kidney disease (CKD) according to receipt of lipid-lowering therapy (LLT), and whether they achieved low-density lipoprotein cholesterol (LDL-C) targets for high- and very high-risk patients. Methods: CKD-REIN (NCT03381950), a prospective cohort study conducted in 40 nephrology clinics in France, enrolled 3033 patients with moderate (stage G3) or advanced (stage G4/G5) CKD (2013-2016) who had not been on chronic dialysis or undergone kidney transplantation. Data were collected from patients' interviews and medical records. Patients were followed up at 1 year. Results: Among 2542 patients (mean [SD] age 67 [13] years, 34% women) with LDL-C measurements at baseline (mean [SD] LDL-C 2.7 [1.1] mmol/l; cholesterol 4.8 [1.3] mmol/l), 63% were on LLT; 24% were at high (CKD stage G3, no cardiovascular disease [CVD] or diabetes) and 74% at very high (CKD stage G3 with diabetes or CVD, or CKD stage G4/5) cardiovascular risk. Among high-risk patients, 45% of those on statin and/or ezetimibe achieved the LDL-C treatment target (<2.6 mmol/l). Among very high-risk patients, the percentage at goal (<1.8 mmol/l) was 38% for CKD stage G3 and 29% for stage G4/5.
There was a trend toward higher achievement of LDL-C targets with increasing LLT intensity (adjusted odds ratios for moderate vs. low intensity 1.20; 95% confidence interval 0.92-1.56; high vs. low intensity 1.46; 1.02-2.09; Ptrend = 0.036). Conclusion: Many patients with CKD stage G3-G5 who are eligible for LLT are not treated, and those on LLT rarely achieve LDL-C targets. abstract_id: PUBMED:20003032 Low levels of high-density lipoprotein cholesterol: an independent risk factor for late adverse cardiovascular events in renal transplant recipients. Long-term kidney transplant graft and patient survival is often limited by cardiovascular (CV) disease. Risk factors for CV disease such as diabetes, hypertension and elevated low-density lipoprotein levels are well documented; however, the impact of low levels of high-density lipoprotein (HDL) has not been defined. We performed a retrospective chart review of 324 consecutive renal transplant recipients from 2001 to 2007 to correlate baseline HDL levels with major adverse cardiovascular events (MACEs) defined as a composite of new onset CV illness, cerebral vascular events and peripheral vascular disease. A total of 92 MACEs occurred over a total of 1913 patient years of follow-up. Low HDL cholesterol levels were noted in 58.3% of patients. Compared with those with normal HDL levels, a greater percentage of patients with low HDL levels had post-transplant MACEs (20% vs. 60% respectively) and experienced an increased rate of all cause mortality. Sixty-two percent of all MACEs occurred in patients with low HDL levels. In the low HDL group, the odds ratio for experiencing a MACE was 1.92. Therefore, HDL cholesterol may provide an important new therapeutic target to prevent vascular morbidity and mortality following renal transplantation. abstract_id: PUBMED:21528490 Association between moderately oxidized low-density lipoprotein and high-density lipoprotein particle subclass distribution in hemodialyzed and post-renal transplant patients. Disturbances in the metabolism of lipoprotein profiles and oxidative stress in hemodialyzed (HD) and post-renal transplant (Tx) patients are proatherogenic, but elevated concentrations of plasma high-density lipoprotein (HDL) reduce the risk of cardiovascular disease. We investigated the concentrations of lipid, lipoprotein, HDL particle, oxidized low-density lipoprotein (ox-LDL) and anti-ox-LDL, and paraoxonase-1 (PON-1) activity in HD (n=33) and Tx (n=71) patients who were non-smokers without active inflammatory disease, liver disease, diabetes, or malignancy. HD patients had moderate hypertriglyceridemia, normocholesterolemia, low HDL-C, apolipoprotein A-I (apoA-I) and HDL particle concentrations as well as PON-1 activity, and increased ox-LDL and anti-ox-LDL levels. Tx patients had hypertriglyceridemia, hypercholesterolemia, moderately decreased HDL-C and HDL particle concentrations and PON-1 activity, and moderately increased ox-LDL and anti-ox-LDL levels as compared to the reference, but ox-LDL and anti-ox-LDL levels and PON-1 activity were more disturbed in HD patients. However, in both patient groups, lipid and lipoprotein ratios (total cholesterol (TC)/HDL-C, LDL-C/HDL-C, triglyceride (TG)/HDL-C, HDL-C/non-HDL-C, apoA-I/apoB, HDL-C/apoA-I, TG/HDL) were atherogenic. 
The Spearman's rank coefficient test showed that the concentration of ox-LDL correlated positively with HDL particle level (R=0.363, P=0.004), and negatively with TC (R=-0.306, P=0.012), LDL-C (R=-0.283, P=0.020), and non-HDL-C (R=-0.263, P=0.030) levels in Tx patients. Multiple stepwise forward regression analysis in Tx patients demonstrated that ox-LDL concentration, as an independent variable, was associated significantly positively with HDL particle level. The results indicated that ox-LDL and decreased PON-1 activity in Tx patients may give rise to more mildly-oxidized HDLs, which are less stable, easily undergo metabolic remodeling, generate a greater number of smaller pre-β-HDL particles, and thus accelerate reverse cholesterol transport, which may be beneficial for Tx patients. Further studies are necessary to confirm this. abstract_id: PUBMED:37731962 High-Density Lipoprotein Lipidomics and Mortality in CKD. Rationale & Objective: Patients with chronic kidney disease (CKD) have dysfunctional high-density lipoprotein (HDL) particles that lack cardioprotective properties; altered lipid composition may be associated with these changes. To investigate HDL lipids as potential cardiovascular risk factors in CKD, we tested the associations of HDL ceramides, sphingomyelins, and phosphatidylcholines with mortality. Study Design: We leveraged data from a longitudinal prospective cohort of participants with CKD. Setting & Participants: We included participants aged greater than 21 years with CKD, excluding those on maintenance dialysis or with prior kidney transplant. Exposure: HDL particles were isolated using density gradient ultracentrifugation. We quantified the relative abundance of HDL ceramides, sphingomyelins, and phosphatidylcholines via liquid chromatography tandem mass spectrometry (LC-MS/MS). Outcomes: Our primary outcome was all-cause mortality. Analytical Approach: We tested associations using Cox regressions adjusted for demographics, comorbid conditions, laboratory values, medication use, and highly correlated lipids with opposed effects, controlling for multiple comparisons with false discovery rates (FDR). Results: There were 168 deaths over a median follow-up of 6.12 years (interquartile range, 3.71-9.32). After adjustment, relative abundance of HDL ceramides (HR, 1.22 per standard deviation; 95% CI, 1.06-1.39), sphingomyelins with long fatty acids (HR, 1.44; 95% CI, 1.05-1.98), and saturated and monounsaturated phosphatidylcholines (HR, 1.22; 95% CI, 1.06-1.41) were significantly associated with increased risk of all-cause mortality (FDR < 5%). Limitations: We were unable to test associations with cardiovascular disease given limited power. HDL lipidomics may not reflect plasma lipidomics. LC-MS/MS is unable to differentiate between glucosylceramides and galactosylceramides. The cohort was comprised of research volunteers in the Seattle area with CKD. Conclusions: Greater relative HDL abundance of 3 classes of lipids was associated with higher risk of all-cause mortality in CKD; sphingomyelins with very long fatty acids were associated with a lower risk. Altered lipid composition of HDL particles may be a novel cardiovascular risk factor in CKD. Plain-language Summary: Patients with chronic kidney disease have abnormal high-density lipoprotein (HDL) particles that lack the beneficial properties associated with these particles in patients with normal kidney function.
To investigate if small lipid molecules found on the surface of HDL might be associated with these changes, we tested the associations of lipid molecules found on HDL with death among patients with chronic kidney disease. We found that several lipid molecules found on the surface of HDL were associated with increased risk of death among these patients. These findings suggest that lipid molecules may be risk factors for death among patients with chronic kidney disease. abstract_id: PUBMED:21848901 Characteristics of low-density and high-density lipoprotein subclasses in pediatric renal transplant recipients. Renal transplant recipients often suffer from dyslipidemia which is one of the principal risk factors for cardiovascular disease. This study sought to determine characteristics of high-density lipoprotein (HDL) and low-density lipoprotein (LDL) particles and their associations with carotid intima-media thickness (cIMT) in a group of pediatric renal transplant recipients. We also examined the influence of immunosuppressive therapy on measured LDL and HDL particle characteristics. HDL size and subclass distribution were determined using gradient gel electrophoresis, while concentrations of small, dense LDL (sdLDL)-cholesterol (sdLDL-C) and sdLDL-apolipoprotein B (sdLDL-apoB) using heparin-magnesium precipitation method in 21 renal transplant recipients and 32 controls. Renal transplant recipients had less HDL 2b (P < 0.001), but more HDL 3a (P < 0.01) and 3b (P < 0.001) subclasses. They also had increased sdLDL-C (P < 0.01) and sdLDL-apoB (P < 0.05) levels. The proportion of the HDL 3b subclasses was a significant predictor of increased cIMT (P < 0.05). Patients treated with cyclosporine had significantly higher sdLDL-C and sdLDL-apoB concentrations (P < 0.05) when compared with those on tacrolimus therapy. Pediatric renal transplant recipients have impaired distribution of HDL and LDL particles. Changes in the proportion of small-sized HDL particles are significantly associated with cIMT. Advanced lipid testing might be useful in evaluating the effects of immunosuppressive therapy. abstract_id: PUBMED:10232852 Cyclosporin A does not increase the oxidative susceptibility of low density lipoprotein in vitro. Accelerated atherosclerosis is the leading cause of morbidity in renal transplant recipients. The pathogenic mechanisms responsible for the progression of atherosclerosis in renal transplant recipients have not been elucidated. Cyclosporin A (CsA) is an immunosuppressive agent used post-transplant and may contribute to increased oxidative susceptibility of low density lipoprotein (LDL). There is a paucity of data testing the effect of CsA on LDL oxidation. Hence, the aim of this study was to test the effect of in vitro enrichment of LDL with CsA on LDL oxidation. LDL oxidation in presence of different concentrations of CsA was tested using metal-dependent (copper), metal-independent (AAPH) and cell-mediated (macrophages) oxidation systems. In all 3 systems, CsA had no significant effect on LDL oxidation. Also, pre-incubation of LDL with CsA did not affect LDL oxidation and LDL alpha tocopherol levels. Thus, the results of our studies with CsA indicate that it is not a direct pro-oxidant. abstract_id: PUBMED:10412758 Calcineurin inhibitors enhance low-density lipoprotein oxidation in transplant patients.
Background: Our objective was to assess the pro-oxidant status of neoral and tacrolimus in renal transplant patients and monitor the protection provided by vitamin C and vitamin E in normalizing low density lipoprotein (LDL) oxidation lag time of tacrolimus-treated patients. Methods: Plasma LDL was isolated by density gradient ultracentrifugation from renal transplant patients receiving neoral, tacrolimus and tacrolimus with vitamin C and vitamin E. Oxidation was initiated by the addition of CuCl2 at 37 degrees C and monitored at 234 nm over 480 minutes and oxidation lag time was computed. Total antioxidant capacity of serum was measured using the enhanced chemiluminescent method. Results: LDL from tacrolimus-treated patients had significantly lower oxidation lag time and serum antioxidant activity in comparison with neoral-treated patients, and this was particularly significant during the first four months after transplantation. Vitamin C and E supplementation in tacrolimus treated patients provided protection against oxidation and normalized their oxidation lag time. Conclusion: Calcineurin-inhibiting drugs, CsA and tacrolimus, have pro-oxidant activity and they increase the susceptibility of LDL to oxidation. Neoral formulation is fortified with DL-alpha tocopherol and therefore provides protection against oxidation. The present study clearly demonstrates the benefit of giving vitamin C and E supplements to patients taking tacrolimus and this seems to be particularly important during the early period after transplantation. abstract_id: PUBMED:20005364 Effects of cyclosporine-tacrolimus switching in posttransplantation hyperlipidemia on high-density lipoprotein 2/3, lipoprotein a1/b, and other lipid parameters. Objective: In renal transplant recipients, cyclosporine treatment appears to cause more frequent hyperlipidemia than tacrolimus usage. In this study, hyperlipidemic renal transplant recipients who use cyclosporine were investigated for changes in high-density lipoprotein (HDL)-2/3, apolipoprotein (Apo) A1/B, other lipid and biochemical parameters, and body mass index after prospective cyclosporine to tacrolimus switching. Materials And Methods: Fifteen patients, including 9 females of overall mean age of 33.2 +/- 10.7 years and posttransplantation time of 78.06 +/- 42.93 months with a mean body mass index of 23.77 +/- 3.34 kg/m(2), were included if they were nondiabetic, hyperlipidemic, and had undergone renal transplantation between 1992 and 2000, using cyclosporine and candidates for a switch to tacrolimus due to hyperlipidemia. Before switching to tacrolimus and at 12 months of tacrolimus use we studied fasting blood samples for creatinine, uric acid, glucose, triglyceride, Apo A1, Apo B, low-density lipoprotein (LDL), HDL2, HDL3, and total cholesterol. Results: There were no significant differences in creatinine, uric acid, glucose levels, or body mass index before tacrolimus versus 12 months thereafter. It was observed that tacrolimus significantly decreased triglyceride, Apo A1, Apo B, LDL, HDL, and total cholesterol levels (P < .001; P = .006; P = .01; P < .001; P = .03; P ≤ .001, respectively), but had no effect on homocysteine, Apo A1/B, HDL 2, HDL 3, or HDL 2/3 levels (P > .05). Conclusion: Switching from cyclosporine to tacrolimus was associated with a more favorable cardiovascular risk profile by improving hyperlipidemia. abstract_id: PUBMED:16549161 Incidence of cardiovascular risk factors and complications before and after kidney transplantation.
Background: Cardiovascular disease is a leading cause of death after renal transplantation with an incidence considerably higher than that in the general population. The aim of this study was to evaluate the association of atherosclerotic cardiovascular complications and the prevalence of cardiovascular risk factors prior to and following transplantation. Patients And Methods: Atherosclerotic cardiovascular diseases including coronary artery disease, as well as cerebral and peripheral vascular disease, and cardiovascular risk factors pre- and posttransplantation were analyzed in 500 renal transplant recipients between 1988 and 1992. The mean recipient age at transplantation was 45 +/- 12 years, with 58% men and 7% diabetics. Results: Following transplantation 11.7% developed atherosclerotic cardiovascular diseases, the majority being coronary artery disease (9.8%). Comparison of the risk factors before and after transplantation showed the increased prevalence of systemic hypertension to be 67% to 86%, of diabetes mellitus, 7% to 16%, and obesity, with a body mass index > 25 kg/m2, from 26% to 48%, whereas the number of smokers was halved to 20%. The triglycerides decreased significantly (from 235 +/- 144 mg/dL to 217 +/- 122 mg/dL) but the total and high-density lipoprotein (HDL) cholesterol rose significantly (from 232 +/- 65 mg/dL to 273 +/- 62 mg/dL and from 47 +/- 29 mg/dL to 56 +/- 21 mg/dL, respectively). The low-density lipoprotein (LDL) cholesterol increase was insignificant (from 180 +/- 62 mg/dL to 189 +/- 53 mg/dL). Upon univariate analysis, cardiovascular diseases were significantly associated with male gender; age over 50 years; diabetes mellitus (DM); smoking; total cholesterol > 200 mg/dL; LDL cholesterol > 180 mg/dL; HDL cholesterol < 55 mg/dL; fibrinogen > 350 mg/dL; body mass index > 25 kg/m2; and more than two antihypertensive agents per day. The Cox proportional hazards model revealed DM with a relative risk (RR) of 4.3; age > 50 years (RR = 2.7); body mass index > 25 kg/m2 (RR = 2.6); smoking (RR = 2.5); and LDL cholesterol > 180 mg/dL (RR = 2.3) as independent risk factors. Conclusions: The high incidence of cardiovascular disease following renal transplantation is mainly due to a high prevalence and accumulation of classical risk factors before and following transplantation. The treatment of risk factors must be introduced early in the course of renal failure and continued following transplantation. Future prospective studies should evaluate the success of treatment regarding reduction of cardiovascular morbidity and mortality in this high-risk population. Answer: Small dense low-density lipoprotein (sdLDL) has been identified as a potential target for the prevention of cardiovascular complications in renal transplant recipients. Immunosuppressive therapy, which is common in transplant patients, is often associated with dyslipidemia, a known contributor to cardiovascular morbidity and mortality. Beyond classical factors like low-density lipoprotein cholesterol (LDL-C), qualitative abnormalities of lipoproteins, such as the presence of sdLDL, may be important for cardiovascular risk assessment (PUBMED:16980076). In a study exploring LDL size in renal transplant recipients, it was found that LDL size inversely correlated with the triglyceride/high-density lipoprotein (TG/HDL) ratio, and the prevalence of sdLDL was significant in both cyclosporine and tacrolimus groups.
This suggests that after achieving LDL-C goals, modulation of LDL size could be considered to prevent cardiovascular complications (PUBMED:16980076). Moreover, the characteristics of LDL and HDL subclasses in pediatric renal transplant recipients have been studied, revealing that these patients have an impaired distribution of HDL and LDL particles. Changes in the proportion of small-sized HDL particles were significantly associated with increased carotid intima-media thickness (cIMT), a marker of cardiovascular risk. This indicates that advanced lipid testing might be useful in evaluating the effects of immunosuppressive therapy and in managing cardiovascular risk (PUBMED:21848901). Additionally, the effects of switching from cyclosporine to tacrolimus in hyperlipidemic renal transplant recipients showed an improvement in hyperlipidemia and a more favorable cardiovascular risk profile (PUBMED:20005364). This suggests that the choice and management of immunosuppressive therapy can influence lipid profiles and potentially cardiovascular risk. In summary, sdLDL is a potential target for the prevention of cardiovascular complications in renal transplant recipients. Modulating LDL size and managing dyslipidemia through appropriate immunosuppressive therapy and lipid-lowering strategies could be beneficial in reducing cardiovascular risk in this population.
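Several abstracts in this record work with derived lipid ratios (TC/HDL-C, LDL-C/HDL-C, TG/HDL-C), report an inverse correlation between LDL particle size and the TG/HDL ratio, and quote a <25.5 nm cut-off for small dense LDL. The short Python sketch below only illustrates those calculations on hypothetical values; it is not code or data from any cited study, and the Spearman rank correlation is used here purely as an example of a suitable nonparametric test.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical fasting lipid values (mmol/L) and LDL peak particle sizes (nm)
tc  = np.array([5.2, 6.1, 4.8, 5.9, 6.4, 5.0])
hdl = np.array([1.3, 1.0, 1.5, 1.1, 0.9, 1.4])
ldl = np.array([3.1, 3.9, 2.7, 3.6, 4.1, 2.9])
tg  = np.array([1.4, 2.2, 1.1, 1.9, 2.6, 1.2])
ldl_size_nm = np.array([26.2, 25.4, 26.6, 25.7, 25.1, 26.4])

ratios = {
    "TC/HDL-C": tc / hdl,
    "LDL-C/HDL-C": ldl / hdl,
    "TG/HDL-C": tg / hdl,
}

# Rank correlation between LDL size and the TG/HDL ratio
rho, p_value = spearmanr(ldl_size_nm, ratios["TG/HDL-C"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")

# Small dense LDL flag using the cut-off quoted in PUBMED:16980076 (<25.5 nm)
sd_ldl = ldl_size_nm < 25.5
print("Prevalence of small dense LDL:", sd_ldl.mean())
```

With real patient data, the same few lines would reproduce the kind of ratio-based description and size/ratio correlation reported above; the hypothetical numbers only show the mechanics.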
Instruction: Establishment of reference values for novel urinary biomarkers for renal damage in the healthy population: are age and gender an issue? Abstracts: abstract_id: PUBMED:23648635 Establishment of reference values for novel urinary biomarkers for renal damage in the healthy population: are age and gender an issue? Background: Recently, a lot of research has focused on the discovery of novel renal biomarkers. Among others, the urinary kidney injury molecule 1 (KIM-1) and neutrophil gelatinase-associated lipocalin (NGAL) have been proven to be promising biomarkers in a wide variety of renal pathologies. However, little is known about the normal concentrations in urine of healthy subjects. Therefore, the goal of our study is to establish reference values for urinary KIM-1, NGAL, N-acetyl-β-D-glucosaminidase (NAG), and cystatin C in a healthy population, taking into account possible effects of age and gender. Methods: We collected urine samples from 338 healthy, nonsmoking subjects between 0 and 95 years old. Subjects with elevated α1-microglobulin values were excluded. Next to the urinary concentrations of KIM-1, NGAL, NAG, and cystatin C, we measured urinary creatinine and specific gravity to correct for urinary dilution. The possible effect of age and gender on the four urinary biomarkers was investigated, and the reference values were established. Results: For the absolute urinary concentrations of the biomarkers, age had a significant effect on all the biomarkers, except for cystatin C, whereas gender significantly affected all four of them, except for NAG. The normalization of biomarkers for creatinine and specific gravity had an effect on the correlation between the biomarkers on one hand and age and gender on the other. Conclusions: In conclusion, age and gender had different effects on KIM-1, NGAL, NAG, and cystatin C. Based on this knowledge, age- and gender-specific reference values for KIM-1, NGAL, NAG, and cystatin C were established. abstract_id: PUBMED:22395790 Evaluation of novel urinary renal biomarkers: biological variation and reference change values. A number of novel urinary biomarkers have been identified and partially qualified for use as markers for renal injury in rats. We used two multiplex assays for these novel biomarkers to quantify biomarker concentration in serial urine collections from rats of both sexes administered varying concentrations of cisplatin. From these data, we calculate inter-individual variation and reference ranges from predose animals and intra-individual variation and reference change values from undosed control animals. The biomarkers evaluated are albumin, α-glutathione S-transferase, glutathione S-transferase-yb1, lipocalin-2, kidney injury molecule-1, osteopontin, and renal papillary antigen 1. For any creatinine-corrected novel biomarkers, we found intra-individual variation to be no greater than 44% and inter-individual variation to be no greater than 46%. Reference change values for most corrected analytes (except osteopontin) were 50-100%, indicating that a >100% increase in analyte concentration between serial samples would be unlikely to be associated with inherent analytical or biological variation. abstract_id: PUBMED:34377388 Reference values of glomerular filtration rate for healthy adults in southern China: a cross-sectional survey. Background: Currently, the global data on the glomerular filtration rate of healthy adults are insufficient, with relatively little data for other races and countries.
In China in particular, such figures are lacking. Methods: In this cross-sectional study, we included healthy Han adults in southern China. Participants completed a lifestyle and medical history questionnaire and had their blood pressure measured, and blood and urine samples collected. Serum creatinine was measured and used to estimate glomerular filtration rate (eGFR) by the Modification of Diet in Renal Disease (MDRD) and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) formulae. The normal range of eGFR is described, and the influence of gender and age on eGFR is analyzed statistically. Results: This study provides the largest sample size for eGFR research in China to date. The mean age of the 20,930 healthy individuals was 40.9 ± 12.3 years, and 58.8% were women. The eGFRMDRD for women and men were 111.3 ± 17.4 mL/min per 1.73 m2 and 103.3 ± 15.9 mL/min per 1.73 m2, respectively. The eGFRCKD-EPI for women and men were 110.3 ± 12.1 mL/min per 1.73 m2 and 103.8 ± 13.3 mL/min per 1.73 m2, respectively. The eGFRMDRD of women and men in all age groups decreased continuously by 7.3 ml/min/1.73 m2/decade and 4.4 ml/min/1.73 m2/decade, respectively. The eGFRCKD-EPI of women and men in all age groups decreased continuously by 8.4 ml/min/1.73 m2/decade and 6.9 ml/min/1.73 m2/decade. Conclusions: The eGFR of women is higher than that of men, and with increasing age the eGFR of women declines faster than that of men. abstract_id: PUBMED:22420102 BNP and NT-proBNP: reference values and cutoff limits. Natriuretic peptides, particularly BNP and NT-proBNP, are increasingly used as a screening test in patients with symptoms suggestive of heart failure (HF). Due to their high negative predictive values, natriuretic peptide determinations make it possible to exclude chronic HF with great certainty and to identify patients for whom echography is not necessary. These biomarkers are also useful for diagnostic purposes, high plasma levels being related to an increased risk of cardiovascular hospitalisation and death. Risk stratification in patients with HF symptoms is based on "low" and "high" cut-off limits, for which different values have been proposed. The aim of this paper is to discuss the delineation of the decision limits and the intermediate grey zone in comparison to NT-proBNP reference values obtained in a representative group of subjects living in the Liège area (Belgium). Data were analysed in relation to age and gender, two of the main parameters influencing the natriuretic peptide plasma levels. abstract_id: PUBMED:26616147 Kidney injury biomarkers and urinary creatinine variability in nominally healthy adults. Environmental exposure diagnostics use creatinine concentrations in urine aliquots as the internal standard for dilution normalization of all other excreted metabolites when urinary excretion rate data are not available. This is a reasonable approach for healthy adults as creatinine is a human metabolite that is continually produced in skeletal muscles and presumably excreted in the urine at a stable rate. However, creatinine also serves as a biomarker for glomerular filtration rate (efficiency) of the kidneys, so undiagnosed kidney function impairment could affect this commonly applied dilution calculation. The United States Environmental Protection Agency (US EPA) has recently conducted a study that collected approximately 2600 urine samples from 50 healthy adults, aged 19-50 years old, in North Carolina in 2009-2011.
Urinary ancillary data (creatinine concentration, total void volume, elapsed time between voids), and participant demographic data (race, gender, height, and body weight) were collected. A representative subset of 280 urine samples from 29 participants was assayed using a new kidney injury panel (KIP). In this article, we investigated the relationships of KIP biomarkers within and between subjects and also calculated their interactions with measured creatinine levels. The aims of this work were to document the analytical methods (procedures, sensitivity, stability, etc.), provide summary statistics for the KIP biomarkers in "healthy" adults without diagnosed disease (distribution, fold range, central tendency, variance), and to develop an understanding as to how urinary creatinine level varies with respect to the individual KIP proteins. Results show that new instrumentation and data reduction methods have sufficient sensitivity to measure KIP levels in nominally healthy urine samples, that linear regression between creatinine concentration and urinary excretion explains only about 68% of variability, that KIP markers are poorly correlated with creatinine (r(2) ∼ 0.34), and that statistical outliers of KIP markers are not random, but are clustered within certain subjects. In addition, we interpret these new adverse outcome pathways based in vivo biomarkers for their potential use as intermediary chemicals that may be diagnostic of kidney adverse outcomes to environmental exposure. abstract_id: PUBMED:34438575 Urinary Biomarkers of Renal Injury KIM-1 and NGAL: Reference Intervals for Healthy Pediatric Population in Sri Lanka. Emerging renal biomarkers (e.g., kidney injury molecule-1 (KIM-1) and neutrophil gelatinase-associated lipocalin (NGAL)) are thought to be highly sensitive in diagnosing renal injury. However, global data on reference intervals for emerging biomarkers in younger populations are lacking. Here, we aimed to determine reference intervals for KIM-1 and NGAL across a pediatric population in Sri Lanka; a country significantly impacted by the emergence of chronic kidney disease of unexplained etiology (CKDu). Urine samples were collected from children (10-18 years) with no prior record of renal diseases from the dry climatic zone of Sri Lanka (N = 909). Urinary KIM-1 and NGAL concentrations were determined using the enzyme-linked immunosorbent assay (ELISA) and adjusted to urinary creatinine. Biomarker levels were stratified by age and gender, and reference intervals derived with quantile regression (2.5th, 50th, and 97.5th quantiles) were expressed at 95% CI. The range of median reference intervals for urinary KIM-1 and NGAL in children were 0.081-0.426 ng/mg Cr, 2.966-4.850 ng/mg Cr for males, and 0.0780-0.5076 ng/mg Cr, 2.0850-3.4960 ng/mg Cr for females, respectively. Renal biomarkers showed weak correlations with age, gender, ACR, and BMI. Our findings provide reference intervals to facilitate screening to detect early renal damage, especially in rural communities that are impacted by CKDu. abstract_id: PUBMED:9821699 Values for urinary beta 2-microglobulin and N-acetyl-beta-D-glucosaminidase in normal healthy infants. Background: Measuring urinary beta 2 microglobin (B2M) and N-acetyl-beta-D-glucosaminidase (NAG) excretion is widely used as a valuable clinical tool in assessing renal tubular lesions. However, few data are available on normal values for urinary excretion of B2M and NAG in infancy. Methods: Urinary B2M and NAG were measured in healthy infants. 
The logarithmic values of urinary B2M, NAG, B2M/creatinine ratio and NAG/creatinine ratio were distributed almost normally and reference ranges were calculated from the logarithms of the observed values. Results: The levels of urinary B2M and B2M/creatinine ratio were highest in the 1-month-old group, followed by a decrease during the first 3 months. Urinary B2M excretions in the 3-month-old group showed rather lower levels than those of the 12-month-old and 36-month-old groups. Although urinary NAG excretions were almost constant throughout all groups, urinary NAG/creatinine ratio decreased gradually until 3 years of age. Conclusions: We suggest that these reference ranges are of importance in evaluating tubular damage due to a variety of renal diseases in infancy. abstract_id: PUBMED:22246543 Evaluation of novel urinary renal biomarkers with a cisplatin model of kidney injury: effects of collection period. A number of novel urinary biomarkers have been identified and partially qualified for use as markers for renal injury in rats. To date, all evaluation studies have been made using 18 to 24 hour collection periods. However, shorter, more welfare friendly, urine collection periods are also used in industry. In this article, we quantify urinary biomarker concentration in serial paired sequential short and long urine collections from male rats administered varying concentrations of cisplatin. We calculate the rate of biomarker excretion in normal animals for both collection periods and the bias and correlation in urinary biomarker concentration between collection periods in dosed and control animals, and we estimate the level of agreement in biomarker concentration between both collection periods. We conclude that although there are minor differences in the concentration of some urinary biomarkers that are dependent upon the time and duration of collection, shorter collection protocols do not influence subsequent interpretation of normalized urinary biomarker data for most biomarkers. abstract_id: PUBMED:20505955 Reference values for serum creatinine in children younger than 1 year of age. Reliable reference values of enzymatically assayed serum creatinine categorized in small age intervals are lacking in young children. The aim of this study was to determine reference values for serum creatinine during the first year of life and study the influence of gender, weight and height on these values. Serum creatinine determinations between 2003 and 2008 were retrieved from the hospital database. Strict exclusion criteria ensured the selection of patients without kidney damage. Correlation analysis was performed to evaluate the relation between height, weight and serum creatinine; the Mann-Whitney test was used to evaluate the relation between gender and serum creatinine. A broken stick model was designed to predict normal serum creatinine values. Mean serum creatinine values were found to decrease rapidly from 55 micromol/L on day 1 to 22 micromol/L in the second month of life; they then stabilized at 20 micromol/L until the seventh month, followed by a slight increase. No significant relation was found between serum creatinine and gender, weight and height. We present here reference values of serum creatinine in infants not at risk of decreased renal function. The absence of a relationship with gender, weight and height confirms that height-based equations to estimate glomerular filtration rate are less useful in patients of this age group. 
abstract_id: PUBMED:22491933 Evaluation of novel renal biomarkers with a cisplatin model of kidney injury: gender and dosage differences. A number of novel urinary biomarkers have been identified and partially qualified for use as markers for renal injury in rats. We use two novel multiplex assays to quantify biomarker concentration in multiple urine collections made prior to and following administration of cisplatin, a common nephrotoxicant, to rats. We investigate the correlation of the magnitude of biomarker changes with the severity of histopathological observations and explore the relationship of these to both dose and sex. The novel biomarkers evaluated are urinary albumin, alpha glutathione S-transferase (α-GST), glutathione S-transferase-yb1 (GSTYb1), lipocalin-2, kidney injury molecule-1 (KIM-1), osteopontin, and renal papillary antigen 1 (RPA-1) and plasma cystatin C, alongside the traditional biomarkers of plasma urea, creatinine, and urinary N-acetyl-beta-D-glucosaminidase (NAG), total protein, and glucose. We show for all time points, and for almost all doses, that male rats consistently had either more severely graded or a higher incidence of histologically observed lesions than females; that changes in urinary glucose, total urinary protein, NAG, and the novel urinary biomarkers albumin, osteopontin, and KIM-1 are clearly temporally associated; and that changes are related to the severity of injury. We also found that, on receiver operating characteristic curve analysis, the area under the curve is significantly higher for all new biomarkers than for urea or creatinine, except for α-GST, GSTYb1, cystatin C, and total protein, in both sexes. Answer: The establishment of reference values for novel urinary biomarkers for renal damage in the healthy population does indeed take into account age and gender as significant factors. Research has shown that age has a significant effect on all the biomarkers except for cystatin C, while gender significantly affects all four biomarkers except for N-acetyl-β-D-glucosaminidase (NAG). Normalization of biomarkers for creatinine and specific gravity also impacts the correlation between the biomarkers and age and gender. Consequently, age- and gender-specific reference values for urinary biomarkers such as kidney injury molecule 1 (KIM-1), neutrophil gelatinase-associated lipocalin (NGAL), NAG, and cystatin C have been established (PUBMED:23648635). Additionally, in the context of glomerular filtration rate (eGFR), it has been observed that the eGFR of women is higher than that of men, and with increasing age, the eGFR of women declines faster than that of men (PUBMED:34377388). This indicates that gender and age are important considerations when evaluating renal function through eGFR. For pediatric populations, reference intervals for urinary biomarkers like KIM-1 and NGAL have been determined, with findings showing weak correlations with age, gender, albumin-to-creatinine ratio (ACR), and body mass index (BMI) (PUBMED:34438575). This suggests that while age and gender are considered, their impact may vary across different age groups and biomarkers. In studies involving rats, intra-individual variation for creatinine-corrected novel biomarkers was found to be no greater than 44%, and inter-individual variation no greater than 46%.
Reference change values for most corrected analytes were 50-100%, indicating that a greater than 100% increase in analyte concentration between serial samples would likely not be due to inherent analytical or biological variation (PUBMED:22395790). Overall, the evidence suggests that both age and gender are important factors that influence the reference values of urinary biomarkers for renal damage, and they must be considered when establishing reference ranges for healthy populations.
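Illustrative note (not part of the record above): the record repeatedly relies on eGFR values computed with the MDRD and CKD-EPI creatinine equations but never states the formulas themselves. The minimal Python sketch below implements the commonly published forms of both equations (the IDMS-traceable four-variable MDRD study equation and the 2009 CKD-EPI creatinine equation). The coefficients come from those widely cited published equations rather than from the abstracts, so the exact variant used by the cited study is an assumption.

```python
def egfr_mdrd(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """IDMS-traceable 4-variable MDRD study equation (mL/min per 1.73 m^2)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr


def egfr_ckd_epi_2009(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """CKD-EPI 2009 creatinine equation (mL/min per 1.73 m^2)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr


# Hypothetical example: a 41-year-old woman with serum creatinine 0.70 mg/dL
print(round(egfr_mdrd(0.70, 41, female=True), 1))
print(round(egfr_ckd_epi_2009(0.70, 41, female=True), 1))
```

For this hypothetical input the two equations give roughly 92 and 108 mL/min per 1.73 m2, the same order of magnitude as the cohort means reported in the record.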
Instruction: Temporary tricuspid valve detachment for ventricular septal defect closure: is it worth doing it? Abstracts: abstract_id: PUBMED:27777518 Long-Term Follow-Up Study of Temporary Tricuspid Valve Detachment as Approach to VSD Repair without Consequent Tricuspid Dysfunction. Temporary tricuspid valve detachment improves the operative view of certain congenital ventricular septal defects (VSDs), but its long-term effects on tricuspid valve function are still debated. From 2002 through 2012, we performed a prospective study of 68 children (mean age, 1.28 ± 1.01 yr) who underwent transatrial closure of VSDs following temporary tricuspid valve detachment. Sixty patients had conoventricular and 8 had mid-muscular VSDs. All were in sinus rhythm. Seventeen patients had systemic pulmonary artery pressures. Preoperative echocardiograms showed trivial-to-mild tricuspid regurgitation in 62 patients and tricuspid dysplasia with severe regurgitation in 6 patients. Patients were clinically and echocardiographically monitored at 30 postoperative days, 3 months, 6 months, every 6 months thereafter for the first 2 years, and then once a year. No in-hospital or late death was observed at the median follow-up evaluation of 5.9 years. Mean intensive care unit and hospital stays were 1.6 ± 1.1 and 7.3 ± 2.7 days, respectively. Residual small VSDs occurred in 3 patients, and temporary atrioventricular block in one. After VSD repair, 62 patients (91%) had trivial or mild tricuspid regurgitation, and 6 moderate. Five of the latter had severe tricuspid regurgitation preoperatively and had undergone additional tricuspid valve repair during the procedure. The grade of residual tricuspid regurgitation remained stable postoperatively, and no tricuspid stenosis was documented. All patients were in New York Heart Association class I at follow-up. Temporary tricuspid valve detachment is a simple and useful method for a complete visualization of certain VSDs without incurring substantial tricuspid dysfunction. abstract_id: PUBMED:12638669 Tricuspid valve detachment in closure of congenital ventricular septal defect. From January 1991 through December 2001, 600 patients underwent closure of a perimembranous ventricular septal defect through a right atrial approach at our institution. In 122 of these patients, the operation included temporary detachment of a tricuspid valve septal leaflet from the annulus to allow complete visualization of a perimembranous ventricular septal defect. The mean age of the patients at surgery was 4.6 years in those who underwent leaflet detachment and 4.7 years in the 478 patients who did not (P > 0.05). Preoperatively, all patients were in sinus rhythm. Echocardiography showed trivial tricuspid regurgitation in 21 of the patients undergoing detachment and in 39 of the non-detachment patients. There was no difference in bypass time or aortic cross-clamp time between the 2 groups. Postoperatively, 3 patients in the non-detachment group had heart block; all other patients were in sinus rhythm. Echocardiograms on the 7th postoperative day showed small residual ventricular septal defects in none of the patients who underwent valve detachment and in 10 of the non-detachment patients; mild tricuspid regurgitation was present in 12 non-detachment patients only; and trivial tricuspid regurgitation was present in 19 patients who underwent valve detachment and in 29 who did not. There was no hospital death in either group.
Long-term follow-up showed no progression of tricuspid regurgitation or tricuspid stenosis. All patients remained in sinus rhythm. This study suggests that tricuspid valve detachment is a safe, effective technique that improves exposure for ventricular septal defect repair and does not adversely affect valve competence. abstract_id: PUBMED:32675928 Assessment of Tricuspid Valve Detachment Efficiency for Ventricular Septal Defect Closure: A Retrospective Comparative Study. Background: The aim of this study was to investigate the efficiency of tricuspid valve detachment (TVD) during the surgical treatment of perimembranous ventricular septal defects (VSDs) and to compare the early and mid-term results with those of patients without TVD in terms of tricuspid insufficiency. Methods: A total of 170 patients who had undergone surgical closure of perimembranous VSDs between November 2012 and January 2019 were included in this study, of whom 50 had an additional TVD procedure during the surgery. All patients were examined by transthoracic echocardiography before and after the operation at regular intervals, and the tricuspid valve function was then evaluated. Results: There was no significant difference between subgroups with an unchanging degree of TVR; however, the result was also similar among those who had a decreased degree of TVR at any level (p = 0.271, p = 0.451). At the end of the study, all patients were in New York Heart Association class I. Conclusions: We suggest that, in appropriate patients, VSD closure can be safely performed with an additional TVD application through an incision of the septal leaflet of the tricuspid valve without impairing the valve function or reducing the growth potential of the valve at midterm follow-up. abstract_id: PUBMED:33301568 Does tricuspid valve detachment improve outcomes compared with the non-tricuspid valve detachment approach in ventricular septal defect closure? A best evidence topic in cardiac surgery was written according to a structured protocol. The question addressed was whether the tricuspid valve detachment (TVD) approach to ventricular septal defect repair provides superior outcomes compared with the non-TVD approach. Altogether more than 54 papers were found using the reported search, of which 10 represented the best evidence to answer the clinical question. The authors, journal, date and country of publication, patient group studied, study type, relevant outcomes and results of these papers are tabulated. A total of 2059 participants were enrolled in the 10 studies, including 2 prospective studies and 8 retrospective studies. Six studies demonstrated a longer cardiopulmonary bypass time or aortic clamp time in the TVD group, whereas 4 studies showed no difference. Only 1 study reported a lower incidence of trivial tricuspid regurgitation in the TVD group, whereas the other 9 studies showed no significant difference. One study showed that a higher incidence of residual shunting occurred in those patients who had indications for TVD but did not undergo detachment during surgery. No difference in postoperative residual shunting was demonstrated in the other 9 studies. We conclude that surgeons should be reassured that if TVD is required to repair the ventricular septal defect, although it may lead to longer cardiopulmonary bypass time and cross-clamp times, outcomes are equivalent in terms of the degree of tricuspid regurgitation and incidence of the residual ventricular septal defect.
abstract_id: PUBMED:16928516 Indications for tricuspid valve detachment in closure of ventricular septal defect in children. Background: Different techniques have been described for tricuspid valve detachment to improve visualization in ventricular septal defect repair. Our hypothesis was that preoperative echocardiographic criteria are important in deciding which patients should undergo ventricular septal defect repair by tricuspid valve detachment, and patients who undergo this procedure may have a better surgical outcome than those who fulfilled the criteria but were actually operated on with the standard surgical approach. Methods: Between January 2000 and December 2004, we prospectively studied 179 patients scheduled for ventricular septal defect repair, and criteria for tricuspid valve detachment were established. Of these, 84 patients did not have any criteria for tricuspid valve detachment and were classified as the control group (group 1). Ninety-five patients with at least one criterion for tricuspid valve detachment were divided intraoperatively into those who underwent tricuspid valve detachment (group 2, n = 41) and those who did not undergo tricuspid valve detachment (group 3, n = 53). Results: Surgical complications occurred more frequently in group 3 (26%) as opposed to group 2 (10%) and group 1 (7%). Residual ventricular septal defect and atrioventricular block occurred only in group 3. Tricuspid regurgitation occurred in 15% of group 3 versus 9.8% of group 2 and 7.1% of group 1. Conclusions: Preoperative criteria for tricuspid valve detachment can be established before repair of ventricular septal defect. Patients who had indications for tricuspid valve detachment who actually had detachment performed during repair had fewer postoperative surgical complications as opposed to patients who fulfilled the criteria but did not undergo detachment. abstract_id: PUBMED:7488779 Closure of isolated ventricular septal defect with detachment of the tricuspid valve. Detachment of the septal leaflet of the tricuspid valve is an alternative technique for obtaining complete visualization of a perimembranous ventricular septal defect (VSD) in cases where the VSD is obscured by the chordae tendineae or a pouch formation of the septal leaflet. This method presents theoretical concerns because it has the potential for causing postoperative valvular insufficiency. We therefore evaluated valvular function in patients who underwent VSD closure with detachment of the tricuspid valve. In a consecutive series of 153 patients who underwent VSD closure using a transatrial approach, 13 had incision of the tricuspid valve. Follow-up echocardiographic studies were performed on these patients at least 1 year following operation. There were no operative deaths. Color Doppler echocardiography revealed no residual shunt in any of these patients. Ten patients had no evidence of tricuspid stenosis or regurgitation. One patient had trivial tricuspid regurgitation. Moderate tricuspid regurgitation was observed in two patients; of these, one was a small infant who had a VSD complicated by pulmonary hypertension. The other patient had a VSD with a mitral cleft, pulmonary hypertension, and Down's syndrome. The incised tricuspid valve was resuspended by solely running sutures. In conclusion, detachment of the tricuspid valve is a safe and useful method for adequate exposure of a VSD. However, this method should be avoided in patients with Down's syndrome and in small infants.
Furthermore, repair of the incised valve should not be performed using only running sutures. abstract_id: PUBMED:8011348 Temporary tricuspid valve detachment in closure of congenital ventricular septal defect. In a consecutive series of 149 patients with congenital ventricular septal defect (VSD), temporary tricuspid valve detachment was applied in 39 (detached group) to facilitate the transatrial approach for closure of the defect. Baseline characteristics showed that, preoperatively, the detached group were younger (1.3 +/- 2.3 vs. 3.5 +/- 4.1 years, P = 0.002), shorter (0.67 +/- 0.20 vs 0.87 +/- 0.34 m, P = 0.001), lighter (6.9 +/- 5.4 vs 13.5 +/- 12.0 kg, P < 0.002), and had a higher mean right atrial pressure (6 +/- 2 vs 4 +/- 3 mm Hg, P < 0.003), mean end-diastolic right ventricular pressure (10 +/- 3 vs 8 +/- 3 mm Hg, P < 0.01) and mean pulmonary vascular resistance (267 +/- 202 vs 170 +/- 131 dyn s cm-5, P < 0.02) on cardiac catheterization. At surgery, the aortic cross-clamp time was longer (48 +/- 17 vs 39 +/- 15 min, P = 0.003). Seven patients died (2 detached, 5 not-detached) from causes not related to either tricuspid detachment or VSD closure. Follow-up was complete with a mean duration of 2.0 years (range 0.1-5.5). All 142 survivors were investigated by echocardiography, which showed normal tricuspid valve function in all but 29 patients who had trivial regurgitation (6 detached, 23 not-detached). There was no tricuspid stenosis. In 30 patients (8 detached, 22 not-detached) a trivial residual VSD could be detected. One reoperation (not-detached) was performed 12.5 months after the initial surgery for recurrent VSD. (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:23304017 Novel method for evaluating tricuspid valve function after tricuspid valve detachment in the repair of perimembranous ventricular septal defects. Tricuspid valve detachment has been used for decades in the repair of type II ventricular septal defects (VSDs); however, the procedure can damage the tricuspid valve and conduction system. We retrospectively reviewed 177 consecutive type II VSD repairs performed at our hospital from 1997 through 2004. Patients were included if they had symptoms, pulmonary hypertension, or a Qp/Qs ratio >1.5: 86 underwent tricuspid valve detachment (TVD group) and 84 underwent VSD repair without this detachment (non-TVD group). There was no significant difference between groups in age, body weight, VSD size, Qp/Qs ratio, follow-up duration, or incidence of residual shunting. Cross-clamp times (109.6±42.6 vs 92.2±38.1 min) and cardiopulmonary bypass times (155.1±53.8 vs 137±47 min) were longer in the TVD group. No patients developed tricuspid stenosis or heart block. After excluding patients who underwent tricuspid repair, we found similar grades of postoperative tricuspid regurgitation in both groups. In applying our novel criterion (last postoperative regurgitation grade minus preoperative regurgitation grade) to evaluate changes between preoperative and postoperative tricuspid regurgitation, we found significant deterioration in the non-TVD group (P=0.018). Had conventional evaluation methods been used, severity in the groups would not have differed significantly. Our method enables additional evaluation of late tricuspid function in individual patients. Tricuspid valve detachment is safe for type II VSD repair and has no adverse effect on late tricuspid valve function. In addition, we recommend the interrupted-suture technique for leaflet reattachment.
abstract_id: PUBMED:36862697 Ventricular Septal Defect Exposure by Tricuspid Valve Chordal Detachment-A Retrospective Matched Study. Background: The transatrial approach is the standard method for repairing ventricular septal defects (VSDs) in the pediatric population. However, the tricuspid valve (TV) apparatus might obscure the inferior border of the VSD, risking the adequacy of repair by leaving a residual VSD or heart block. Detachment of the TV chordae has been described as an alternative technique to TV leaflet detachment. The aim of this study is to investigate the safety of such a technique. Methods: Retrospective review of patients who underwent VSD repair between 2015 and 2018. Group A (n = 25) had VSD repair with TV chordae detachment and were matched for age and weight to group B (n = 25) without tricuspid chordal or leaflet detachment. Electrocardiogram (ECG) and echocardiogram at discharge and at 3 years of follow-up were reviewed to identify new ECG changes, residual VSD, and TV regurgitation. Results: Median ages in groups A and B were 6.13 (IQR 4.33-7.91) and 6.33 (4.77-7.2) months. New onset right bundle branch block (RBBB) was diagnosed at discharge in 28% (n = 7) of group A versus 56% (n = 14) in group B (P = .044), while the incidence dropped to 16% (n = 4) in group A versus 40% (n = 10) in group B (P = .059) on the 3-year follow-up ECG. Echocardiogram at discharge showed moderate tricuspid regurgitation in 16% (n = 4) in group A and 12% (n = 3) in group B (P = .867). Three years of follow-up echocardiography revealed no moderate or severe tricuspid regurgitation and no significant residual VSD in either group. Conclusion: No significant difference in operative time was observed between the two techniques. The TV chordal detachment technique reduces the incidence of postoperative RBBB without increasing the incidence of TV regurgitation at discharge. abstract_id: PUBMED:14529988 Liberal use of tricuspid valve detachment for transatrial ventricular septal defect closure. Background: Although temporary tricuspid valve detachment is useful for improved visualization of ventricular septal defect through right atriotomy, liberal use of this adjunct is not widely supported, mainly because of concerns about iatrogenic complications such as heart blocks and tricuspid valve dysfunction. The objective of this study was to determine whether liberal use of this adjunct can improve operative outcome. Methods: Between January 1997 and March 2002, trans-atrial closure of isolated ventricular septal defect (conoventricular or canal type) was performed in 87 consecutive patients. Tricuspid valve detachment was used in 4 out of 44 patients (prudent-use group) and 19 out of 43 patients (liberal-use group) in the first and second half of this period, respectively (p = 0.0002). Patient demographics and use of other surgical and cardiopulmonary bypass techniques remained virtually unchanged during this period. Results: In the prudent-use group, there was one operative death with prolonged bypass time and one residual defect that required reoperation; neither of these patients underwent tricuspid valve detachment. All other patients (both groups) were free from mortality and clinically significant complications, including heart block, tricuspid regurgitation, and residual defect. The liberal-use group had shorter cardiopulmonary bypass time than the prudent-use group (59 +/- 14 vs 67 +/- 22 minutes, p = 0.037).
Conclusions: Tricuspid valve detachment should be used liberally for moderate- or even low-difficulty exposure of ventricular septal defect, regardless of patient background, because it is a safe and effective adjunct that can improve speed, programmability, reproducibility, and reliability. Answer: Temporary tricuspid valve detachment (TVD) is a surgical technique used to improve visualization during the repair of certain types of ventricular septal defects (VSDs). The evidence from multiple studies suggests that TVD is a safe and effective method that does not adversely affect tricuspid valve function or lead to significant complications such as heart blocks or tricuspid regurgitation in the long term. Several studies have reported that patients who underwent TVD had no significant progression of tricuspid regurgitation or tricuspid stenosis postoperatively, and the majority remained in New York Heart Association class I at follow-up (PUBMED:27777518, PUBMED:12638669, PUBMED:32675928, PUBMED:16928516, PUBMED:7488779, PUBMED:8011348, PUBMED:23304017, PUBMED:36862697, PUBMED:14529988). Additionally, the technique has been associated with improved surgical outcomes, such as fewer postoperative complications, including residual VSD and atrioventricular block, especially in patients who met preoperative criteria for TVD (PUBMED:16928516). While some studies have indicated that TVD may lead to longer cardiopulmonary bypass time and cross-clamp times (PUBMED:33301568), the overall outcomes in terms of the degree of tricuspid regurgitation and incidence of residual VSD were equivalent to those in patients who did not undergo TVD. Moreover, one study suggested that liberal use of TVD could even shorten cardiopulmonary bypass time (PUBMED:14529988). In conclusion, the evidence supports the use of temporary tricuspid valve detachment during VSD closure as it provides superior exposure of the defect without causing significant long-term tricuspid valve dysfunction. It appears to be a worthwhile technique, particularly in cases where the VSD is obscured and difficult to visualize, and when preoperative criteria suggest that TVD could lead to better surgical outcomes.
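Illustrative note (not part of the record above): many of the group comparisons quoted in this record are simple two-proportion tests; for instance, the chordal-detachment study reports new-onset RBBB at discharge in 28% (7/25) of group A versus 56% (14/25) of group B with P = .044. The sketch below runs a plain two-proportion z-test on those counts. The original authors do not say which test they used, so the normal-approximation approach here is an assumption, and an exact test would give a slightly different value.

```python
from math import erf, sqrt


def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple:
    """Two-sided z-test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p_value


# Reported counts: new-onset RBBB at discharge in 7/25 (group A) vs 14/25 (group B)
z, p = two_proportion_z_test(7, 25, 14, 25)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Running this gives z of about -2.0 and p of about 0.045, in line with the P = .044 reported in the abstract.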
Instruction: Expression of cancer testis antigens in human BRCA-associated breast cancers: potential targets for immunoprevention? Abstracts: abstract_id: PUBMED:21465317 Expression of cancer testis antigens in human BRCA-associated breast cancers: potential targets for immunoprevention? Introduction: Novel breast cancer risk-reducing strategies for individuals with germline mutations of the BRCA1 and/or BRCA2 genes are urgently needed. Identification of antigenic targets that are expressed in early cancers, but absent in normal breast epithelium of these high-risk individuals, could provide the basis for the development of effective immunoprophylactic strategies. Cancer testis (CT) antigens are potential candidates because their expression is restricted to tumors, and accumulating data suggest that they play important roles in cellular proliferation, stem cell function, and carcinogenesis. The objective of this study was to examine the expression of CT antigens and their frequency in BRCA-associated breast cancers. Methods: Archived breast cancer tissues (n = 26) as well as morphologically normal breast tissues (n = 7) from women carrying deleterious BRCA 1 and/or 2 mutations were obtained for antigen expression analysis by immunohistochemistry. Expression of the following CT antigens was examined: MAGE-A1, MAGE-A3, MAGE-A4, MAGE-C1.CT7, NY-ESO-1, MAGE-C2/CT10, and GAGE. Results: CT antigens were expressed in 16/26 (61.5%, 95% CI 43-80%) of BRCA-associated cancers, including in situ tumors. Thirteen of twenty-six (50%) breast cancers expressed two or more CT antigens; three cancers expressed all seven CT antigens. MAGE-A was expressed in 13/26 (50%) of cancers, NY-ESO-1 was expressed in 10/26 (38%) of tumors. In contrast, none of the CT antigens were expressed in adjacent or contralateral normal breast epithelium (P = 0.003). Conclusions: We report a high CT antigen expression rate in BRCA-associated breast cancer as well as the lack of expression of these antigens in benign breast tissue of carriers, identifying CT antigens as potential vaccine targets for breast cancer prevention in these high-risk individuals. abstract_id: PUBMED:29926750 Cancer testis antigens as immunogenic and oncogenic targets in breast cancer. Breast cancer cells frequently express tumor-associated antigens that can elicit immune responses to eradicate cancer. Cancer-testis antigens (CTAs) are a group of tumor-associated antigens that might serve as ideal targets for cancer immunotherapy because of their cancer-restricted expression and robust immunogenicity. Previous clinical studies reported that CTAs are associated with negative hormonal status, aggressive tumor behavior and poor survival. Furthermore, experimental studies have shown the ability of CTAs to induce both cellular and humoral immune responses. They also demonstrated the implication of CTAs in promoting cancer cell growth, inhibiting apoptosis and inducing cancer cell invasion and migration. In the current review, we attempt to address the immunogenic and oncogenic potential of CTAs and their current utilization in therapeutic interventions for breast cancer. abstract_id: PUBMED:15892621 New target antigens for cancer immunoprevention. Prevention of cancer through the activation of the immune system has been explored in recent years in preclinical systems thanks to the availability of several new transgenic mouse models that closely mimic the natural history of human tumors. 
The most thoroughly investigated model of cancer immunoprevention is the mammary carcinoma of HER-2/neu transgenic mouse. In this system it has clearly been shown that the activation of immune defences in healthy individuals can effectively prevent the subsequent onset of highly aggressive mammary carcinomas. A complete prevention was obtained using a combination of three signals (the so called "triplex" vaccine) that included the specific antigen (p185, the product of HER-2/neu) and nonspecific signals like allogeneic histocompatibility antigens and interleukin 12. The analysis of protective immune responses in models of cancer immunoprevention revealed some unexpected features, in particular the central role of antibodies in immunoprevention, at variance with conventional immunotherapy, which is firmly based on cytotoxic T cells. In the HER-2/neu system anti-p185 antibodies, in addition to immunological functions leading to tumor cell lysis, inhibit p185 dimerization and induce its internalization, resulting in the inhibition of mitogenic signaling. Most current tumor antigens appear to be unsuitable targets for cancer immunoprevention. An ideal antigen should have a crucial pathogenetic role in tumor growth to avoid the selection of antigen loss variants. Downregulation of major histocompatibility complex (MHC) expression during tumor progression frequently limits antigen recognition by MHC-restricted T cells. Thus an ideal antigen for cancer immunoprevention should be recognized both by T cells and by antibodies. Antibody binding to cell surface oncogenic determinants, in addition to complement- and cell-mediated tumor cell lysis, can block mitogenic signaling and induce internalization, resulting in tumor growth arrest. A search for new tumor antigens should be conducted among molecules that are directly involved in neoplastic transformation and are recognizable by the immune response also in MHC loss variants. Novel tumor antigens fulfilling both conditions will be crucial for the development of cancer immunoprevention and will provide new targets also for cancer immunotherapy. abstract_id: PUBMED:37610673 Expression of four cancer-testis antigens in TNBC indicating potential universal immunotherapeutic targets. Objective: Immunotherapy is an attractive treatment for breast cancer. Cancer-testis antigens (CTAs) are potential targets for immunotherapy for their restricted expression. Here, we investigate the expression of CTAs in breast cancer and their value for prognosis, so as to hunt for a potential panel of CTAs as universal immunotherapeutic targets. Material And Methods: A total of 137 breast cancer tissue specimens, including 51 triple-negative breast cancers (TNBC), were assessed for MAGE-A4, MAGEA1, NY-ESO-1, KK-LC-1 and PRAME expression by immunohistochemistry. The expression of PD-L1 and TILs was also calculated and correlated with the five CTAs. Clinical data were collected to evaluate the CTAs' value for prognosis. Data from the K-M plotter were used as a validation cohort. Results: The expression of MAGE-A4, NY-ESO-1 and KK-LC-1 in TNBC was significantly higher than in non-TNBC (P = 0.012, P = 0.005, P < 0.001, respectively). Of the TNBC specimens, 76.47% expressed at least one of the five CTAs. Patients with positive expression of either MAGE-A4 or PRAME had a significantly extended disease-free survival (DFS). Data from the Kaplan-Meier plotter confirm our findings. Conclusions: MAGE-A4, NY-ESO-1, PRAME and KK-LC-1 are overexpressed in breast cancer, especially in TNBC.
Positive expression of MAGE-A4 or PRAME may be associated with prolonged DFS. A panel of CTAs represents attractive universal targets for immunotherapy. abstract_id: PUBMED:24491090 Cancer-testis genes as candidates for immunotherapy in breast cancer. Cancer-testis (CT) antigens are tumor-associated antigens attracting immunologists for their possible application in the immunotherapy of cancer. Several clinical trials have assessed their therapeutic potentials in cancer patients. Breast cancers, especially triple-negative cancers, are among those with significant expression of CT genes. Identification of CT genes with high expression in cancer patients is the prerequisite for any immunotherapeutic approach. CT genes have gained attention not only for immunotherapy of cancer patients, but also for immunoprevention in high-risk individuals. Many CT genes have proved to be immunogenic in breast cancer patients, suggesting the basis for the development of polyvalent vaccines. abstract_id: PUBMED:21150711 Sperm-associated antigens as targets for cancer immunotherapy: expression pattern and humoral immune response in cancer patients. The identification of novel cancer-related and immunogenic proteins is still a challenge to be faced to improve antigen-specific tumor immunotherapy. The category of so-called cancer-testis (CT) antigens is one of the most promising groups of proteins for anticancer immune response activation, as normally they are expressed in immunoprivileged tissues and are immunogenic if aberrantly generated in tumors. The heterogeneous group of proteins called sperm-associated antigens (SPAG) might encompass novel CT antigens owing to their common expression in male germ cells, their ability to elicit an immune response underlying infertility, and lately proposed oncogenic properties. We carried out a comprehensive analysis of the expression pattern in various normal and cancerous tissues and assessed the frequency of spontaneous humoral immune response against members of the SPAG group in cancer patients using phage-displayed antigen microarrays. Our results show that out of 15 analyzed SPAG genes, only SPAG1, SPAG6, SPAG8, SPAG15, and SPAG17 are predominantly expressed in testis, whereas the others are ubiquitously expressed with only a testis-associated alternative splice variant of SPAG16. mRNA expression of SPAG1, SPAG6, and alternative splice variants of SPAG8, SPAG16, and SPAG17 was elevated in various tumors with frequencies ranging from approximately 10% to 70%. The upregulation of SPAG6 in lung and breast cancer was confirmed by immunohistochemical analysis of tumor and normal tissue microarrays. Cancer-associated spontaneous humoral immune response was detected against SPAG1, SPAG6, SPAG8, and a novel testis-specific splice variant of SPAG17, identifying these as novel CT antigens that are potentially applicable as immunotherapeutic targets and serologic biomarkers. abstract_id: PUBMED:15181820 Serological screening of xenogeneic antigens from rat testis Objective: It is the intent of this study to analyze the rat testis library by employing a modified SEREX (serological analysis of recombinant cDNA expression library) approach to find xenogeneic homologous tumor antigens that could be useful in developing cancer vaccines. Methods: The screening serum was obtained from rabbits immunized with human ovarian cancer cells, and 10 positive clones were isolated from the rat testis cDNA library using SEREX technology.
Results: We found that these 10 clones encoded seven different proteins by means of bioinformatic analysis. Among them, OV-2 and OV-4 encode proteins related to carcinoma in humans, yet OV-6 and OV-7 are novel and their encoded proteins remain unknown. Conclusion: This study indicates that the utilization of xenogeneic immunized serum in the serological screening of xenogeneic homologous tumor antigens may expand the application of traditional SEREX technology. abstract_id: PUBMED:29093010 Targeting "Retired Antigens" for Cancer Immunoprevention. Identification of immune targets for cancer immunoprevention, or immunotherapy, has historically focused on tumor-associated (self) antigens or neoantigens expressed on malignant cells. For self-antigens, overcoming tolerance can be a difficult challenge. Neoantigens do not suffer from this limitation, but the lack of recurrent mutations yielding common neoantigens that can be exploited in vaccines is a problem for many tumor types. Targeting "retired antigens," a specialized type of self-antigen, may have considerable advantages. Antigens no longer expressed in mature or aged individuals should pose reduced risk of autoimmune sequelae. Indeed, self-tolerance of these antigens may have naturally faded. Thus, when the retired antigens are highly expressed in cancer cells, it may be easier to overcome the remaining tolerance. Women who are BRCA1/2 carriers may be among the first to benefit as candidate retired antigens have been identified as highly expressed in ovarian and breast cancer cells. Although there is good preclinical data supporting this immune targeting concept, additional research is needed to understand the underlying immune phenomena and optimize the vaccine strategy. Cancer Prev Res; 10(11); 607-8. ©2017 AACR. See related article by Mazumder et al., p. 612. abstract_id: PUBMED:28274891 Roles of cancer/testis antigens (CTAs) in breast cancer. Breast cancer is the most common cancer diagnosed and is the second leading cause of cancer death among women in the US. For breast cancer, early diagnosis and efficient therapy remain a significant clinical challenge. Therefore, it is necessary to identify novel tumor-associated molecules to target for biomarker development and immunotherapy. In this regard, cancer testis antigens (CTAs) have emerged as potential clinical biomarkers and immunotherapeutic targets for various malignancies due to the nature of their characteristics. CTAs are a group of tumor associated antigens (TAAs) that display normal expression in immune-privileged organs, but display aberrant expression in several types of cancers, particularly in advanced cancers. Investigation of CTAs for the clinical management of breast malignancies indicates that these TAAs have potential roles as novel biomarkers, with increased specificity and sensitivity compared to those currently used in the clinic. Moreover, TAAs could be therapeutic targets for cancer immunotherapy. This review is an attempt to address the promising CTAs in breast cancer and their possible clinical implications as biomarkers and immunotherapeutic targets, with particular focus on challenges and future interventions. abstract_id: PUBMED:25172043 Retired self-proteins as vaccine targets for primary immunoprevention of adult-onset cancers. We propose that optimized control of adult-onset cancers requires the incorporation of a defense-based strategy in the form of preemptive immunity induced in healthy cancer-free subjects prior to the appearance of tumors.
However, development of such prophylactic immunity has traditionally targeted etiopathogenic agents. We propose that in the absence of available cancer-inducing pathogens, safe and effective protection against the emergence of tumors may be achieved by inducing targeted immunity against tissue-specific self-proteins that are 'retired' from expression at immunogenic levels in normal tissues due to the normal aging process, but are expressed in emerging tumors. Thus, 'retired' self-proteins may substitute for unavailable pathogens as targets for developing prophylactic immunity against tumors we confront with age like breast, ovarian and prostate cancer. Our current efforts involve testing this primary 'immunoprevention' strategy in clinical trials focused on prevention of the more aggressive and lethal forms of breast cancer. Answer: The expression of cancer testis (CT) antigens in human BRCA-associated breast cancers has been identified as a potential target for immunoprevention. A study examining the expression of CT antigens in archived breast cancer tissues from women carrying deleterious BRCA1 and/or BRCA2 mutations found that CT antigens were expressed in 61.5% of BRCA-associated cancers, including in situ tumors. Notably, 50% of the breast cancers expressed two or more CT antigens, and three cancers expressed all seven CT antigens examined. The CT antigens studied included MAGE-A1, MAGE-A3, MAGE-A4, MAGE-C1.CT7, NY-ESO-1, MAGE-C2/CT10, and GAGE. Importantly, none of the CT antigens were expressed in adjacent or contralateral normal breast epithelium, highlighting their potential as vaccine targets for breast cancer prevention in high-risk individuals (PUBMED:21465317). CT antigens are considered ideal targets for cancer immunotherapy due to their restricted expression to cancer cells and their ability to elicit robust immune responses. They are associated with negative hormonal status, aggressive tumor behavior, and poor survival. CT antigens have been shown to promote cancer cell growth, inhibit apoptosis, and induce cancer cell invasion and migration (PUBMED:29926750). The immunogenic and oncogenic potential of CT antigens, along with their current utilization in therapeutic interventions for breast cancer, underscores their relevance in the context of immunoprevention (PUBMED:29926750). The concept of cancer immunoprevention has been explored in preclinical systems, with the HER-2/neu transgenic mouse model demonstrating that immune defenses can prevent the onset of aggressive mammary carcinomas. An ideal antigen for cancer immunoprevention should be recognized by both T cells and antibodies, and should have a crucial pathogenetic role in tumor growth to avoid the selection of antigen loss variants (PUBMED:15892621). In summary, the expression of CT antigens in BRCA-associated breast cancers presents a promising avenue for the development of immunopreventive strategies, potentially leading to the creation of vaccines that could reduce the risk of cancer development in individuals with genetic predispositions (PUBMED:21465317).
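Illustrative note (not part of the record above): the expression frequencies in this record are binomial proportions with confidence intervals, e.g., CT antigen expression in 16 of 26 BRCA-associated cancers reported as "61.5%, 95% CI 43-80%". As a hedged illustration of where such an interval comes from, the sketch below applies the standard Wald (normal-approximation) formula to those counts; the original paper does not state which interval method it used, so this is only one plausible reconstruction.

```python
from math import sqrt


def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """Point estimate and Wald 95% confidence interval for a binomial proportion."""
    p = successes / n
    half_width = z * sqrt(p * (1.0 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)


# CT antigen expression in 16 of 26 BRCA-associated breast cancers
p, lower, upper = wald_ci(16, 26)
print(f"{p:.1%} (95% CI {lower:.0%}-{upper:.0%})")
```

With 16/26 this returns approximately 61.5% (43%-80%), matching the interval quoted in the abstract.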
Instruction: Is pre-shock wave lithotripsy stenting necessary for ureteral stones with moderate or severe hydronephrosis? Abstracts: abstract_id: PUBMED:17070256 Is pre-shock wave lithotripsy stenting necessary for ureteral stones with moderate or severe hydronephrosis? Purpose: We performed a prospective, randomized clinical trial to evaluate the outcome of ureteral stents for solitary ureteral stones 2 cm or less in moderately or severely obstructed systems using shock wave lithotripsy. Materials And Methods: Between 2001 and 2004, 186 patients who met study criteria were randomized into 2 groups. Group 1 received a pre-shock wave lithotripsy 6Fr Double-J stent and group 2 had no stent. Patients were treated with a Dornier MFL 5000 lithotripter. Results were compared in terms of clearance rates, number of shock waves and sessions, irritative voiding symptoms, incidence of complications and secondary interventions. Failure was defined as the need for additional procedure(s) for stone extraction. Results: Overall 164 patients (88.2%) became stone-free after shock wave lithotripsy. Complete stone fragmentation was achieved after 1, 2, 3, and more than 3 sessions in 108 (58.1%), 30 (16.1%), 13 (7%) and 14 (7.5%) patients, respectively. Ureteral stent insertion did not affect the stone-free rate, which was 84.9% and 91.4% in groups 1 and 2, respectively (p = 0.25). There was no statistical difference in the re-treatment rate, flank pain or temperature between the 2 groups. However, all patients in the stented group complained significantly of side effects attributable to the stent, including dysuria, suprapubic pain, hematuria, pyuria and positive urinary culture. Conclusions: Pretreatment stenting provides no advantage over in situ shock wave lithotripsy for significantly obstructing ureteral calculi. Shock wave lithotripsy is reasonable initial therapy for ureteral stones 2 cm or less that cause moderate or severe hydronephrosis. abstract_id: PUBMED:18270943 Transureteral lithotripsy versus extracorporeal shock wave lithotripsy in management of upper ureteral calculi: a comparative study. Introduction: Our aim was to compare transureteral lithotripsy (TUL) and extracorporeal shock wave lithotripsy (SWL) in the management of upper ureteral calculi larger than 5 mm in diameter. Materials And Methods: Patients who had upper ureteral calculi greater than 5 mm in diameter were enrolled in this clinical trial. The calculi had not responded to conservative or symptomatic therapy. Semirigid ureteroscopy and pneumatic lithotripsy were used for TUL in 52 patients and SWL was performed in 48. Analysis of the calculi compositions was done and the patients were followed up by plain abdominal radiography and ultrasonography 3 months postoperatively. Results: The stone-free rates were 76.9% in the patients of the TUL group and 68.8% in the patients of the SWL group. These rates in the patients with mild or no hydronephrosis were 85.7% and 59.1% for the SWL and TUL groups, respectively. In the TUL group, half of the patients with no hydronephrosis developed upward calculus migration. The stone-free rates were 75.0% and 89.3% for the patients with moderate hydronephrosis and 70.0% and 100.0% for those with severe hydronephrosis in the SWL and TUL groups, respectively. All of the failed cases were treated by double-J stenting and TUL or SWL successfully. There were no serious complications. Upward calculus migration after TUL was more frequent in cases with no hydronephrosis or mild hydronephrosis (41.0%).
Conclusion: Upper ureteral calculi smaller than 1 cm can be safely and effectively managed using semirigid ureteroscopy and pneumatic lithotripsy. However, the SWL approach has still its role if an experienced endourologist is not available. abstract_id: PUBMED:11547053 Prospective randomized trial comparing shock wave lithotripsy and ureteroscopy for management of distal ureteral calculi. Purpose: We compared the efficacy of shock wave lithotripsy and ureteroscopy for treatment of distal ureteral calculi. Materials And Methods: A total of 64 patients with solitary, radiopaque distal ureteral calculi 15 mm. or less in largest diameter were randomized to treatment with shock wave lithotripsy (32) using an HM3 lithotriptor (Dornier MedTech, Kennesaw, Georgia) or ureteroscopy (32). Patient and stone characteristics, treatment parameters, clinical outcomes, patient satisfaction and cost were assessed for each group. Results: The 2 groups were comparable in regard to patient age, sex, body mass index, stone size, degree of hydronephrosis and time to treatment. Procedural and operating room times were statistically significantly shorter for the shock wave lithotripsy compared to the ureteroscopy group (34 and 72 versus 65 and 97 minutes, respectively). In addition, 94% of patients who underwent shock wave lithotripsy versus 75% who underwent ureteroscopy were discharged home the day of procedure. At a mean followup of 21 and 24 days for shock wave lithotripsy and ureteroscopy, respectively, 91% of patients in each group had undergone imaging with a plain abdominal radiograph, and all studies showed resolution of the target stone. Minor complications occurred in 9% and 25% of the shock wave lithotripsy and ureteroscopy groups, respectively (p value was not significant). No ureteral perforation or stricture occurred in the ureteroscopy group. Postoperative flank pain and dysuria were more severe in the ureteroscopy than shock wave lithotripsy group, although the differences were not statistically significant. Patient satisfaction was high, including 94% for shock wave lithotripsy and 87% for ureteroscopy (p value not significant). Cost favored ureteroscopy by $1,255 if outpatient treatment for both modalities was assumed. Conclusions: Ureteroscopy and shock wave lithotripsy were associated with high success and low complication rates. However, shock wave lithotripsy required significantly less operating time, was more often performed on an outpatient basis, and showed a trend towards less flank pain and dysuria, fewer complications and quicker convalescence. Patient satisfaction was uniformly high in both groups. Although ureteroscopy and shock wave lithotripsy are highly effective for treatment of distal ureteral stones, we believe that HM3 shock wave lithotripsy, albeit slightly more costly, is preferable to manipulation with ureteroscopy since it is equally efficacious, more efficient and less morbid. abstract_id: PUBMED:31606779 Does the presence or degree of hydronephrosis affect the stone disintegration efficacy of extracorporeal shock wave lithotripsy? A systematic review and meta-analysis. The aim of this study was to determine whether the presence or degree of hydronephrosis (HN) affects the stone disintegration efficacy of shock wave lithotripsy (SWL). A comprehensive literature search using PubMed, Embase, Cochrane Library, and Web of Science was conducted to retrieve relevant studies. 
Risk ratios (RRs) and mean differences (MDs) with corresponding 95% confidence intervals (CIs) were calculated for comparisons of outcomes of interest. In total, seven comparative studies with 2033 patients were included. Overall results indicated no significant difference in stone-free rate (SFR) and retreatment rate between the two groups. Subgroup analysis further revealed: (1) compared with moderate or severe HN, non-HN SWL brought a significantly lower retreatment rate (RR 0.67, 95% CI 0.52-0.87, P = 0.002 and RR 0.55, 95% CI 0.40-0.76, P = 0.0003, respectively) and shorter clearance time (MD -3.80, 95% CI -5.81 to -1.79, P = 0.0002 and MD -5.93, 95% CI -10.29 to -1.57, P = 0.008, respectively); (2) SWLs performed without stone-induced HN or with artificial HN were associated with significantly higher SFR (RR 1.11, 95% CI 1.04-1.18, P = 0.001 and RR 0.93, 95% CI 0.87-0.99, P = 0.02, respectively); (3) non-HN SWL brought a significantly higher SFR than the HN group when treating proximal ureteral stones (RR 1.14, 95% CI 1.04-1.24, P = 0.005). Generally, SWLs performed with HN were shown to offer similar stone disintegration efficacy to those without HN. However, it seemed preferable to perform SWL: (1) without severe to moderate HN or stone-induced HN; (2) with artificial HN; (3) without HN when treating proximal ureteral stones. abstract_id: PUBMED:19181581 Impact of hydronephrosis on treatment outcome of solitary proximal ureteral stone after extracorporeal shock wave lithotripsy. The purpose of this study was to investigate the impact of hydronephrosis on the treatment outcome of patients with a solitary proximal ureteral stone after extracorporeal shock wave lithotripsy (ESWL). A total of 182 consecutive patients who underwent ESWL for a solitary proximal ureteral stone of between 5 and 20 mm in size in our institution were included in this study. The degree of hydronephrosis was defined by renal ultrasonography. Patient data, stone size, shock wave numbers and shock wave energy were also recorded. Treatment outcome was evaluated 3 months after the first session of ESWL. In multivariate analysis, only the maximal stone length (odds ratio [OR], 0.15; 95% confidence interval [CI], 0.03-0.91; p = 0.04) and the degree of hydronephrosis (OR, 0.40; 95% CI, 0.16-0.98; p = 0.045) were significant predicting factors for stone-free status 3 months after ESWL. For stones ≤ 10 mm, the stone-free rate decreased from 80% in patients with mild hydronephrosis to 56.4% in those with moderate to severe hydronephrosis. For stones > 10 mm, the stone-free rate decreased further, from 65.2% in patients with mild hydronephrosis to 33.3% in those with moderate to severe hydronephrosis. In summary, in patients with a solitary proximal ureteral stone > 10 mm, the treatment outcome after ESWL was not good if moderate to severe hydronephrosis was noted on ultrasonography. Alternative treatments, such as ureteroscopic lithotripsy, may be appropriate as initial treatment or after failure of one session of ESWL. abstract_id: PUBMED:18592459 Solo extracorporeal shock wave lithotripsy for management of upper ureteral calculi with hydronephrosis. Introduction: The aim of this study was to evaluate extracorporeal shock wave lithotripsy (SWL) outcomes as a solo therapy in patients with upper ureteral calculi and varying degrees of hydronephrosis. Materials And Methods: Eighty patients with upper ureteral calculi and a body mass index between 19.5 kg/m2 and 22.5 kg/m2 were included.
They were categorized into 4 groups according to the severity of hydronephrosis as seen on ultrasonography and intravenous urography: group 1, no dilatation; group 2, mild dilatation; group 3, moderate dilatation; and group 4, severe dilatation of the pyelocaliceal system. The size of calculi, time to calculus clearance, success rate of solo SWL, and the need for additional therapeutic methods were recorded and compared between the four groups of patients. Results: The median size of the calculi was 13.5 mm, and the mean time to calculus clearance was 56.0 +/- 24.2 days. In 71.3% of the patients, solo SWL was successful in the treatment of the calculi. Twenty-three patients required other therapies including double-J stenting, ureteroscopy, and nephrolithotomy. The patients without hydronephrosis and those with severe hydronephrosis (groups 1 and 4) showed a significant difference in the days to clearance of the calculus (mean, 31.7 days versus 85.6 days; P &lt; .001). Conclusion: Patients with upper ureteral calculi and mild hydronephrosis can be effectively treated with solo SWL therapy. In those with moderate hydronephrosis, clearance takes longer or requires secondary interventions. In patients with severe hydronephrosis, we recommend alternative/adjunctive procedures. abstract_id: PUBMED:17382137 Does degree of hydronephrosis affect success of extracorporeal shock wave lithotripsy for distal ureteral stones? Objectives: To investigate the relation between the degree of stone-induced hydronephrosis and the outcome of shock wave lithotripsy in patients with distal ureter stones. Methods: A total of 215 patients with a solitary distal ureter stone with or without hydronephrosis were treated with shock wave lithotripsy. The degree of hydronephrosis was determined by renal ultrasonography. The patients were divided into four groups according to the degree of stone-induced hydronephrosis. Group 0 (44.2%) had no urinary system dilation, group 1 (32.5%) had mild dilation, group 2 (16.3%) had moderate dilation, and group 3 (7%) had severe dilation. The patients were treated with the Dornier MFL 5000 lithotripter. The results were compared in terms of the stone-free rates, number of shock waves, number of sessions, incidence of complications, number of secondary interventions, and time to stone clearance. Results: The mean stone size was 11.2 +/- 2.5 mm. In the hydronephrotic group, the stone-free rate was 74% compared with 83% in patients without hydronephrosis (P = 0.27). The mean time to stone clearance was 16.3 +/- 9.2 days. The differences among the four groups in terms of stone size and treatment outcome were not significant. However, the presence of hydronephrosis was significantly associated with repeat treatment (2.2 versus 1.6, P &lt;0.001) and prolonged clearance time (18.7 versus 15.4 days, P &lt;0.001). Conclusions: The results of our study have shown that in patients with solitary distal ureter stones, the degree of hydronephrosis caused by the stone does not affect the overall treatment success with shock wave lithotripsy. However, stones in obstructed systems tended to require repeat treatment and prolonged time for stone clearance. abstract_id: PUBMED:7551876 In situ piezoelectric extracorporeal shock wave lithotripsy of ureteric stones. Objective: To evaluate the efficacy of the EDAP LT 02 lithotripter for the in situ treatment of ureteric calculi. 
Patients And Methods: One hundred consecutive patients presenting with ureteric calculi were treated with in situ piezoelectric extracorporeal shock wave lithotripsy (ESWL) using the EDAP LT 02 lithotripter. There were 49 patients with upper, nine with mid and 42 with lower ureteric stones. The largest diameter of the stones varied from 7 to 21 mm (mean 9.6 mm). Mild or severe hydronephrosis was present in 53 cases. Mid and lower ureteric stones were treated with the patients in the prone position, with no anaesthesia or pre-medication, and upper ureteric stones in the supine position, with intravenous sedation in 44 cases. Results: Localization of the stones was easy in 81 cases and more difficult in 19, but an intravenous pyelogram was only necessary in three cases. The number of sessions per patient varied from 1 to 3 (mean 1.17). Complete success was achieved in 75% of patients and partial success (residual stones ≤ 3 mm) in 6%. The stone-free rate was statistically affected by stone size but was independent of stone localization or the degree of obstruction. The rate of infective and obstructive complications was 14% and auxiliary treatments were necessary in 5% of patients. Conclusion: In situ piezoelectric ESWL with the EDAP LT 02 device is a convenient and efficient method for the treatment of ureteric stones. abstract_id: PUBMED:34687343 Variables measured on three-dimensional computed tomography are preferred for predicting the outcomes of shock wave lithotripsy. Purpose: Shock wave lithotripsy (SWL) is used to treat upper urinary tract stones. Recently, some volume analyzers have enabled preoperative assessment using three-dimensional computed tomography (3D-CT). We evaluated the efficacy of 3D-CT variables for predicting the outcomes of SWL. Methods: The study population included 193 patients who underwent SWL between November 2014 and August 2020. In addition to conventional two-dimensional computed tomography (2D-CT) assessments, 3D-CT assessments of targeted stones were retrospectively performed, and stone size and stone density (SD) were measured. The successful and unsuccessful treatment groups were compared and risk factors for an unsuccessful first SWL session were investigated. The predictive accuracy of variables measured on 3D-CT was evaluated by receiver operating characteristic curves and multivariate analyses. Results: The success rate of the first SWL session was 73.1%. Stone volume, mean SD and highest SD on 3D-CT were significantly higher in the unsuccessful group than in the successful group. Stone volume showed a higher area under the curve (AUC) than the estimated volumetric stone burden and stone diameter, which were measured on 2D-CT (0.729, 0.683, and 0.672, respectively). The AUCs of the mean SD and highest SD on 3D-CT were higher than those on 2D-CT (0.699, 0.680, 0.617, and 0.627, respectively). Multivariate analyses identified stone volume (≥ 0.29 ml), mean SD on 3D-CT (≥ 421 HU), and absence of hydronephrosis as independent predictive factors for unsuccessful SWL. Conclusion: 3D-CT variables were promising predictors of the outcomes of SWL. Preoperative 3D-CT assessment is helpful for selecting favorable patients for SWL. abstract_id: PUBMED:10934604 An investigation of factors associated with failure of extracorporeal shock wave lithotripsy for ureteral calculi. In this study, we attempted to determine which patients with ureteral calculi are likely to have an unsuccessful outcome for extracorporeal shock wave lithotripsy (ESWL).
The records of 388 patients with ureteral calculi treated with ESWL at Ijinkai Takeda General Hospital between January 1997 and December 1998 were retrospectively reviewed. Three hundred seventeen patients had been treated with ESWL alone (single session 156, multiple sessions 161) and 71 with transurethral ureterolithotripsy (TUL) due to failure of ESWL. We compared these two groups by multivariate logistic regression analysis, which revealed that (1) middle and lower ureteral calculi, (2) stones > 8 mm, (3) moderate to severe hydronephrosis and (4) failure of the first ESWL session were factors associated with poor outcome of ESWL. Although the predictive value of each of these factors was relatively low (29.6-38.6%), the rate of failure of ESWL was 85.0% when patients had all four factors. We therefore recommend an early change in treatment modality to TUL for patients with ureteral calculi when all four risk factors listed above are present. Answer: Pre-shock wave lithotripsy (SWL) stenting is not necessary for ureteral stones with moderate or severe hydronephrosis. A prospective, randomized clinical trial evaluated the outcome of ureteral stents for solitary ureteral stones 2 cm or less in moderately or severely obstructed systems using SWL. The study found that pretreatment stenting provided no advantage over in situ SWL for significantly obstructing ureteral calculi. SWL was considered reasonable initial therapy for ureteral stones 2 cm or less that cause moderate or severe hydronephrosis, as the stone-free rate was not affected by ureteral stent insertion (84.9% and 91.4% in stented and non-stented groups, respectively) (PUBMED:17070256). Furthermore, the presence or degree of hydronephrosis did not significantly affect the stone disintegration efficacy of SWL. A systematic review and meta-analysis indicated that SWL procedures performed with hydronephrosis offered stone disintegration efficacy similar to those performed without hydronephrosis. However, it seemed preferable to perform SWL without severe to moderate hydronephrosis or stone-induced hydronephrosis, with artificial hydronephrosis, or without hydronephrosis when treating proximal ureteral stones (PUBMED:31606779). Additionally, another study showed that the degree of hydronephrosis was a significant predictive factor for stone-free status 3 months after SWL. For stones ≤ 10 mm, the stone-free rate decreased from 80% in patients with mild hydronephrosis to 56.4% in those with moderate to severe hydronephrosis. For stones > 10 mm, the stone-free rate decreased further, from 65.2% in patients with mild hydronephrosis to 33.3% in those with moderate to severe hydronephrosis (PUBMED:19181581). In conclusion, based on the evidence from these studies, pre-SWL stenting does not appear to be necessary for ureteral stones with moderate or severe hydronephrosis, and the presence of hydronephrosis does not significantly impact the overall efficacy of SWL treatment, although moderate to severe hydronephrosis may lower stone-free rates and prolong clearance time, particularly for larger stones.
Instruction: Are cholesterol and depression inversely related? Abstracts: abstract_id: PUBMED:18787911 Are cholesterol and depression inversely related? A meta-analysis of the association between two cardiac risk factors. Background: Cholesterol and depression are both cardiac risk factors, but the direction and magnitude of the association between these risk factors are unclear. Purpose: Meta-analytic techniques were used to evaluate the associations among total, high-, and low-density cholesterol (TC, HDL, LDL, respectively) and depression in empirical studies. Methods: PubMed, CINAHL, PsycINFO, and manual search strategies were used to identify descriptive studies reporting associations among TC, HDL, LDL, and depression; 30 reports were found for TC, 16 for HDL, and 11 for LDL. Effect sizes were computed and aggregated in accord with Hedges and Olkin's (Statistical methods for meta-analysis. New York: Academic Press; 1985) procedures. Results: Higher TC was associated with lower levels of depression, d = -0.29; this association was substantially larger among medication-free samples (d = -0.51). An inverse, non-significant association was observed between LDL and depression (d = -0.17). High HDL was related to higher levels of depression, especially in women (d = 0.20). Conclusions: TC and depression were inversely related, with the strongest associations in medically naïve samples, which is noteworthy because such samples should involve fewer confounds. One clinical implication is that the lipids of patients treated for depression should be monitored. abstract_id: PUBMED:2339138 Age-associated reduction of prostacyclin and thromboxane synthesis is inversely related to plasma cholesterol levels: modulation by dietary cholesterol supplementation. Age-related abnormalities of vasoactive eicosanoid synthesis have been reported. We observed an age-associated reduction of vascular prostacyclin production and thrombin-stimulated thromboxane A2 production in blood. Production of these eicosanoids was inversely related to plasma cholesterol levels. However, there were no such relationships in rats supplemented with cholesterol. Dietary cholesterol supplementation induced a reduction of the thromboxane A2/prostacyclin ratio regardless of age. These results suggest that age-associated changes of blood cholesterol levels are closely linked with vasoactive eicosanoid synthesis and that excessive consumption of cholesterol may induce a compensatory reaction by reducing the thromboxane A2/prostacyclin ratio. abstract_id: PUBMED:25163727 HDL-cholesterol concentrations are inversely associated with Edinburgh Postnatal Depression Scale scores during pregnancy: results from a Brazilian cohort study. Serum lipids have been associated with depression in the adult population; however, this association during pregnancy remains unclear. The aim of this study was to evaluate the association between serum lipids and depressive symptom scores during pregnancy. A prospective cohort of 238 pregnant women was followed at the 5th-13th, 20th-26th and 30th-36th weeks of gestation. Depressive symptoms were assessed using the Edinburgh Postnatal Depression Scale (EPDS). Serum concentrations (mg/dL) of triglycerides, total cholesterol, and low- and high-density lipoproteins (LDL-c; HDL-c) were the main exposures.
Marital status (married/single), physical activity (active or very active/low or very low active), unplanned pregnancy (no/yes), pre-pregnancy BMI (<25/≥25 kg/m2), generalized anxiety disorder (no/yes) and current suicidal ideation (no/yes) were considered as potential confounders. Analyses were performed using linear mixed-effects models. The results showed that the EPDS mean score (95%CI) decreased with time during pregnancy trimesters [1st: 8.89 (95%CI = 8.28-9.51), 2nd: 7.32 (95%CI = 6.67-7.97) and 3rd: 7.08 (95%CI = 6.41-7.74)]. Suicidal ideation frequency at baseline was 18%. HDL-c concentrations were inversely associated with changes in EPDS score (β = -0.080, 95%CI = -0.157 to -0.002), while low or very low active women (β = 1.288, 95%CI = 0.630-1.946), with single marital status (β = 1.348, 95%CI = 0.163-2.534), unplanned pregnancy (β = 1.922, 95%CI = 0.714-3.131), generalized anxiety disorder (β = 2.139, 95%CI = 0.410-3.868) and current suicidal ideation (β = 1.927, 95%CI = 0.596-3.258) tended to have higher EPDS scores. No relationship was observed between other lipids and EPDS scores. HDL-c concentration was inversely associated with changes in depressive symptom scores during pregnancy after adjusting for socio-economic, demographic, behavioral, nutritional, biochemical and mental health disorders. abstract_id: PUBMED:35369876 Low cholesterol is not associated with depression: data from the 2005-2018 National Health and Nutrition Examination Survey. Background: Although high serum cholesterol is widely recognized as a major risk factor for heart disease, the health effects of low cholesterol are less clear. Several studies have found a correlation between low cholesterol and depression, but the results are inconsistent. Methods: Data from the National Health and Nutrition Examination Survey (NHANES) 2005-2018 were utilized in this cross-sectional study. The analysis of the relationship between cholesterol and depression was performed at three levels: low total cholesterol, low high-density lipoprotein (HDL) cholesterol and low low-density lipoprotein (LDL) cholesterol. The inclusion criteria were as follows: (1) people with low (<4.14 mmol/L) or normal (4.14-5.16 mmol/L) total cholesterol for Sample 1; people with low (<1 mmol/L) or normal (≥1 mmol/L) HDL cholesterol levels for Sample 2; and people with low (<1.8 mmol/L) or normal (1.8-3.4 mmol/L) LDL cholesterol levels for Sample 3; and (2) people who completed the Patient Health Questionnaire-9 depression scale. Age, sex, educational level, race, marital status, self-rated health, alcohol status, smoking status, body mass index (BMI), poverty income ratio, physical function, comorbidities, and prescription use were considered potential confounders. The missing data were handled by multiple imputation by chained equations. Logistic regression was used to assess the relationship between low cholesterol and depression. Results: After controlling for potential confounding factors in the multivariate logistic regression, no association was observed between depression and low total cholesterol (OR=1.0, 95% CI: 0.9-1.2), low LDL cholesterol (OR=1.0, 95% CI: 0.8-1.4), or low HDL cholesterol (OR=0.9, 95% CI: 0.8-1.1). The results stratified by sex also showed no association between low total cholesterol, low LDL cholesterol, low HDL cholesterol and depression in either men or women. Conclusion: This population-based study did not support the assumption that low cholesterol was related to a higher risk of depression.
This information may contribute to the debate on how to manage people with low cholesterol in clinical practice. abstract_id: PUBMED:37069633 Association of remnant cholesterol with depression among US adults. Background: Remnant cholesterol is receiving increasing attention because of its association with various diseases. However, there have been no studies on remnant cholesterol levels and depression. Methods: A cross-sectional analysis was performed based on the National Health and Nutrition Examination Survey (NHANES) 2005-2016. Depression was assessed using a Patient Health Questionnaire (PHQ-9). Fasting remnant cholesterol was calculated as the total cholesterol minus high-density lipoprotein cholesterol (HDL-C) minus low-density lipoprotein cholesterol (LDL-C). Logistic regression analysis with sampling weights was used to examine the association between remnant cholesterol concentration and depression. Results: Among 8,263 adults enrolled in this study (weighted mean age, 45.65 years), 5.88% (weighted percentage) had depression. Compared to the participants without depression, those with depression had a higher concentration of remnant cholesterol (weighted mean, 26.13 vs. 23.05, P < 0.001). There was a significant positive relationship between remnant cholesterol concentration and depression, and the multivariable-adjusted OR (95% CI) was 1.49 (1.02-2.17). Among the subgroup analyses, remnant cholesterol concentration was positively associated with depression among participants less than 60 years (OR, 1.62; 95% CI, 1.09-2.42), male (OR, 2.02; 95% CI, 1.01-4.05), BMI under 30 (OR, 1.83; 95% CI, 1.14-2.96), and those with diabetes (OR, 3.88; 95% CI, 1.43-10.49). Conclusions: Remnant cholesterol concentration positively correlated with depression, suggesting that a focus on remnant cholesterol may be useful in the study of depression. abstract_id: PUBMED:10367605 Relations of trait depression and anxiety to low lipid and lipoprotein concentrations in healthy young adult women. Objective: Recent evidence suggests that naturally occurring low cholesterol concentrations (<4.14 mmol/liter) are associated with depression as well as poor psychological health. For the most part, these associations have been observed in men. The current study assessed the relation of naturally occurring low lipid and lipoprotein concentrations to trait measures of depression and anxiety in 121 healthy young adult women. Methods: Fasting lipid samples were collected at the same time as health history. Trait depression and anxiety were assessed using the Neuroticism, Extraversion, Openness-Personality Inventory (NEO-PI) depression subscale and Spielberger's Trait Personality Inventory (STPI) anxiety subscale. Analyses were conducted using both univariate and multivariate procedures. Results: NEO depression was inversely associated with total cholesterol (p = .027), triglycerides (p = .012), and the ratio of total cholesterol to high-density lipoprotein cholesterol (p = .059). Similarly, STPI anxiety was inversely associated with total cholesterol (p = .002), low-density lipoprotein cholesterol (p = .016), triglycerides (p = .024), and ratio of total cholesterol to high-density lipoprotein cholesterol (p = .075). These associations were significant after adjustment for age, body mass index, physical activity, oral contraceptive use, and hostility. Neither depression nor anxiety was associated with high-density lipoprotein cholesterol.
Univariate analyses indicated that women with low total cholesterol concentrations (<4.14 mmol/liter), relative to those with moderate to high cholesterol levels, were more likely to have higher scores on the NEO depression subscale (27 of 69 (39%) vs. 10 of 52 (19%)) and STPI anxiety subscale (24 of 69 (35%) vs. 11 of 52 (21%)). Conclusions: In healthy young adult women, low lipid and lipoprotein concentrations are inversely associated with trait measures of depression and anxiety. These findings are independent of age, body mass index, physical activity, and other factors known to influence lipid concentrations. abstract_id: PUBMED:7566558 Relationship between cholesterol levels and depression in the elderly. The aim of our study was to evaluate the possible association between lower plasma cholesterol and depression in the elderly. 140 subjects over 65 years old of both sexes were enrolled, of which 60 were affected by depression (DSM-III-R and Hamilton test) and 80 composed a control group homogeneous with the depressed group for sex and age. Plasma cholesterol, HDL-cholesterol (HDL-C), LDL-cholesterol (LDL-C) and triglycerides were measured. A statistically significant difference in cholesterol and LDL-C (p < 0.001) was noted in the total group, in both males and females. Such modifications were independent of sex. In the group with lower cholesterol (cut-off ≤ 160 mg/dl), a prevalence of depression three times greater than in subjects with higher cholesterol was found. In conclusion, the authors recommended a prudent use of lipid-lowering medications in the elderly because of their uncertain benefits. abstract_id: PUBMED:25522992 Beyond the genetics of HDL: why is HDL cholesterol inversely related to cardiovascular disease? There is unequivocal evidence that high-density lipoprotein (HDL) cholesterol levels in plasma are inversely associated with the risk of cardiovascular disease (CVD). Studies of families with inherited HDL disorders and genetic association studies in general (and patient) population samples have identified a large number of factors that control HDL cholesterol levels. However, they have not resolved why HDL cholesterol and CVD are inversely related. A growing body of evidence from nongenetic studies shows that HDL in patients at increased risk of CVD has lost its protective properties and that increasing the cholesterol content of HDL does not result in the desired effects. Hopefully, these insights can help improve strategies to successfully intervene in HDL metabolism. It is clear that there is a need to revisit the HDL hypothesis in an unbiased manner. True insights into the molecular mechanisms that regulate plasma HDL cholesterol and triglycerides or control HDL function could provide the handholds that are needed to develop treatment for, e.g., type 2 diabetes and the metabolic syndrome. Especially genome-wide association studies have provided many candidate genes for such studies. In this review we have tried to cover the main molecular studies that have been produced over the past few years. It is clear that we are only at the very start of understanding how the newly identified factors may control HDL metabolism. In addition, the most recent findings underscore the intricate relations between HDL, triglyceride, and glucose metabolism indicating that these parameters need to be studied simultaneously. abstract_id: PUBMED:8879460 Increases in depression after cholesterol-lowering drug treatment.
To investigate the possibility that increases in depressive symptoms might occur in patients who have undergone cholesterol-lowering interventions, the authors administered the Center for Epidemiological Studies-Depression scale before and after cholesterol lowering to 6 men who were referred to a lipid clinic. All of the patients' cholesterol levels were reduced after the 6-week intervention, and 4 of the patients' depression scores increased; scores of 2 of the 4 met the criteria for mild clinical depression. Further study of possible links among low cholesterol, depressive symptoms, and serotonergic activity is needed. abstract_id: PUBMED:31046446 Nutritional status in hemodialysis patients is inversely related to depression and introversion. Although hemodialysis (HD) is a life-sustaining treatment for the majority of patients with end-stage renal disease, it may adversely affect their psychological status. Depression is highly prevalent among these patients, and it is associated with malnutrition, morbidity and mortality. Recent studies have demonstrated that depression is positively associated with neuroticism and introversion in HD patients. The aim of the present study was to explore the relationship between depression, personality traits (extraversion-introversion, neuroticism, psychoticism) and nutritional status among HD patients. Fifty-two HD patients were assessed via questionnaires for depression (CES-D) and personality traits (EYSENCK, EPQ). Nutritional assessment was conducted through bioelectrical impedance analysis (BIA) and anthropometric measurements (height, post-dialysis weight). The study revealed novel significant correlations. Both post-dialysis weight and phase angle were negatively related to introversion (r = -0.314, p < 0.05 and r = -0.542, p < 0.01, respectively) and depression (r = -0.456, p < 0.01 and r = -0.467, p < 0.01, respectively). This study demonstrates that both depression and introversion are inversely related to adequate nourishment in HD patients and suggests that personality plays an important role in the nutritional status of these patients. Answer: The relationship between cholesterol and depression appears to be complex and somewhat inconsistent across different studies. Some research suggests that there is an inverse relationship between cholesterol levels and depression. For instance, a meta-analysis found that higher total cholesterol (TC) was associated with lower levels of depression, particularly in medication-free samples, suggesting that TC and depression may be inversely related (PUBMED:18787911). Similarly, HDL-cholesterol concentrations were inversely associated with Edinburgh Postnatal Depression Scale scores during pregnancy in a Brazilian cohort study, indicating that higher HDL-cholesterol may be linked to lower depression scores (PUBMED:25163727). Another study found that in healthy young adult women, low lipid and lipoprotein concentrations were inversely associated with trait measures of depression and anxiety (PUBMED:10367605). However, other studies have reported different findings. For example, a study using data from the National Health and Nutrition Examination Survey (NHANES) 2005-2018 found no association between low cholesterol (including low total cholesterol, low HDL cholesterol, and low LDL cholesterol) and depression (PUBMED:35369876).
Another study found that remnant cholesterol concentration was positively correlated with depression, suggesting a direct rather than inverse relationship (PUBMED:37069633). Additionally, some studies have indicated that the relationship between cholesterol and depression may be influenced by other factors such as age, sex, body mass index, and dietary habits. For example, an age-associated reduction of prostacyclin and thromboxane synthesis was inversely related to plasma cholesterol levels, but this relationship was not observed in rats supplemented with cholesterol (PUBMED:2339138). In the elderly, lower plasma cholesterol was associated with depression, suggesting a potential inverse relationship in this population (PUBMED:7566558). Overall, while there is some evidence to suggest an inverse relationship between cholesterol and depression, the findings are not consistent across all studies, and the relationship may be influenced by various confounding factors. Further research is needed to fully understand the nature of the association between cholesterol levels and depression.
Instruction: Is transobturator suburethral sling effective for treating female urodynamic stress incontinence with low maximal urethral closure pressure? Abstracts: abstract_id: PUBMED:21482369 Is transobturator suburethral sling effective for treating female urodynamic stress incontinence with low maximal urethral closure pressure? Objective: To assess retrospectively the efficacy and safety of MONARC (American Medical Systems) transobturator suburethral slings in the treatment of female urodynamic stress incontinence with and without low maximal urethral closure pressure (MUCP). Materials And Methods: Seventy-three women with urodynamic stress incontinence, fitted with the transobturator suburethral sling at a medical center in central Taiwan, participated in the study. Objective postoperative evaluations, including a 1-hour pad test, cough stress test, uroflowmetry, and residual urine volume, were conducted 6 months after the operation. Subjective outcomes were evaluated by telephone interview. Charts were reviewed for perioperative complications, urinary retention, and requirements for postoperative medication for symptoms of urgency. The mean follow-up was 48 months. Results: The objective cure rate was 80.8% (dry pad test and negative stress test), 82.4% for MUCP less than 30 cmH2O, and 80.4% for MUCP greater than 30 cmH2O (p = 1.000). Mean pad weight gain changed from 25.8 g preoperatively to 1.8 g postoperatively (p < 0.05). There was no significant change in urinary flow rate or residual volume. Subjectively, 98.6% of subjects experienced complete improvement; only one patient found no improvement. Very few perioperative complications occurred. Immediate postoperative difficulty in voiding occurred in 6.8% of patients. Postoperative de novo urgency was 2.7%. Conclusions: The MONARC transobturator suburethral sling is a safe and highly effective treatment for stress urinary incontinence even in women with low MUCP at a mean follow-up of 48 months. Evaluation of the outcomes after a longer follow-up period is necessary. abstract_id: PUBMED:26927242 Is single incision midurethral sling effective in patients with low maximal urethral closure pressure? Objective: To ascertain whether low preoperative maximal urethral closure pressure (MUCP) affects the outcomes of single incision sling (SIS) procedures and changes MUCP values postsurgery. Material And Methods: There were 112 (MUCP ≥ 40 cmH2O, n = 88; MUCP < 40 cmH2O, n = 24) consecutive women with urodynamic stress incontinence who had undergone SIS (MiniArc) procedures included in this study. The threshold of 40 cmH2O was used since it has been shown to be a significant risk factor for failed incontinence surgery. Clinical outcomes were assessed by the cough stress test, the 1-hour pad test, the Incontinence Impact Questionnaire-Short Form, the Urogenital Distress Inventory six-item questionnaire, the Sexual Questionnaire-SF, and postoperative changes in the urodynamic parameters. A comparison of the 1-year follow-up data is presented. Results: Three months postsurgery, a significant decrease was observed in the 1-hour pad test, from 20.6 g preoperatively to 0.73 g postoperatively (p < 0.001). The objective cure rate was 82.1% without any significant differences between the two groups (p = 0.202).
At 3 months and 1 year after surgery, significant decreases in the Urogenital Distress Inventory six-item questionnaire and Incontinence Impact Questionnaire-Short Form scores and increases in the Sexual Questionnaire-SF scores were observed in both groups, without any significant differences between the two groups. No statistically significant difference in the subjective cure rate was noted between the two groups at the 3-month and 18.4-month follow-ups. The postoperative MUCP was significantly decreased in the MUCP ≥ 40 group (p < 0.05) while significantly increased in the MUCP < 40 group (p = 0.006). Conclusions: These results suggest that SIS is a safe and highly effective treatment for urodynamic stress incontinence even in women with low MUCP at a mean follow-up of 18.4 months. Evaluation of the outcomes with more subjects after a longer follow-up period is necessary. abstract_id: PUBMED:2231562 Stress incontinence and low urethral closure pressure. Correlation of preoperative urethral hypermobility with successful suburethral sling procedures. Forty-eight women with genuine stress incontinence and low urethral closure pressure were treated with a suburethral sling procedure using polytetrafluoroethylene. All patients underwent a preoperative clinical evaluation and multichannel urodynamic testing. The clinical examination included a "Q-tip" test to determine the presence or absence of urethral hypermobility. Urethral hypermobility was defined as a maximal angle change of greater than or equal to 30 degrees from the horizontal, measured during straining or coughing in the lithotomy position. Thirty-four patients underwent repeat multichannel urodynamic testing three months postoperatively to determine the objective surgical success. Ninety-three percent of patients (27/29) with a positive preoperative Q-tip test were cured. Of patients with a negative preoperative Q-tip test, only 20% (1/5) were cured. Preoperative urethral hypermobility was a good prognostic indicator of operative success when a suburethral sling procedure was used to treat genuine stress incontinence and low urethral closure pressure. abstract_id: PUBMED:29956423 Comparative study of transobturator sling with and without concomitant prolapse surgery for female urodynamic stress incontinence. Aim: To demonstrate the clinical and urodynamic outcomes of transobturator sling (TOT) with or without concomitant prolapse surgery for the treatment of urodynamic stress incontinence (USI). Methods: We recruited 143 consecutive patients diagnosed with USI, who received outside-in TOT in a university hospital. Preoperative and postoperative examinations were performed using structured urogynecological questionnaires, pelvic organ prolapse quantification examination and urodynamic testing. Patient demographics, surgical and urodynamic results were compared between TOT with and without concomitant prolapse surgery. Results: The mean follow-up was 30.1 months (range 12-57). Postoperative stress urinary incontinence (SUI) occurred in 10 (7%) patients at 3 months and 10 (7%) patients at 12 months postoperatively. There was no significant difference in the prevalence of postoperative SUI between the TOT-only group and the TOT combined with pelvic surgery group. Preoperative urodynamic results demonstrated that the TOT-only group (n = 96) had a higher maximal flow rate and a lower residual urine amount when compared to TOT combined with pelvic surgery (n = 47).
A significant decrease in maximal urethral closure pressure (MUCP) was found in the 119 patients who received postoperative urodynamic examination. In comparison with preoperative urodynamic data, postoperative urodynamic results showed a significant decrease in MUCP in the TOT combined with prolapse surgery group, but no significant urodynamic changes in the TOT-only group. Conclusion: Both TOT and TOT combined with prolapse surgery can be effective in correcting SUI in patients with USI 12 months postoperatively, with significant changes in MUCP. abstract_id: PUBMED:36427964 Comparison of two outside-in transobturator midurethral slings in the treatment of female urodynamic stress incontinence. Objective: To explore the difference between two brands of outside-in transobturator midurethral sling (TOT) for urodynamic stress incontinence (USI). Materials And Methods: Women who underwent an outside-in TOT procedure with either Monarc or Obtryx were retrospectively reviewed. Data of women with available information at baseline and the postoperative 12-month follow-up were analyzed. The analyzed data included a standardized interview, pelvic examination, as well as sling location and sling tension explored by introital four-dimensional ultrasound. Sling position was explored through the distances between the sling center and the caudal margin of the pubic symphysis (SPd) as well as the sling percentile (SP) along the urethral length as a percentage in the midsagittal plane. SPd was also used to explore sling tension. Clinical outcomes were compared between the two groups. Sling location and sling tension were compared in success cases between the two groups. Results: There were 138 women in the Monarc group and 140 women in the Obtryx group. Rates of stress urinary continence and adverse events were not statistically different after the two TOT procedures. SPd was similar between both procedures. Obtryx was located more ventrally than Monarc, indicated by a smaller SP during resting (41.6% vs 58.5%, P < 0.001), straining (38.0% vs 54.4%, P < 0.001), and coughing (39.8% vs 48.8%, P < 0.001). Conclusion: At the 12-month assessment, the two outside-in TOT procedures were not significantly different in terms of clinical results and sling tension, while the Obtryx sling was located more ventrally than the Monarc. abstract_id: PUBMED:3353056 A suburethral sling procedure with polytetrafluoroethylene for the treatment of genuine stress incontinence in patients with low urethral closure pressure. One indication for suburethral sling procedures has been recurrent genuine stress incontinence after previous incontinence surgery. Patients with low urethral closure pressures (20 cm H2O or less) in association with genuine stress incontinence are at particular risk for failure of standard anti-incontinence procedures. Urodynamic evaluation was used to select 17 patients with genuine stress incontinence and low urethral closure pressures for surgical treatment with a sling procedure using polytetrafluoroethylene. The technique of the procedure, cure rate, and postoperative complications were assessed. An 85% subjective and objective cure rate was found on urodynamic testing three months postoperatively. Complications included wound seroma, urinary tract infection, and urinary retention.
Introduction: The purpose of this study was to compare transobturator tape (MONARC) with tension-free vaginal tape in patients with borderline low maximum urethral closure pressure. Study Design: Historical cohort analysis of 3-month outcomes in 145 subjects (MONARC = 85; tension-free vaginal tape = 60). A cut-off point of 42 cm H2O for preoperative maximum urethral closure pressure was identified as a predictor of success in the entire cohort. The cohort was stratified by sling type and analyzed. Outcome variables included urodynamic stress incontinence, urethral pressure profiles, subjective stress incontinence symptoms, and complications. Results: The relative risk of postoperative urodynamic stress incontinence 3 months after surgery in patients with a preoperative maximum urethral closure pressure of 42 cm H2O or less was 5.89 (95% confidence interval, 1.02 to 33.90) when we compared MONARC with tension-free vaginal tape. Subjects in the MONARC and tension-free vaginal tape groups did not differ significantly in baseline characteristics. We defined subjects as failures if they demonstrated postoperative objective stress incontinence on multichannel urodynamic testing. Conclusion: In subjects with a maximum urethral closure pressure of 42 cm H2O or less, the MONARC was nearly 6 times more likely to fail than tension-free vaginal tape at 3 months after surgery. Long-term follow-up and randomized controlled trials are needed. abstract_id: PUBMED:30874306 Autologous transobturator sling as an alternative therapy for stress urinary incontinence. Objective: To evaluate efficacy and outcomes of the autologous transobturator midurethral sling for treatment of stress urinary incontinence (SUI). Methods: In a prospective cohort study, an autologous transobturator mid-urethral sling was used to treat SUI among women attending a university hospital in Montevideo, Uruguay, from June 2017 to July 2018. In the first phase, autologous tissue from the rectus abdominis fascia was collected. In the second phase, the midurethral sling was placed via the transobturator approach. Outcomes were measured every 3 months by the International Consultation on Incontinence Questionnaire Female Lower Urinary Tract Symptoms (ICIQ-FLUTS) Score. Preoperative and postoperative results were compared by Wilcoxon test. Results: Eighteen women with a median age of 51 years were enrolled. The median follow-up was 9 months (range 6-15 months). Overall, 17 women showed symptomatic improvement after the procedure. In a comparison of preoperative versus postoperative ICIQ-FLUTS questionnaires, improvement in the incontinence subscore was observed at 3 (P < 0.001), 6 (P < 0.001), and 12 (P = 0.008) months. No severe complications were observed. Conclusion: Use of an autologous transobturator urethral sling was found to be technically feasible and safe for SUI, with good short-term outcomes. Longer follow-up and larger series are needed to validate the procedure. abstract_id: PUBMED:21683412 Baseline urodynamic predictors of treatment failure 1 year after mid urethral sling surgery. Purpose: We determined whether baseline urodynamic study variables predict failure after mid urethral sling surgery. Materials And Methods: Preoperative urodynamic study variables and postoperative continence status were analyzed in women participating in a randomized trial comparing retropubic to transobturator mid urethral sling.
Objective failure was defined by a positive standardized stress test, 15 ml or greater on a 24-hour pad test, or re-treatment for stress urinary incontinence. Subjective failure criteria were self-reported stress symptoms, leakage on a 3-day diary or re-treatment for stress urinary incontinence. Logistic regression was used to assess associations between covariates and failure, controlling for treatment group and clinical variables. Receiver operating characteristic curves were constructed for relationships between objective failure and measures of urethral function. Results: Objective continence outcomes were available at 12 months for 565 of 597 (95%) women. Treatment failed in 260 women (245 by subjective criteria, 124 by objective criteria). No urodynamic variable was significantly associated with subjective failure on multivariate analysis. Valsalva leak point pressure, maximum urethral closure pressure and urodynamic stress incontinence were the only urodynamic variables consistently associated with objective failure on multivariate analysis. No specific cut point was determined for predicting failure for Valsalva leak point pressure or maximum urethral closure pressure by ROC. The lowest quartile (Valsalva leak point pressure less than 86 cm H2O, maximum urethral closure pressure less than 45 cm H2O) conferred an almost 2-fold increased odds of objective failure regardless of sling route (OR 2.23, 1.20-4.14 for Valsalva leak point pressure and OR 1.88, 1.04-3.41 for maximum urethral closure pressure). Conclusions: Women with a Valsalva leak point pressure or maximum urethral closure pressure in the lowest quartile are nearly 2-fold more likely to experience stress urinary incontinence 1 year after transobturator or retropubic mid urethral sling. abstract_id: PUBMED:34003308 Impact of intrinsic sphincter deficiency on mid-urethral sling outcomes. Introduction And Hypothesis: Our primary objective was to study outcomes of patients with intrinsic sphincter deficiency (ISD) following mid-urethral slings (MUS) at 1 year. Our secondary objective was to delineate factors affecting success in these patients. Methods: Six hundred eighty-eight patients who had MUS between January 2004 and April 2017 were reviewed retrospectively; 48 women were preoperatively diagnosed with ISD. All completed urodynamic studies and validated quality-of-life (QOL) questionnaires at baseline and 1 year. Primary outcomes were objective and subjective cure of stress incontinence, defined as no involuntary urine leakage during filling cystometry, a 1-h pad test < 2 g and a negative response to Urogenital Distress Inventory-6 Question 3. Ultrasound was performed to determine tape position, urethral mobility and kinking at 1 year. Results: Women with ISD had significantly lower objective and subjective cure rates of 52.1% and 47.9%, respectively, compared to an overall of 88.2% and 85.9%. QOL scores significantly improved in those with successful surgeries. The sling type did not make a difference. Multivariate logistic regression identified reduced urethral mobility [OR 2.11 (1.24-3.75)], lower maximum urethral closure pressure (MUCP) [OR 1.61 (1.05-3.41)] and tape position [OR 3.12 (1.41-8.71)] to be associated with higher odds of failed slings for women with ISD. Conclusions: Although there is good overall success in women undergoing MUS, those with ISD have significantly lower cure rates at 1 year. Factors related to failure include reduced urethral mobility, low MUCP and relative tape position further away from the bladder neck.
Optimal management of patients with ISD and reduced urethral mobility remains challenging. Answer: Yes, the transobturator suburethral sling is effective for treating female urodynamic stress incontinence with low maximal urethral closure pressure (MUCP). Studies have shown that the MONARC transobturator suburethral sling is a safe and highly effective treatment for stress urinary incontinence even in women with low MUCP, with an objective cure rate of 80.8% and a subjective improvement rate of 98.6% at a mean follow-up of 48 months (PUBMED:21482369). Additionally, single incision sling (SIS) procedures, such as the MiniArc, have also been found to be safe and highly effective for urodynamic stress incontinence in women with low MUCP, with an objective cure rate of 82.1% at a mean follow-up of 18.4 months (PUBMED:26927242). Furthermore, a study on the transobturator sling (TOT) with or without concomitant prolapse surgery demonstrated effective correction of stress urinary incontinence (SUI) in patients with urodynamic stress incontinence (USI) 12 months postoperatively, with significant changes in MUCP (PUBMED:29956423). However, some evidence calls for caution in this group: in subjects with a maximum urethral closure pressure of 42 cm H2O or less, the transobturator MONARC was nearly six times more likely to fail than tension-free vaginal tape at 3 months (PUBMED:17014810), and women with a maximum urethral closure pressure in the lowest quartile had almost 2-fold higher odds of objective failure 1 year after midurethral sling surgery (PUBMED:21683412). On balance, these findings suggest that transobturator suburethral sling procedures are a viable option for women with low MUCP suffering from urodynamic stress incontinence, provided patients are counselled about the higher failure risk reported in some studies.
Instruction: Systematic searching for theory to inform systematic reviews: is it feasible? Abstracts: abstract_id: PUBMED:26095232 Systematic searching for theory to inform systematic reviews: is it feasible? Is it desirable? Background: In recognising the potential value of theory in understanding how interventions work comes a challenge - how to make identification of theory less haphazard? Objectives: To explore the feasibility of systematic identification of theory. Method: We searched PubMed for published reviews (1998-2012) that had explicitly sought to identify theory. Systematic searching may be characterised by a structured question, methodological filters and an itemised search procedure. We constructed a template (BeHEMoTh - Behaviour of interest; Health context; Exclusions; Models or Theories) for use when systematically identifying theory. The authors tested the template within two systematic reviews. Results: Of 34 systematic reviews, only 12 reviews (35%) reported a method for identifying theory. Nineteen did not specify how they identified studies containing theory. Data were unavailable for three reviews. Candidate terms include concept(s)/conceptual, framework(s), model(s), and theory/theories/theoretical. Information professionals must overcome inadequate reporting and the use of theory out of context. The review team faces an additional concern in lack of 'theory fidelity'. Conclusions: Based on experience with two systematic reviews, the BeHEMoTh template and procedure offers a feasible and useful approach for identification of theory. Applications include realist synthesis, framework synthesis or review of complex interventions. The procedure requires rigorous evaluation. abstract_id: PUBMED:37928121 How do search systems impact systematic searching? A qualitative study. Objective: Systematic reviews and other evidence synthesis projects require systematic search methods. Search systems require several essential attributes to support systematic searching; however, many systems used in evidence synthesis fail to meet one or more of these requirements. I undertook a qualitative study to examine the effects of these limitations on systematic searching and how searchers select information sources for evidence synthesis projects. Methods: Qualitative data were collected from interviews with twelve systematic searchers. Data were analyzed using reflexive thematic analysis. Results: I used thematic analysis to identify two key themes relating to search systems: systems shape search processes, and systematic searching occurs within the information market. Many systems required for systematic reviews, in particular sources of unpublished studies, are not designed for systematic searching. Participants described various workarounds for the limitations they encounter in these systems. Economic factors influence searchers' selection of sources to search, as well as the degree to which vendors prioritize these users. Conclusion: Interviews with systematic searchers suggest priorities for improving search systems, and barriers to improvement that must be overcome. Vendors must understand the unique requirements of systematic searching and recognize systematic searchers as a distinct group of users. Better interfaces and improved functionality will result in more efficient evidence synthesis. abstract_id: PUBMED:29284538 Use of programme theory to understand the differential effects of interventions across socio-economic groups in systematic reviews-a systematic methodology review. 
Background: Systematic review guidance recommends the use of programme theory to inform considerations of if and how healthcare interventions may work differently across socio-economic status (SES) groups. This study aimed to address the lack of detail on how reviewers operationalise this in practice. Methods: A methodological systematic review was undertaken to assess if, how and the extent to which systematic reviewers operationalise the guidance on the use of programme theory in considerations of socio-economic inequalities in health. Multiple databases were searched from January 2013 to May 2016. Studies were included if they were systematic reviews assessing the effectiveness of an intervention and included data on SES. Two reviewers independently screened all studies, undertook quality assessment and extracted data. A narrative approach to synthesis was adopted. Results: A total of 37 systematic reviews were included, 10 of which were explicit in the use of terminology for 'programme theory'. Twenty-nine studies used programme theory to inform both their a priori assumptions and explain their review findings. Of these, 22 incorporated considerations of both what and how interventions do/do not work in SES groups to both predict and explain their review findings. Thirteen studies acknowledged 24 unique theoretical references to support their assumptions of what or how interventions may have different effects in SES groups. Most reviewers used supplementary evidence to support their considerations of differential effectiveness. The majority of authors outlined a programme theory in the "Introduction" and "Discussion" sections of the review to inform their assumptions or provide explanations of what or how interventions may result in differential effects within or across SES groups. About a third of reviews used programme theory to inform the review analysis and/or synthesis. Few authors used programme theory to inform their inclusion criteria, data extraction or quality assessment. Twenty-one studies tested their a priori programme theory. Conclusions: The use of programme theory to inform considerations of if, what and how interventions lead to differential effects on health in different SES groups in the systematic review process is not yet widely adopted, is used implicitly, is often fragmented and is not implemented in a systematic way. abstract_id: PUBMED:27846867 Exploring issues in the conduct of website searching and other online sources for systematic reviews: how can we be systematic? Websites and online resources outside academic bibliographic databases can be significant sources for identifying literature, though there are challenges in searching and managing the results. These are pertinent to systematic reviews that are underpinned by principles of transparency, accountability and reproducibility. We consider how the conduct of searching these resources can be compatible with the principles of a systematic search. We present an approach to address some of the challenges. This is particularly relevant when websites are relied upon to identify important literature for a review. We recommend considering the process as three stages and having a considered rationale and sufficient recordkeeping at each stage that balances transparency with practicality of purpose. Advances in technology and recommendations for website providers are briefly discussed. abstract_id: PUBMED:30793445 Examining the theory-effectiveness hypothesis: A systematic review of systematic reviews. 
Purpose: Health interventions based on theory may be more effective than those that are not. This review of reviews synthesizes all published randomized controlled trial (RCT) meta-analytic evidence from the last decade to examine whether theory-based interventions were found to be associated with more effective adult health behaviour change interventions. Methods: Systematic reviews including meta-analyses were identified by searching Medline, CINAHL, PsycINFO, and CDSR. A narrative synthesis was used to summarize and analyse the evidence. Only reviews including RCTs of health behaviour change interventions with adults aged 18+ published from 2007 to 2017 were included. Results: Of 8,659 articles, nine systematic reviews met inclusion criteria. The majority of reviews (n = 8) suggested no increased effectiveness for theory-based compared to non-theory-based interventions for outcomes relating to health behaviour. Less than half of the RCTs included in the reviews reported the use of theory (85/183). Two reviews suggested interventions based on control theory, motivational interviewing, or self-determination theory were associated with greater effectiveness for physical activity and/or dietary interventions and outcomes. Methodological and reporting issues limit the conclusions. Conclusions: Theory-based interventions as currently operationalized in systematic reviews were not found to be more effective than non-theory-based interventions. Methodological and reporting issues at study and review level may not reflect the true utility of theory use within health behaviour interventions. The promotion of theory use may benefit from using a multifaceted argument, rather than a narrow focus on increased effectiveness. Statement of contribution: What is already known on this subject? Theory use is regularly promoted by claiming that it will lead to more effective behaviour change interventions. Theory use has been frequently linked to effectiveness within systematic reviews of behaviour change interventions. The theory-effectiveness hypothesis has not been systematically examined at the systematic review level. What does this study add? Theory use as operationalized by systematic review authors was not associated with increased effectiveness within systematic reviews examining randomized controlled trials of behaviour change interventions in adults. Interventions based on control theory, motivational interviewing, or self-determination theory were associated with greater effectiveness for physical activity and/or dietary interventions and outcomes. Theory use should be promoted using a multifaceted argument, and assertions for increased effectiveness of theory-based interventions should only be used in domains where specific evidence exists to support this claim. abstract_id: PUBMED:29065246 A review of the reporting of web searching to identify studies for Cochrane systematic reviews. The literature searches that are used to identify studies for inclusion in a systematic review should be comprehensively reported. This ensures that the literature searches are transparent and reproducible, which is important for assessing the strengths and weaknesses of a systematic review and re-running the literature searches when conducting an update review. Web searching using search engines and the websites of topically relevant organisations is sometimes used as a supplementary literature search method.
Previous research has shown that the reporting of web searching in systematic reviews often lacks important details and is thus not transparent or reproducible. Useful details to report about web searching include the name of the search engine or website, the URL, the date searched, the search strategy, and the number of results. This study reviews the reporting of web searching to identify studies for Cochrane systematic reviews published in the 6-month period August 2016 to January 2017 (n = 423). Of these reviews, 61 reported web searching with a search engine or website as a literature search method. In the majority of reviews, the reporting of web searching was found to lack essential detail for ensuring transparency and reproducibility, such as the search terms. Recommendations are made on how to improve the reporting of web searching in Cochrane systematic reviews. abstract_id: PUBMED:27145932 Searching for qualitative research for inclusion in systematic reviews: a structured methodological review. Background: Qualitative systematic reviews or qualitative evidence syntheses (QES) are increasingly recognised as a way to enhance the value of systematic reviews (SRs) of clinical trials. They can explain the mechanisms by which interventions, evaluated within trials, might achieve their effect. They can investigate differences in effects between different population groups. They can identify which outcomes are most important to patients, carers, health professionals and other stakeholders. QES can explore the impact of acceptance, feasibility, meaningfulness and implementation-related factors within a real world setting and thus contribute to the design and further refinement of future interventions. To produce valid, reliable and meaningful QES requires systematic identification of relevant qualitative evidence. Although the methodologies of QES, including methods for information retrieval, are well-documented, little empirical evidence exists to inform their conduct and reporting. Methods: This structured methodological overview examines papers on searching for qualitative research identified from the Cochrane Qualitative and Implementation Methods Group Methodology Register and from citation searches of 15 key papers. Results: A single reviewer reviewed 1299 references. Papers reporting methodological guidance, use of innovative methodologies or empirical studies of retrieval methods were categorised under eight topical headings: overviews and methodological guidance, sampling, sources, structured questions, search procedures, search strategies and filters, supplementary strategies and standards. Conclusions: This structured overview presents a contemporaneous view of information retrieval for qualitative research and identifies a future research agenda. This review concludes that poor empirical evidence underpins current information practice in information retrieval of qualitative research. A trend towards improved transparency of search methods and further evaluation of key search procedures offers the prospect of rapid development of search methods. abstract_id: PUBMED:26052848 Searching for grey literature for systematic reviews: challenges and benefits. There is ongoing interest in including grey literature in systematic reviews. Including grey literature can broaden the scope to more relevant studies, thereby providing a more complete view of available evidence.
Searching for grey literature can be challenging despite greater access through the Internet, search engines and online bibliographic databases. There are a number of publications that list sources for finding grey literature in systematic reviews. However, there is scant information about how searches for grey literature are executed and how it is included in the review process. This level of detail is important to ensure that reviews follow explicit methodology to be systematic, transparent and reproducible. The purpose of this paper is to provide a detailed account of one systematic review team's experience in searching for grey literature and including it throughout the review. We provide a brief overview of grey literature before describing our search and review approach. We also discuss the benefits and challenges of including grey literature in our systematic review, as well as the strengths and limitations to our approach. Detailed information about incorporating grey literature in reviews is important in advancing methodology as review teams adapt and build upon the approaches described. abstract_id: PUBMED:35289476 A systematic review case study of urgent and emergency care configuration found citation searching of Web of Science and Google Scholar of similar value. Background: Supplementary search methods, including citation searching, are essential if systematic reviews are to avoid producing biased conclusions. Little evidence exists on how to prioritise databases for citation searching or to establish whether using multiple sources is beneficial. Objectives: A systematic review examining urgent and emergency care reconfiguration was used to investigate the utility of citation searching on Web of Science (WOS) and/or Google Scholar (GS). Methods: This case study investigated numbers of studies, additional studies and unique studies retrieved from both sources. In addition, the time to search, the ease of adding references to reference management software and obtaining abstracts of studies for screening are briefly considered. Results: WOS retrieved 62 references after deduplication of the results, 52 being additional references not retrieved during the database searching. GS retrieved 134 unique references with 63 additional references. WOS and GS retrieved the same three additional included studies. WOS was less time intensive to search given the facility to restrict to English language papers and availability of abstracts. Conclusions: In a single systematic review case study, citation searching was required to identify all included studies. Citation searching on WOS is more efficient, where a subscription is available. Both databases identified the same studies but GS required additional time to remove non-English language studies and locate abstracts. abstract_id: PUBMED:36260090 Paperfetcher: A tool to automate handsearching and citation searching for systematic reviews. Systematic reviews are vital instruments for researchers to understand broad trends in a field and synthesize evidence on the effectiveness of interventions in addressing specific issues. The quality of a systematic review depends critically on having comprehensively surveyed all relevant literature on the review topic. In addition to database searching, handsearching is an important supplementary technique that helps increase the likelihood of identifying all relevant studies in a literature search. 
Traditional handsearching requires reviewers to manually browse through a curated list of field-specific journals and conference proceedings to find articles relevant to the review topic. This manual process is not only time-consuming, laborious, costly, and error-prone due to human fatigue, but it also lacks replicability due to its cumbersome manual nature. To address these issues, this paper presents a free and open-source Python package and an accompanying web-app, Paperfetcher, to automate the retrieval of article metadata for handsearching. With Paperfetcher's assistance, researchers can retrieve article metadata from designated journals within a specified time frame in just a few clicks. In addition to handsearching, it also incorporates a beta version of citation searching in both forward and backward directions. Paperfetcher has an easy-to-use interface, which allows researchers to download the metadata of retrieved studies as a list of DOIs or as an RIS file to facilitate seamless import into systematic review screening software. To the best of our knowledge, Paperfetcher is the first tool to automate handsearching with high usability and a multi-disciplinary focus. Answer: Based on the provided abstracts, systematic searching for theory to inform systematic reviews appears to be feasible but comes with certain challenges. The abstract from PUBMED:26095232 discusses the development of a template (BeHEMoTh - Behaviour of interest; Health context; Exclusions; Models or Theories) to aid in the systematic identification of theory within systematic reviews. The authors found that only 35% of the systematic reviews they examined reported a method for identifying theory, indicating that the process is not yet standardized. However, their experience suggests that the BeHEMoTh template offers a feasible and useful approach for the identification of theory, although it requires rigorous evaluation. The feasibility of systematic searching for theory is further supported by the abstract from PUBMED:29284538, which indicates that systematic reviewers do use program theory to inform their considerations of socio-economic inequalities in health. However, the use of program theory is not yet widely adopted, is often implicit, fragmented, and not implemented in a systematic way. The abstract from PUBMED:30793445 examines the theory-effectiveness hypothesis and finds that theory-based interventions, as operationalized in systematic reviews, were not found to be more effective than non-theory-based interventions. This suggests that while it is feasible to search for theory, the way theory is used and reported in systematic reviews may need improvement. Overall, these abstracts suggest that while systematic searching for theory to inform systematic reviews is feasible, there are significant challenges in standardizing the process, ensuring transparency and rigor, and evaluating the impact of theory on the effectiveness of interventions. Further research and development of methodologies are needed to improve the systematic identification and application of theory in systematic reviews.
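The web-searching review cited above (PUBMED:29065246) lists the details that make a supplementary search reproducible: the name of the search engine or website, the URL, the date searched, the search strategy, and the number of results. As a rough illustration only (this is not code from any of the cited reviews; the record class, field names, and values are hypothetical), such details could be captured in a small structured record like the following Python sketch:

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class WebSearchRecord:
        source_name: str      # search engine or website searched
        url: str              # URL of the source
        date_searched: str    # ISO date the search was run
        search_strategy: str  # query string or terms used
        n_results: int        # number of results screened

    # Purely illustrative values; not data from any cited review.
    record = WebSearchRecord(
        source_name="Google Scholar",
        url="https://scholar.google.com",
        date_searched="2017-01-15",
        search_strategy='"behaviour change" AND (theory OR theoretical)',
        n_results=120,
    )
    print(json.dumps(asdict(record), indent=2))

Logging each supplementary search in a structure like this would make it straightforward to report the elements the review found missing and to re-run the searches when a review is updated.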
Instruction: Is a high maintenance dose of clopidogrel suitable for overcoming clopidogrel resistance in patients? Abstracts: abstract_id: PUBMED:25893489 Is a high maintenance dose of clopidogrel suitable for overcoming clopidogrel resistance in patients? Background: A double maintenance dose of clopidogrel at 150 mg daily has been suggested as an effective alternative treatment for patients who have clopidogrel resistance. Objective: To determine if a double maintenance dose of clopidogrel can overcome the low drug response rate observed in patients who have clopidogrel resistance while on a 75 mg daily standard maintenance dose of clopidogrel. Methods: A retrospective analysis was conducted in South Korean patients who underwent a platelet function test and received a double maintenance dose of clopidogrel at a secondary medical institution between January 2011 and June 2012. The primary endpoint was to assess clopidogrel response using an adenosine diphosphate test after a double maintenance dose of clopidogrel. The secondary endpoint was the presence of factors that could affect response to clopidogrel. Results: Of 389 patients identified, 77 patients were eligible for this study. Values from the adenosine diphosphate test decreased significantly in 63 patients (82%) after a double maintenance dose of clopidogrel (p < 0.001). A total of 37 patients (48%) overcame clopidogrel resistance. Concurrent disease appeared to be a contributory factor in clopidogrel resistance. Conclusion: A double maintenance dose of clopidogrel at 150 mg daily was associated with a reduction in adenosine diphosphate-induced platelet aggregation in South Korean patients who previously exhibited clopidogrel resistance. abstract_id: PUBMED:25613665 Meta-analysis appraising high maintenance dose clopidogrel in patients who underwent percutaneous coronary intervention with and without high on-clopidogrel platelet reactivity. The CURRENT-OASIS 7 (Clopidogrel and Aspirin Optimal Dose Usage to Reduce Recurrent Events-Seventh Organization to Assess Strategies in Ischemic Symptoms) trial showed that a 7-day 150-mg maintenance dose (MD) clopidogrel could reduce cardiovascular events in subgroup patients who underwent percutaneous coronary intervention (PCI) compared with the 75 mg/day regimen, although whether prolonging the high MD clopidogrel (≥150 mg) treatment period to at least 4 weeks can reduce major adverse cardiac events in the patients who underwent PCI with and without high on-clopidogrel platelet reactivity (HPR) is still controversial. We searched Pubmed, Embase, and Cochrane Library from inception until September 2014 for randomized controlled trials that compared high versus standard MD clopidogrel in patients who underwent PCI. Seventeen trials involving 4,822 patients who underwent PCI included 2,879 patients who were allocated to the "HPR patients" subgroup and 1,943 to the "native patients" subgroup without paying attention to the clopidogrel reactivity before randomization. Compared with the standard therapy, the high MD clopidogrel was associated with a significant reduction in the risk of major adverse cardiac events (odds ratio [OR] 0.52, 95% confidence interval [CI] 0.39 to 0.71, p < 0.0001) in patients who underwent PCI. The HPR patients subgroup also benefited from such high MD treatment (OR 0.54, 95% CI 0.38 to 0.77, p = 0.0007).
The observed benefits were mainly attributed to treatment-associated reduction in stent thrombosis (OR 0.43, 95% CI 0.23 to 0.78, p = 0.006) and target vessel revascularization (OR 0.38, 95% CI 0.20 to 0.74, p = 0.004). There was no difference in the rate of major/minor bleeding event between the high and standard MD group (OR 0.80, 95% CI 0.56 to 1.13, p = 0.21). In conclusion, the efficacy and safety of at least 4 weeks' high MD clopidogrel is greater than that of standard therapy for patients who underwent PCI with and without HPR. abstract_id: PUBMED:24194947 High-maintenance-dose clopidogrel in patients undergoing percutaneous coronary intervention: a systematic review and meta-analysis. Background: Despite routine use of clopidogrel, adverse cardiovascular events recur among some patients undergoing percutaneous coronary intervention (PCI). To optimize antiplatelet therapies, we performed a meta-analysis to quantify the efficacy of high versus standard-maintenance-dose clopidogrel in these patients. Methods: Randomized controlled trials (RCTs) comparing high (>75 mg) and standard maintenance doses of clopidogrel in patients undergoing PCI were included. The primary efficacy and safety end-points were major adverse cardiovascular/cerebrovascular events (MACE/MACCE) and major bleeding. The secondary end-points were other ischemic and bleeding adverse effects. The pooled odds ratio (OR) for each outcome was estimated. Results: 14 RCTs with 4424 patients were included. Compared with standard-maintenance-dose clopidogrel, high-maintenance-dose clopidogrel significantly reduced the incidence of MACE/MACCE (OR 0.60; 95% CI 0.43 to 0.83), stent thrombosis (OR 0.56; 95% CI 0.32 to 0.99) and target vessel revascularization (OR 0.38; 95% CI 0.20 to 0.74), without significant decrease of the risk of cardiovascular death (OR 0.92; 95% CI 0.74 to 1.13) and myocardial infarction (OR 0.83; 95% CI 0.51 to 1.33). For safety outcomes, it did not significantly increase the risk of major bleeding (OR 0.73; 95% CI 0.41 to 1.32), minor bleeding (OR 1.29; 95% CI 1.00 to 1.66) and any bleeding (OR 1.14; 95% CI 0.91 to 1.43). Conclusion: High-maintenance-dose clopidogrel reduces the recurrence of most ischemic events in patients post-PCI without increasing the risk of bleeding complications. abstract_id: PUBMED:19324253 Randomized comparison of adjunctive cilostazol versus high maintenance dose clopidogrel in patients with high post-treatment platelet reactivity: results of the ACCEL-RESISTANCE (Adjunctive Cilostazol Versus High Maintenance Dose Clopidogrel in Patients With Clopidogrel Resistance) randomized study. Objectives: The purpose of this study was to determine the impact of adjunctive cilostazol in patients with high post-treatment platelet reactivity (HPPR) undergoing coronary stenting. Background: Although addition of cilostazol to dual antiplatelet therapy enhances adenosine diphosphate (ADP)-induced platelet inhibition, it is unknown whether adjunctive cilostazol can reduce HPPR. Methods: Sixty patients with HPPR after a 300-mg loading dose of clopidogrel were enrolled. HPPR was defined as maximal platelet aggregation (Agg(max)) >50% with 5 micromol/l ADP. Patients were randomly assigned to receive either adjunctive cilostazol (triple group; n = 30) or high maintenance dose (MD) clopidogrel (high-MD group; n = 30). Platelet function was assessed at baseline and after 30 days with conventional aggregometry and the VerifyNow assay.
Results: Baseline platelet function measurements were similar in both groups. After 30 days, significantly fewer patients in the triple versus high-MD group had HPPR (3.3% vs. 26.7%, p = 0.012). Percent inhibitions of 5 micromol/l ADP-induced Agg(max) and late platelet aggregation (Agg(late)) were significantly greater in the triple versus high-MD group (51.1 +/- 22.5% vs. 28.0 +/- 18.5%, p < 0.001, and 70.9 +/- 27.3% vs. 45.3 +/- 23.4%, p < 0.001, respectively). Percent inhibitions of 20 micromol/l ADP-induced Agg(max) and Agg(late) were consistently greater in the triple versus high-MD group. Percent change of P2Y12 reaction units demonstrated a higher antiplatelet effect in the triple versus high-MD group (39.6 +/- 24.1% vs. 23.1 +/- 29.9%, p = 0.022). Conclusions: Adjunctive cilostazol reduces the rate of HPPR and intensifies platelet inhibition as compared with a high-MD clopidogrel of 150 mg/day. abstract_id: PUBMED:20104932 High maintenance dosage of clopidogrel is associated with a reduced risk of stent thrombosis in clopidogrel-resistant patients. Background: Stent thrombosis remains an important complication after stent implantation, despite the use of dual antiplatelet therapy with aspirin (acetylsalicylic acid) and clopidogrel. Several studies have shown an increased risk of thrombotic events in patients with resistance to clopidogrel. Some recent studies have suggested that a higher clopidogrel maintenance dosage could enhance ex vivo platelet inhibition and thereby overcome resistance to clopidogrel. Objectives: To investigate whether a higher clopidogrel maintenance dosage is associated with a reduced risk of stent thrombosis after percutaneous coronary intervention (PCI) in clopidogrel-resistant patients and to evaluate the frequency of hemorrhagic accidents that could be associated with a high clopidogrel maintenance dosage. Methods: An observational study was performed in 52 consecutive clopidogrel-resistant patients (resistance defined according to adenosine diphosphate-induced platelet aggregation assessment) who underwent a PCI with stenting at a tertiary referral center (Toulouse University Hospital, France). All patients received a clopidogrel loading dose of 300 mg, then 32 patients received a clopidogrel maintenance dosage of 75 mg/day (patients admitted between 2004 and 2005) and 20 patients received 150 mg/day (patients admitted in 2006). We compared the occurrence of definite stent thrombosis and hemorrhagic accidents between these two groups, using a regression model. Results: Among the patients treated with clopidogrel 75 mg/day, 26 (81.3%) had definite stent thrombosis versus seven (35.0%) treated with 150 mg/day (adjusted relative risk [RR] 2.46; 95% CI 1.63, 2.76; p = 0.002). The risk of major adverse cardiac events (MACE) was also significantly lower in patients treated with 150 mg/day (adjusted RR 2.63; 95% CI 1.82, 2.82; p = 0.001). There was no significant difference between the two groups regarding hemorrhagic accidents. Conclusion: Our data suggest that a high maintenance dosage of clopidogrel (150 mg/day) is associated with a reduced risk of definite stent thrombosis and MACE compared with a maintenance dosage of 75 mg/day. The frequency of hemorrhagic accidents was similar between the two groups, underlining a positive benefit-risk ratio of this strategy in clopidogrel-resistant patients. These findings deserve confirmation in a prospective, well conducted study.
abstract_id: PUBMED:21239075 The EFFect of hIgh-dose ClopIdogrel treatmENT in patients with clopidogrel resistance (the EFFICIENT trial). Objectives: The aim of this study was to evaluate the effect of high-dose clopidogrel continuation treatment on the development of MACCE after elective PCI in patients with clopidogrel resistance. Methods: The study group consisted of 192 patients. Of these, 98 participants without resistance served as the control group (Group 1) and received 75 mg/day clopidogrel for 1 month. Ninety-four patients with resistance were randomly divided into two groups: 47 patients in the standard-dose group (Group 2) received 75 mg/day continuation therapy, whereas 47 patients in the high-dose group (Group 3) received 150 mg/day continuation therapy for 1 month. Clopidogrel resistance was evaluated with the VerifyNow P2Y12 test. Patients with a platelet inhibition value lower than 40% were classified as resistant. Results: During the 6-month follow-up for MACCE, the event-rate in Group 2 was significantly higher than both Groups 1 and 3 (Group 1 vs Group 2; p=0.019, Group 1 vs Group 3; p=0.82, Group 2 vs Group 3; p=0.045). Total bleeding rates in all groups were similar (Group 1 vs Group 2; p=0.54, Group 1 vs Group 3; p=0.27, Group 2 vs Group 3; p=0.16). The rate of NACE was similar in all groups (Group 1 vs Group 2; p=0.08, Group 1 vs Group 3; p=0.50, Group 2 vs Group 3; p=0.39). Conclusion: In patients who underwent elective PCI and had clopidogrel resistance, high-dose clopidogrel continuation therapy was more efficient in preventing MACCE than the standard dose. High-dose continuation therapy did not increase the risk of bleeding complication (The EFFICIENT Trial; ClinicalTrials.gov number: NCT01032668). abstract_id: PUBMED:27594816 Potent and Orally Bioavailable Antiplatelet Agent, PLD-301, with the Potential of Overcoming Clopidogrel Resistance. PLD-301, a phosphate prodrug of clopidogrel thiolactone discovered by Prelude Pharmaceuticals with the aim to overcome clopidogrel resistance, was evaluated for its in vivo inhibitory effect on ADP-induced platelet aggregation in rats. The potency of PLD-301 was similar to that of prasugrel, but much higher than that of clopidogrel. The results of pharmacokinetic analysis showed that the oral bioavailability of clopidogrel thiolactone converted from PLD-301 was 4- to 5-fold higher than that of the one converted from clopidogrel, suggesting that in comparison with clopidogrel, lower doses of PLD-301 could be used clinically. In summary, PLD-301 presents a potent and orally bioavailable antiplatelet agent that might have some advantages over clopidogrel, such as overcoming clopidogrel resistance for CYP2C19-allele loss-of-function carriers, and lowering dose-related toxicity due to a much lower effective dose. abstract_id: PUBMED:18312754 Functional effects of high clopidogrel maintenance dosing in patients with inadequate platelet inhibition on standard dose treatment. Updated guidelines on percutaneous coronary intervention recommend increasing the dose of clopidogrel to 150 mg in high-risk patients if <50% platelet inhibition is demonstrated. However, to date, the functional impact of this recommendation has been poorly explored. The aim of this study was to assess the functional implications associated with the use of clopidogrel 150 mg/day in patients with inadequate platelet inhibition while receiving standard 75 mg/day maintenance treatment.
Patients with diabetes mellitus have a higher prevalence of inadequate clopidogrel-induced antiplatelet effects and stent thrombosis compared with those without diabetes and were selected for this analysis. Platelet inhibition was assessed using the VerifyNow P2Y12 assay in patients with type 2 diabetes receiving dual-antiplatelet therapy. Patients (n = 17) with <50% platelet inhibition were treated with clopidogrel 150 mg/day for 1 month. Adenosine diphosphate-induced aggregation and the P2Y12 reactivity ratio were also assessed. Platelet function profiles were compared with that of a control group (n = 17) with ≥50% inhibition. Platelet inhibition increased from 27.1 +/- 12% to 40.6 +/- 18% in patients treated with clopidogrel 150 mg/day (p = 0.009; primary end point). All other functional measures also showed enhanced clopidogrel-induced antiplatelet effects. The degree of platelet inhibition achieved after treatment with clopidogrel 150 mg/day varied broadly, and only 35% of patients yielded a degree of platelet inhibition ≥50%. Increasing the dose in patients with inadequate response to clopidogrel did not reach the same degree of antiplatelet effects as those achieved in patients with adequate response while receiving 75 mg/day. In conclusion, the use of a 150 mg maintenance dose of clopidogrel in patients with type 2 diabetes with <50% platelet inhibition is associated with enhanced antiplatelet effects. However, the antiplatelet effects achieved are nonuniform, and a considerable number of patients persist with inadequate platelet inhibition. abstract_id: PUBMED:25886999 High maintenance dose of clopidogrel in patients with high on-treatment platelet reactivity after a percutaneous coronary intervention: a meta-analysis. Objective: High on-treatment platelet reactivity (HTPR) has been linked to cardiovascular (CV) events after a percutaneous coronary intervention. There have been some controversies on whether a high maintenance dose (MD) of clopidogrel is effective for HTPR patients. Thus, we carried out a meta-analysis to assess the efficacy and safety of a high MD of clopidogrel in patients with HTPR. Methods: Searches of PubMed (from 1966 to May 2014), EMBASE (from 1974 to May 2014), and the Cochrane Library (2 May 2014) were performed. All randomized-controlled trials assessing the efficacy and safety of a high MD of clopidogrel in patients with HTPR were included. Results: A total of eight randomized-controlled trials including 3865 patients were included for analysis. In patients with HTPR, high-dose clopidogrel significantly reduced the risk of major adverse CV events or major adverse cardiac and cerebrovascular events [risk ratio (RR) 0.59; 95% confidence interval (CI) 0.39-0.88], stent thrombosis (RR 0.43; 95% CI 0.20-0.92), and target vessel revascularization (RR 0.31; 95% CI 0.10-0.93), without increasing major bleeding (RR 0.75; 95% CI 0.43-1.31) compared with standard-dose clopidogrel. Conclusion: A high MD of clopidogrel may be a feasible and readily available treatment to lower the risk of recurrent CV events in patients with HTPR after undergoing percutaneous coronary intervention, especially in HTPR patients with coronary artery disease and chronic kidney disease. abstract_id: PUBMED:21546102 A high maintenance dose increases the inhibitory response to clopidogrel in patients with high on-treatment residual platelet reactivity.
Background: Patients with high on-treatment residual adenosine diphosphate-inducible platelet reactivity (HRPR) despite clopidogrel therapy are at an increased risk for adverse events after coronary stenting. A higher maintenance dose of clopidogrel may increase the inhibitory response in these patients. Methods: We randomly assigned 46 patients with HRPR in at least one of three platelet reactivity tests to 75 mg vs. 150 mg clopidogrel for 3 months after angioplasty and stenting for cardiovascular disease. Platelet reactivity was assessed by the VerifyNow P2Y12 assay, the vasodilator-stimulated phosphoprotein (VASP) phosphorylation assay, and multiple electrode aggregometry (MEA) 24h and 3 months after the intervention. Results: Baseline platelet reactivity data did not differ significantly between treatment groups (all p > 0.2). At 3 months, platelet reactivities by the VerifyNow P2Y12 assay, the VASP assay, and MEA were 262 ± 74 P2Y12 reaction units (PRU), 59.5 ± 21.3%, and 46 ± 19 aggregation units (AU) in the standard dose group vs. 190 ± 78 PRU, 36.1 ± 16.8%, and 26 ± 16 AU in the high maintenance dose group (all p ≤ 0.003). Further, HRPR was significantly less frequent in patients assigned to 150 mg clopidogrel compared to patients receiving 75 mg clopidogrel per day (33% vs. 87%; p < 0.001). Conclusion: A high maintenance dose increases the antiplatelet effects of clopidogrel in patients with HRPR after angioplasty and stenting for cardiovascular disease. However, it needs to be shown that the higher dosage is associated with a beneficial clinical outcome in these patients. Answer: Yes, a high maintenance dose of clopidogrel is suitable for overcoming clopidogrel resistance in patients. Several studies have shown that increasing the maintenance dose of clopidogrel can improve platelet inhibition in patients who exhibit resistance to the standard dose. A retrospective analysis conducted in South Korean patients demonstrated that a double maintenance dose of clopidogrel at 150 mg daily was associated with a significant reduction in adenosine diphosphate-induced platelet aggregation in patients who previously exhibited clopidogrel resistance (PUBMED:25893489). Additionally, a meta-analysis found that high maintenance dose clopidogrel was associated with a significant reduction in the risk of major adverse cardiac events in patients who underwent percutaneous coronary intervention (PCI), including those with high on-clopidogrel platelet reactivity (PUBMED:25613665). Another systematic review and meta-analysis also supported the efficacy of high-maintenance-dose clopidogrel in reducing the recurrence of ischemic events in patients post-PCI without increasing the risk of bleeding complications (PUBMED:24194947). Furthermore, a randomized study comparing adjunctive cilostazol versus high maintenance dose clopidogrel found that adjunctive cilostazol reduces the rate of high post-treatment platelet reactivity and intensifies platelet inhibition compared with a high maintenance dose of 150 mg/day (PUBMED:19324253). An observational study indicated that a high maintenance dosage of clopidogrel (150 mg/day) is associated with a reduced risk of definite stent thrombosis and major adverse cardiac events compared with a maintenance dosage of 75 mg/day, without a significant increase in hemorrhagic accidents (PUBMED:20104932).
The EFFICIENT trial also concluded that in patients with clopidogrel resistance, high-dose clopidogrel continuation therapy was more efficient in preventing major adverse cardiac and cerebrovascular events (MACCE) than the standard dose, without increasing the risk of bleeding complications (PUBMED:21239075). In summary, the evidence suggests that a high maintenance dose of clopidogrel can be an effective strategy to overcome clopidogrel resistance and reduce the risk of thrombotic events in patients, particularly those undergoing PCI, without significantly increasing the risk of bleeding.
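Several of the abstracts above report trial-level odds ratios that were pooled by meta-analysis. As a rough sketch of how such pooling works (a simple inverse-variance fixed-effect model is shown for brevity, whereas some of the cited reviews used random-effects models; all counts below are hypothetical and not taken from the cited trials):

    import math

    # Hypothetical 2x2 counts for three trials comparing high vs standard
    # maintenance-dose clopidogrel; illustrative only.
    trials = [
        (12, 300, 22, 298),   # (events_high, n_high, events_standard, n_standard)
        (8, 250, 15, 255),
        (20, 480, 33, 470),
    ]

    weights, log_ors = [], []
    for a, n1, c, n0 in trials:
        b, d = n1 - a, n0 - c
        log_or = math.log((a * d) / (b * c))      # per-trial log odds ratio
        var = 1 / a + 1 / b + 1 / c + 1 / d       # variance of the log odds ratio
        log_ors.append(log_or)
        weights.append(1 / var)                   # inverse-variance weight

    pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    print(f"pooled OR {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(pooled - 1.96 * se):.2f} to {math.exp(pooled + 1.96 * se):.2f})")

A pooled OR below 1 with a confidence interval excluding 1, as reported in the cited meta-analyses, indicates fewer events in the high-dose arm.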
Instruction: Early verbal fluency decline after STN implantation: is it a cognitive microlesion effect? Abstracts: abstract_id: PUBMED:22846795 Early verbal fluency decline after STN implantation: is it a cognitive microlesion effect? Backgrounds: Worsening of verbal fluency is reported after subthalamic nucleus deep brain stimulation in Parkinson's disease. It is postulated that these changes could reflect microlesion consecutive to the surgical procedure itself. Methods: We evaluated verbal fluency, in 26 patients (mean age, 57.9±8.5 years; mean disease duration, 11.4±3.5 years) both before surgery (baseline) and, after surgery respectively the third day (T3), the tenth day (T10) just after STN implantation before turning on the stimulation and at six months (T180). Results: Number of total words and switches was significantly reduced at T3 and T10, while average cluster size was unchanged. Repeated post-operative neuropsychological testing demonstrated reliable improvement from T3 to T180 on verbal fluency. Conclusion: This study provides evidence of transient verbal fluency decline consecutive to a microlesion effect. Further studies are needed to determine a putative relationship between early and long-term verbal fluency impairment. abstract_id: PUBMED:25125047 Does early verbal fluency decline after STN implantation predict long-term cognitive outcome after STN-DBS in Parkinson's disease? Backgrounds: An early and transient verbal fluency (VF) decline and impairment in frontal executive function, suggesting a cognitive microlesion effect may influence the cognitive repercussions related to subthalamic nucleus deep brain stimulation (STN-DBS). Methods: Neuropsychological tests including semantic and phonemic verbal fluency were administered both before surgery (baseline), the third day after surgery (T3), at six months (T180), and at an endpoint multiple years after surgery (Tyears). Results: Twenty-four patients (mean age, 63.5 ± 9.5 years; mean disease duration, 12 ± 5.8 years) were included. Both semantic and phonemic VF decreased significantly in the acute post-operative period (44.4 ± 28.2% and 34.3 ± 33.4%, respectively) and remained low at 6 months compared to pre-operative levels (decrease of 3.4 ± 47.8% and 10.8 ± 32.1%) (P < 0.05). Regression analysis showed phonemic VF to be an independent factor of decreased phonemic VF at six months. Age was the only independent predictive factor for incident Parkinson's disease dementia (PDD) (F (4,19)=3.4, P < 0.03). Conclusion: An acute post-operative decline in phonemic VF can be predictive of a long-term phonemic VF deficit. The severity of this cognitive lesion effect does not predict the development of dementia which appears to be disease-related. abstract_id: PUBMED:25374271 Decline in verbal fluency after subthalamic nucleus deep brain stimulation in Parkinson's disease: a microlesion effect of the electrode trajectory? Background: Decline in verbal fluency (VF) is frequently reported after chronic deep brain stimulation (DBS) of the subthalamic nucleus (STN) in Parkinson disease (PD). Objective: We investigated whether the trajectory of the implanted electrode correlates with the VF decline 6 months after surgery. Methods: We retrospectively analysed 59 PD patients (mean age, 61.9 ± 7; mean disease duration, 13 ± 4.6) who underwent bilateral STN-DBS. The percentage of VF decline 6 months after STN-DBS in the on-drug/on-stimulation condition was determined in respect of the preoperative on-drug condition.
The patients were categorised into two groups (decline and stable) for each VF. Cortical entry angles, intersection with deep grey nuclei (caudate, thalamic or pallidum), and anatomical extent of the STN affected by the electrode pathway, were compared between groups. Results: A significant decline of both semantic and phonemic VF was found after surgery, respectively 14.9% ± 22.1 (P < 0.05) and 14.2% ± 30.3 (P < 0.05). Patients who declined in semantic VF (n = 44) had a left trajectory with a more anterior cortical entry point (56 ± 53 versus 60 ± 55 degree, P = 0.01) passing less frequently through the thalamus (P = 0.03). Conclusions: Microlesion of left brain regions may contribute to subtle cognitive impairment following STN-DBS in PD. abstract_id: PUBMED:20362061 Patient-specific analysis of the relationship between the volume of tissue activated during DBS and verbal fluency. Deep brain stimulation (DBS) for the treatment of advanced Parkinson's disease involves implantation of a lead with four small contacts usually within the subthalamic nucleus (STN) or globus pallidus internus (GPi). While generally safe from a cognitive standpoint, STN DBS has been commonly associated with a decrease in the speeded production of words, a skill referred to as verbal fluency. Virtually all studies comparing presurgical to postsurgical verbal fluency performance have detected a decrease with DBS. The decline may be attributable in part to the surgical procedures, yet the relative contributions of stimulation effects are not known. In the present study, we used patient-specific DBS computer models to investigate the effects of stimulation on verbal fluency performance. Specifically, we investigated relationships of the volume and locus of activated STN tissue to verbal fluency outcome. Stimulation of different electrode contacts within the STN did not affect total verbal fluency scores. However, models of activation revealed subtle relationships between the locus and volume of activated tissue and verbal fluency performance. At ventral contacts, more tissue activation inside the STN was associated with decreased letter fluency performance. At optimal contacts, more tissue activation within the STN was associated with improved letter fluency performance. These findings suggest subtle effects of stimulation on verbal fluency performance, consistent with the functional nonmotor subregions/somatotopy of the STN. abstract_id: PUBMED:30779251 White matter tracts lesions and decline of verbal fluency after deep brain stimulation in Parkinson's disease. Decline of verbal fluency (VF) performance is one of the most systematically reported neuropsychological adverse effects after subthalamic nucleus deep brain stimulation (STN-DBS). It has been suggested that this worsening of VF may be related to a microlesion due to the electrode trajectories. We describe the disruption of surrounding white matter tracts following electrode implantation in Parkinson's disease (PD) patients with STN-DBS and assess whether damage of fiber pathways is associated with VF impairment after surgery. We retrospectively analyzed 48 PD patients undergoing bilateral STN DBS. The lesion mask along the electrode trajectory transformed into the MNI 152 coordinate system, was compared with white matter tract atlas in Tractotron software, which provides a probability and proportion of fibers disconnection.
Combining tract- and atlas-based analysis reveals that the trajectory of the electrodes intersected successively with the frontal aslant tract, anterior segment of arcuate tract, the long segment of arcuate tract, the inferior longitudinal fasciculus, the superior longitudinal fasciculus, the anterior thalamic radiation, and the fronto striatal tract. We found no association between the proportion of fiber disconnection and the severity of VF impairment 6 months after surgery. Our findings demonstrated that microstructural injury associated with electrode trajectories involved white matter bundles implicated in VF networks. abstract_id: PUBMED:30363586 The Verbal Fluency Decline After Deep Brain Stimulation in Parkinson's Disease: Is There an Influence of Age? Background: DBS is commonly used to treat Parkinson's disease (PD). DBS is not considered to cause major cognitive side effects, but some research groups have reported that it can cause decreased verbal fluency. The influence of age on DBS cognitive outcome is unclear. We investigated the possible influence of patients' age, level of education, disease duration, disease progression, depression, and levodopa equivalent dose (LED) on verbal fluency performance in patients with PD who underwent DBS of the subthalamic nucleus (STN-DBS). In this article, we investigated the influence of demographic and clinical parameters, especially age, on cognitive performance post-DBS in PD patients. Methods: Forty-three patients with PD and without major psychiatric illness (according to Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition) were enrolled in the study. Median age was 64.0 years (range, 46-77). In 21 patients, the indication for DBS was established on clinical grounds in keeping with international guidelines; these patients underwent STN-DBS, and the remaining 22 did not. Cognitive performance in both groups was assessed by standard neuropsychological test batteries at baseline and after median follow-up of 7 months. Results: A statistically significant decline in the semantic category of verbal fluency task was found in the STN-DBS group (P < 0.01). Linear regression model revealed an influence of age (P < 0.01) and disease duration (P < 0.01) in relation to this decline. Conclusions: This study confirms previous findings that verbal fluency declines after STN-DBS in PD patients in comparison to PD patients without DBS. This decline is related to age and disease duration. abstract_id: PUBMED:26831827 Verbal Fluency in Parkinson's Patients with and without Bilateral Deep Brain Stimulation of the Subthalamic Nucleus: A Meta-analysis. Objectives: Patients with Parkinson's disease often experience significant decline in verbal fluency over time; however, deep brain stimulation of the subthalamic nucleus (STN-DBS) is also associated with post-surgical declines in verbal fluency. The purpose of this study was to determine if Parkinson's patients who have undergone bilateral STN-DBS have greater impairment in verbal fluency compared to Parkinson's patients treated by medication only. Methods: A literature search yielded over 140 articles and 10 articles met inclusion criteria. A total of 439 patients with Parkinson's disease who underwent bilateral STN-DBS and 392 non-surgical patients were included.
Cohen's d, a measure of effect size, was calculated using a random effects model to compare post-treatment verbal fluency in patients with Parkinson's disease who underwent STN-DBS versus those in the non-surgical comparison group. Results: The random effects model demonstrated a medium effect size for letter fluency (d=-0.47) and a small effect size for category fluency (d=-0.31), indicating individuals with bilateral STN-DBS had significantly worse verbal fluency performance than the non-surgical comparison group. Conclusions: Individuals with Parkinson's disease who have undergone bilateral STN-DBS experience greater deficits in letter and category verbal fluency compared to a non-surgical group. abstract_id: PUBMED:30687215 Quantitative EEG and Verbal Fluency in DBS Patients: Comparison of Stimulator-On and -Off Conditions. Introduction: Deep brain stimulation of the subthalamic nucleus (STN-DBS) ameliorates motor function in patients with Parkinson's disease and allows reducing dopaminergic therapy. Beside effects on motor function STN-DBS influences many non-motor symptoms, among which decline of verbal fluency test performance is most consistently reported. The surgical procedure itself is the likely cause of this decline, while the influence of the electrical stimulation is still controversial. STN-DBS also produces widespread changes of cortical activity as visualized by quantitative EEG. The present study aims to link an alteration in verbal fluency performance by electrical stimulation of the STN to alterations in quantitative EEG. Methods: Sixteen patients with STN-DBS were included. All patients had a high density EEG recording (256 channels) while testing verbal fluency in the stimulator on/off situation. The phonemic, semantic, alternating phonemic and semantic fluency was tested (Regensburger Wortflüssigkeits-Test). Results: On the group level, stimulation of STN did not alter verbal fluency performance. EEG frequency analysis showed an increase of relative alpha2 (10-13 Hz) and beta (13-30 Hz) power in the parieto-occipital region (p ≤ 0.01). On the individual level, changes of verbal fluency induced by stimulation of the STN were disparate and correlated inversely with delta power in the left temporal lobe (p < 0.05). Conclusion: STN stimulation does not alter verbal fluency performance in a systematic way at group level. However, when in individual patients an alteration of verbal fluency performance is produced by electrical stimulation of the STN, it correlates inversely with left temporal delta power. abstract_id: PUBMED:34678718 Anterior lead location predicts verbal fluency decline following STN-DBS in Parkinson's disease. Introduction: Verbal fluency (VF) decline is a well-documented cognitive effect of Deep Brain Stimulation of the subthalamic nucleus (STN-DBS) in patients with Parkinson's disease (PD). This decline may be associated with disruption to left-sided frontostriatal circuitry involving the anteroventral non-motor area of the STN. While recent studies have examined the impact of lead location in relation to functional STN subdivisions on VF outcomes, results have been mixed and methods have been limited by atlas-based location mapping. Methods: Participants included 59 individuals with PD who underwent bilateral STN-DBS. Each participant's active contact location was determined in an atlas-independent fashion, relative to their individual MR-visualized STN midpoint.
Multiple linear regression was used to examine lead location in each direction as a predictor of phonemic and semantic VF decline, controlling for demographic and disease variables. Results: More anterior lead locations relative to the STN midpoint in the left hemisphere predicted greater phonemic VF decline (B = -2.34, B SE = 1.08, β = -0.29, sr2 = 0.08). Lead location was not a significant predictor of semantic VF decline. Conclusion: Using an individualized atlas-independent approach, present findings suggest that more anterior stimulation of the left STN may uniquely contribute to post-DBS VF decline. This is consistent with models in which the anterior STN represents a "non-motor" functional subdivision with connections to frontal regions, e.g., the left dorsal prefrontal cortex. Future studies should investigate the effect of DBS lead trajectory on VF outcomes. abstract_id: PUBMED:28548549 Semantic verbal fluency impairment is detectable in patients with subjective cognitive decline. Patients with subjective cognitive decline (SCD) are at higher risk for conversion to dementia due to Alzheimer's disease (AD). Semantic verbal fluency (SVF) seems to be impaired in the early stages of AD. The goal of the present study was to identify the discriminative potential of verbal fluency (VF) in patients with SCD to show if very early signs of cognitive decline may be detected in SCD. We examined 93 normal controls (NC) and 61 participants with SCD. Each participant was administered a comprehensive neuropsychological battery. All participants underwent tests of VF: phonemic verbal fluency (PVF), letters K and P and SVF (animals and vegetables categories). In addition to the total score, two 30-second intervals, and clustering and switching indices in SVF were evaluated. SCD generated fewer words in the total score and 30- to 60-second interval in vegetables category and they performed more switches in animals category. There was no significant difference between the SCD and the NC groups in all other VF measures. Quantitative measures of SVF (a decreased number of vegetables) as well as qualitative measures were detected in SCD group and could be considered as an early neuropsychological marker of subtle cognitive impairment. Answer: The decline in verbal fluency (VF) observed after subthalamic nucleus (STN) deep brain stimulation (DBS) in Parkinson's disease (PD) patients is reported to be a potential cognitive microlesion effect. Studies have shown that there is a significant reduction in the number of total words and switches in verbal fluency tasks shortly after STN implantation, which suggests a transient decline due to the surgical procedure itself (PUBMED:22846795). This early post-operative decline, particularly in phonemic verbal fluency, has been found to predict long-term phonemic VF deficits, although it does not predict the development of dementia, which appears to be more closely related to the disease progression (PUBMED:25125047). Further research indicates that the trajectory of the implanted electrode, especially when it involves the left brain regions, may contribute to the VF decline observed six months after surgery (PUBMED:25374271). Additionally, patient-specific analysis has revealed that the volume and locus of activated tissue during DBS can subtly affect VF performance, with more activation inside the STN being associated with decreased letter fluency performance (PUBMED:20362061). 
Moreover, the disruption of surrounding white matter tracts following electrode implantation has been described, although no direct association was found between the proportion of fiber disconnection and the severity of VF impairment six months after surgery (PUBMED:30779251). Age and disease duration have also been identified as factors influencing the decline in VF after STN-DBS (PUBMED:30363586). A meta-analysis has confirmed that patients with PD who have undergone bilateral STN-DBS experience greater deficits in letter and category verbal fluency compared to a non-surgical group (PUBMED:26831827). However, on an individual level, changes in VF induced by STN stimulation were disparate and did not alter VF performance in a systematic way at the group level (PUBMED:30687215). Lastly, more anterior lead locations relative to the STN midpoint in the left hemisphere have been associated with greater phonemic VF decline, suggesting that anterior stimulation of the left STN may contribute to post-DBS VF decline (PUBMED:34678718). In summary, the evidence suggests that early VF decline after STN implantation in PD patients is likely a cognitive microlesion effect, influenced by factors such as electrode trajectory, stimulation parameters, and individual patient characteristics such as age and disease duration.
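The meta-analysis cited above (PUBMED:26831827) expresses the DBS-versus-medication difference in verbal fluency as Cohen's d. For readers unfamiliar with the metric, a minimal sketch of how a single study's d is computed from group summary statistics follows; the numbers are hypothetical, not the pooled values reported in the review, which additionally weighted studies under a random-effects model:

    import math

    def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
        """Standardized mean difference using the pooled standard deviation."""
        pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
        return (mean1 - mean2) / pooled_sd

    # Hypothetical post-treatment letter-fluency scores (words produced):
    d = cohens_d(mean1=11.2, sd1=4.1, n1=40,   # STN-DBS group
                 mean2=13.4, sd2=4.5, n2=38)   # medication-only group
    print(f"Cohen's d = {d:.2f}")  # a negative d means worse fluency in the DBS group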
Instruction: Can carbon monoxide prevent infection-mediated preterm birth in a mouse model? Abstracts: abstract_id: PUBMED:23531020 Can carbon monoxide prevent infection-mediated preterm birth in a mouse model? Problem: Preterm birth is frequently caused by intrauterine infection and inflammation. Recent studies have demonstrated that carbon monoxide (CO), which is produced endogenously, has potent anti-inflammatory properties. Whether or not CO can prevent infection-mediated preterm birth is unknown. Methods: Mice were assigned to one of four groups: sham infection, sham infection + CO, infection, or infection + CO. Infections were established by intra-uterine injection of Escherichia coli on day 14 of pregnancy. Animals received daily i.p. injections of 1 mL CO-saturated lactated Ringer's solution (LRS) or LRS alone beginning on the morning of surgery. Gestational age at delivery and litter characteristics were noted. In a second experiment, animals were sacrificed 24 hrs post-surgery and tissues were harvested for cytokine analyses. Results: Escherichia coli intrauterine infection increased the number of animals delivering preterm. This effect was significantly ameliorated by CO-LRS. CO-treatment also increased litter size and weights of the surviving offspring. Cytokines in the amniotic fluid and the placenta were increased by E. coli exposure, but CO had no detectable effect on E. coli-stimulated cytokine production. No effects of CO were detected in sham-infected animals. Conclusion: Supplemental CO improves pregnancy outcome after intrauterine infection and may function at a point downstream of, or through pathways independent of, induction of proinflammatory cytokines. abstract_id: PUBMED:24731730 Carbon monoxide attenuates bacteria-induced Endothelin-1 expression in second trimester placental explants. Introduction: The pro-inflammatory mediator and potent vasoconstrictor Endothelin-1 (ET-1) is known to be expressed in the placenta. We have recently demonstrated that very low, non-toxic doses of carbon monoxide (CO) prevented infection-induced preterm birth in mice. However, the effect(s) of CO on human gestational tissues is yet to be fully explored. We hypothesize that CO will have a protective role against inflammation induced by E. coli by down-regulating the ET axis in placental explants. Methods: Twenty placentas from elective termination of pregnancy in the second trimester were analyzed with or without exposure to heat-killed E. coli over the course of 30 h. Placental ET-1, along with its biologically inactive precursor Big ET-1, and Endothelin Converting Enzyme-1 (ECE-1, responsible for the cleavage of Big ET-1 to ET-1), were analyzed by ELISA. Gene expression for ET-1 (EDN1), ECE-1 and the ETA receptor (EDNRA) were analyzed using qPCR. Localization of ET-1 expression was also demonstrated using immunohistochemistry. Results: E. coli significantly increased ET-1 transcription and secretion of BIG ET-1 and ET-1 in a time-dependent manner which was ameliorated when exposed to CO at later time points. In the presence of CO, mRNA levels of ECE-1 were significantly reduced at 3 and 24 h, while EDNRA was significantly reduced at 6 and 18 h. Conclusions: Up-regulation of ET-1 production in human placenta in the setting of infection can be attenuated by low doses of CO. Our results further explore the anti-inflammatory and regulatory mechanism(s) of CO on the ET axis components at the maternal fetal interface.
abstract_id: PUBMED:22971054 Effect of carbon monoxide on bacteria-stimulated cytokine production by placental explants. Problem: Preterm birth is frequently caused by an inflammatory response to ascending infections of the reproductive tract. Carbon monoxide (CO) has potent anti-inflammatory properties at subtoxic concentrations. Whether or not CO can modulate inflammatory responses by placental tissues is unclear. Methods: Placental explant cultures were incubated with heat-killed Escherichia coli or Ureaplasma parvum in the presence or absence of 250 ppm CO for 24 hr. Concentrations of cytokines and the relative viability of the cultures were quantified. Results: Escherichia coli- and U. parvum-stimulated IL-1β production was significantly inhibited by CO supplementation. Escherichia coli-stimulated, but not U. parvum-stimulated, IFN-γ production was inhibited by CO. While CO inhibited PGE(2) production by unstimulated cells, no effects on bacteria-stimulated prostaglandin production were detected. CO had no effect on basal or E. coli-stimulated TNF-α production but enhanced TNF-α production by cultures stimulated with U. parvum. In addition, CO tended to improve the viability of the placental cultures. Conclusions: Low concentrations of CO tended to reduce proinflammatory cytokines and to promote the production of anti-inflammatory cytokines in a pathogen-specific manner. These properties suggest that CO may be useful for promoting a pro-pregnancy cytokine milieu by placental explants and may reduce the consequences of intrauterine infections. abstract_id: PUBMED:23929879 Does carbon monoxide inhibit proinflammatory cytokine production by fetal membranes? Aim: Infection-induced inflammation is a common cause of preterm birth. Pharmacologic inhibition of proinflammatory cytokines improves pregnancy outcome in animal models but there are no universally effective therapies for preterm birth in women. Carbon monoxide (CO) has anti-inflammatory properties at low concentrations but its effects on reproductive tissues are unclear. Therefore, we studied the effect of supplemental CO on the production of cytokines associated with preterm birth by fetal membranes. Methods: Cross-sections of whole fetal membranes, isolated choriodecidua, and isolated amnion were prepared using tissues collected from women who had normal vaginal deliveries at term. Tissues were placed in an organ explant culture system and stimulated with up to 10(8) CFU/mL Escherichia coli. Cultures were incubated under room air or room air+250 ppm CO for 18 h and cytokine concentrations in conditioned medium were quantified by ELISA. Results: CO inhibited IL-1β and TNF-α (P≤0.001) production by cultures stimulated with 10(7) CFU/mL bacteria but had no detectable effect on IL-10 by full-thickness membranes. Although CO also tended to reduce TNF-α production (P=0.053), no effect of CO was detected for IL-10 or IL-1β for membranes stimulated with 10(8) CFU/mL E. coli. TNF-α, but not IL-1β or IL-10 production, was inhibited by CO for choriodecidual cultures stimulated with 10(7) or 10(8) CFU/mL E. coli (P<0.001). IL-1β production was significantly inhibited by CO for amnion cultures stimulated with 10(7) (P=0.002) and 10(8) (P=0.017) CFU/mL E. coli. Exposure to bacteria had no effect on TNF-α or IL-10 production but CO tended to increase IL-10 production by amnion cultures stimulated with 10(8) CFU/mL E. coli (P=0.037).
Conclusions: These results suggest that CO may help promote an anti-inflammatory environment during intrauterine infections by inhibiting TNF-α and IL-1β production. abstract_id: PUBMED:20039863 Can sulfasalazine prevent infection-mediated pre-term birth in a murine model? Problem: Sulfasalazine (SASP) blocks activation of nuclear factor-kappa B (NF-kappaB) in gestational tissues in vitro - one of the earliest signals in the inflammatory response. We hypothesized that the administration of SASP would reduce the rate of infection-mediated pre-term birth in a murine model. Method of study: CD-1 mice (n = 40) were assigned on gestational day (gd) 14.5 to 1 of 3 treatments: (1) Sham infection and vehicle; (2) 10(4) CFU Escherichia coli and vehicle; or (3) 10(4) CFU E. coli and SASP (150 mg/kg daily). Mice were observed twice daily and deliveries prior to gd 18.5 were considered pre-term. Results: Significantly more mice delivered prior to gd 18.5 when infected with 10(4) CFU E. coli than sham-infected mice (P < 0.001) and this effect was significantly reduced in mice also treated with SASP (P = 0.002). SASP also tended to increase litter size (P = 0.060) and significantly increased weight of pups born to dams with intrauterine infections (P = 0.001). Conclusion: SASP reduced rates of pre-term delivery and improved pregnancy outcomes for mice infected with 10(4) CFU E. coli. This suggests that SASP has the potential to play a role in strategies to prevent pre-term birth in women. abstract_id: PUBMED:24486323 Mouse model of intrauterine inflammation: sex-specific differences in long-term neurologic and immune sequelae. Preterm infants, especially those that are exposed to prenatal intrauterine infection or inflammation, are at a major risk for adverse neurological outcomes, including cognitive, motor and behavioral disabilities. We have previously shown in a mouse model that there is an acute fetal brain insult associated with intrauterine inflammation. The objectives of this study were: (1) to elucidate long-term (into adolescence and adulthood) neurological outcomes by assessing neurobehavioral development, MRI, immunohistochemistry and flow cytometry of cells of immune origin and (2) to determine whether there are any sex-specific differences in brain development associated with intrauterine inflammation. Our results have shown that prenatal exposure appeared to lead to changes in MRI and behavior patterns throughout the neonatal period and during adulthood. Furthermore, we observed chronic brain inflammation in the offspring, with persistence of microglial activation and increased numbers of macrophages in the brain, ultimately resulting in neuronal loss. Moreover, our study highlights the sex-specific differences in long-term sequelae. This study, while extending the growing literature of adverse neurologic outcomes following exposure to inflammation during early development, presents novel findings in the context of intrauterine inflammation.
Although local inflammation triggered in response to malaria is considered crucial in inducing placental damage, little is known about the differential influence of maternal and fetal immune responses to the disease progression. Therefore, using a PM mouse model, we sought to determine the contribution of maternal and fetal innate immune responses to PM development. For this, we conducted a series of cross-breeding experiments between mice that had differential expression of the MyD88 adaptor protein to obtain mother and correspondent fetuses with distinct genetic backgrounds. By evaluating fetal weight and placental vascular spaces, we have shown that the expression of MyD88 in fetal tissue has a significant impact on PM outcomes. Our results highlighted the existence of a distinct contribution of maternal and fetal immune responses to PM onset. Thus, contributing to the understanding of how inflammatory processes lead to the dysregulation of placental homeostasis ultimately impairing fetal development. abstract_id: PUBMED:37491927 Intrauterine colonization with Gardnerella vaginalis and Mobiluncus mulieris induces maternal inflammation but not preterm birth in a mouse model. Problem: Preterm birth (PTB) remains a leading cause of childhood mortality. Recent studies demonstrate that the risk of spontaneous PTB (sPTB) is increased in individuals with Lactobacillus-deficient vaginal microbial communities. One proposed mechanism is that vaginal microbes ascend through the cervix, colonize the uterus, and activate inflammatory pathways leading to sPTB. This study assessed whether intrauterine colonization with either Gardnerella vaginalis or Mobiluncus mulieris alone is sufficient to induce maternal-fetal inflammation and induce sPTB. Method Of Study: C56/B6J mice, on embryonic day 15, received intrauterine inoculation of saline or 10(8) colony-forming units of G. vaginalis (n = 30), M. mulieris (n = 17), or Lactobacillus crispatus (n = 16). Dams were either monitored for maternal morbidity and sPTB or sacrificed 6 h post-infusion for analysis of bacterial growth and cytokine/chemokine expression in maternal and fetal tissues. Results: Six hours following intrauterine inoculation with G. vaginalis, M. mulieris, or L. crispatus, live bacteria were observed in both blood and amniotic fluid, and a potent immune response was identified in the uterus and maternal serum. In contrast, only a limited immune response was identified in the amniotic fluid and the fetus after intrauterine inoculation. High bacterial load (10(8) CFU/animal) of G. vaginalis was associated with maternal morbidity and mortality but not sPTB. Intrauterine infusion with L. crispatus or M. mulieris at 10(8) CFU/animal did not induce sPTB, alter pup viability, litter size, or maternal mortality.
In this study, a rapid and efficient loop-mediated isothermal amplification-based method for the detection of T. vaginalis was developed and validated, using vaginal swab specimens from subjects suspected to have trichomoniasis. The LAMP assay targeting the actin gene was highly sensitive with detection limits of 1 trichomonad and 1 pg of T. vaginalis DNA per reaction, and specifically amplified the target gene only from T. vaginalis. Validation of this assay showed that it had the highest sensitivity and better agreement with PCR (used as the gold standard) compared to microscopy and multiplex PCR. This study showed that the LAMP assay, targeting the actin gene, could be used to diagnose early infections of T. vaginalis. Thus, we have provided an alternative molecular diagnostic tool and a point-of-care test that may help to prevent trichomoniasis transmission and associated complications. abstract_id: PUBMED:20069051 Pretreatment with pancaspase inhibitor (Z-VAD-FMK) delays but does not prevent intraperitoneal heat-killed group B Streptococcus-induced preterm delivery in a pregnant mouse model. Caspases and apoptosis are thought to play a role in infection-associated preterm-delivery. We have shown that in vitro treatment with pancaspase inhibitor Z-VAD-FMK protects trophoblasts from microbial antigen-induced apoptosis. Objective. To examine whether in vivo administration of Z-VAD-FMK would prevent infection-induced preterm-delivery. Methods. We injected 14.5 day-pregnant-mice with heat-killed group B streptococcus (HK-GBS). Apoptosis within placentas and membranes was assessed by TUNEL staining. Calpain expression and caspase-3 activation were assessed by immunohistochemistry. Preterm-delivery was defined as expulsion of a fetus within 48 hours after injection. Results. Intrauterine (i.u.) or intraperitoneal (i.p.) HK-GBS injection led to preterm-delivery and induced apoptosis in placentas and membranes at 14 hours. The expression of calpain, a caspase-independent inducer of apoptosis, was increased in placenta. Treatment with the specific caspase inhibitor Z-VAD-FMK (i.p.) prior to HK-GBS (i.p.) delayed but did not prevent preterm-delivery. Conclusion. Caspase-dependent apoptosis appears to play a role in the timing but not the occurrence of GBS-induced preterm delivery in the mouse. Answer: Yes, carbon monoxide (CO) can prevent infection-mediated preterm birth in a mouse model. Studies have shown that CO, which is produced endogenously and has potent anti-inflammatory properties, can improve pregnancy outcomes after intrauterine infection. In a mouse model, intrauterine infection with Escherichia coli increased the number of animals delivering preterm, and this effect was significantly ameliorated by CO treatment. CO treatment also increased litter size and weights of the surviving offspring. Although CO did not have a detectable effect on E. coli-stimulated cytokine production in the amniotic fluid and the placenta, it may function at a point downstream of, or through pathways independent of, the induction of proinflammatory cytokines (PUBMED:23531020). Additionally, CO has been shown to attenuate bacteria-induced Endothelin-1 (ET-1) expression in second-trimester placental explants. ET-1 is a pro-inflammatory mediator and potent vasoconstrictor expressed in the placenta. CO exposure led to reduced transcription and secretion of ET-1 and its precursor Big ET-1 in the presence of E. 
coli, suggesting that CO has a protective role against inflammation-induced ET-1 upregulation in the human placenta (PUBMED:24731730). Moreover, CO supplementation has been found to inhibit proinflammatory cytokine production by placental explants stimulated with bacteria, suggesting that CO may promote a pro-pregnancy cytokine milieu and reduce the consequences of intrauterine infections (PUBMED:22971054). Similarly, CO has been shown to inhibit the production of proinflammatory cytokines by fetal membranes, which could help promote an anti-inflammatory environment during intrauterine infections (PUBMED:23929879). In summary, the evidence from these studies indicates that CO can prevent infection-mediated preterm birth in a mouse model by modulating inflammatory responses and improving pregnancy outcomes.
Instruction: Do perceptions of risk and quality of life affect use of hormone replacement therapy by postmenopausal women? Abstracts: abstract_id: PUBMED:12949027 Do perceptions of risk and quality of life affect use of hormone replacement therapy by postmenopausal women? Background: Although the understanding of the health impact of hormone replacement therapy (HRT) is incomplete, even less is known about the attitudes, perceptions, and motivations of women faced with the decision to use HRT. The purpose of this study was to evaluate the relation between HRT use and women's perceptions of the risks and benefits associated with HRT use. Methods: A written questionnaire was administered to 387 women, aged 45 years and older, responding to a health plan invitation for free bone mineral density screening. Women were asked to estimate the lifetime probability of developing breast cancer, uterine cancer, osteoporosis, and myocardial infarction when taking HRT and when not taking HRT. Women rated their quality of life in their current state of health, with breast cancer, with uterine cancer, with osteoporosis, and after myocardial infarction. Results: HRT users perceived a greater risk reduction using HRT compared with HRT nonusers for osteoporosis (-34.9% vs -17.8%, P < .001) and myocardial infarction (-20.7% vs -8.4%, P < .001). HRT nonusers perceived a greater risk increase using HRT compared with HRT users for breast cancer (16.5% vs 3.3%, P < .001) and uterine cancer (9.2% vs 0.6%, P = .004). HRT users estimated a greater quality-of-life reduction compared with HRT nonusers for osteoporosis (-31.0 vs -24.5, P = .006). Conclusions: Regardless of whether they used HRT, women in this study overestimated their risk for all four diseases. HRT users perceived greater benefit and less risk using HRT than nonusers. The results of our study show that continuing efforts are needed to help women understand the risks and benefits of HRT. abstract_id: PUBMED:18568785 Factors influencing women's quality of life in the later half of life. Background: Among older women in East Asia, and Taiwan in particular, there is little research on quality of life and the health care they receive to address the symptoms of menopause. This study evaluated factors which influence quality of life among post middle-age women in Taiwan. Methods: This cross-sectional study recruited 1250 women between 43 and 77 years of age during the year 2002. The factors investigated were demographics, menstruation status, menopausal symptoms, osteoporosis status, and use of hormone replacement therapy (HRT). The SF-36 was used to assess the health-related quality of life of these women. Correlation, multiple regression and path analysis were used to test for direct and indirect relationships among the variables. Results: There were statistically significant associations between menopause symptoms and quality of life across the different age groups. Path analysis shows a direct positive effect of HRT and a direct negative effect of climacteric symptoms on both physical and mental components of quality of life. Age, marital status, education and osteoporosis also have direct and indirect effects, some positive and others negative, on the components of quality of life. Conclusions: When developing programs to enhance health in post middle-age women, consideration should be given to symptom relief as well as quality of life.
abstract_id: PUBMED:26638154 Evaluation of the prevalence, type, severity, and risk factors of urinary incontinence and its impact on quality of life among women in Turkey. Introduction And Hypothesis: Our purpose was to determine the prevalence, type, and risk factors of urinary incontinence (UI) and their impacts on quality of life (QoL) of women in Turkey. Methods: This cross-sectional study was performed on 150 women aged 18-80 years at the Yildirim Beyazit University Hospital's Gynecology Outpatient Clinic in Turkey between May 2013 and September 2013. Data were collected using an individual information form and an incontinence QoL questionnaire (I-QOL). Following data distribution, we used the Mann-Whitney U test, Bonferroni-corrected Kruskal-Wallis H test, logistic regression analysis, Fisher's exact test, and the chi-square test. Results: Mean age of the study population was 48.7 ± 14.3 years and UI prevalence was 86.7%. The distribution of UI types was 37.7% stress incontinence (SUI), 3.1% urge (UUI), and 59.2% mixed (MUI). The I-QOL general average was 56.7 ± 23.28 (min 22, max 110). Most women had experienced UI for at least 5 continuous years and reported a negative impact on QoL; 43.2% of incontinent women had not received medical therapy. Postmenopause, uterine prolapsus, episiotomy, use of hormone replacement therapy (HRT), smoking, caffeine intake, family history of UI, macrosomia, and multiparity were risk factors for UI (p < 0.05). Conclusion: In this study, the prevalence of UI in women was substantial, and UI had a significantly negative impact on all aspects of QoL. However, these women had not sought medical help for the problem. Therefore, health professionals should query women of all ages about symptoms of this prevalent condition and offer treatment if it is detected. abstract_id: PUBMED:19811242 Quality of life and sexuality issues in aging women. Quality of life may decrease after menopause. Hormone replacement therapy remains the first-line and most effective treatment for menopausal symptoms and improvement of low quality of life due to estrogen deficiency. The decrease of health-related quality of life in women suffering from cardiovascular disease may be superimposed on the decrease of quality of life induced by menopause itself. Postmenopausal women with acute cardiovascular disease have a significantly higher probability of death than men of the same age. Quality of life predicts long-term mortality. A myocardial infarction does not automatically interdict sexual activity. The Princeton guidelines classify patients suffering from cardiovascular diseases in three categories. Most patients belong to the low-risk category. In general, these patients can be safely encouraged to initiate or resume sexual activity or to receive treatment for sexual dysfunction. Patients at intermediate (or indeterminate) levels of risk should further receive cardiologic evaluation to be classified into either the low- or high-risk group. Patients in the high-risk category have to be stabilized by specific treatment for their cardiac condition before resumption of sexual activity, or initiation of treatment for sexual dysfunction. abstract_id: PUBMED:28118068 Quality of life in climacteric women. Health-related quality of life (HRQoL) refers to the effects of an individual's physical state on all aspects of psychosocial functioning. For postmenopausal women, HRQoL is the only global criterion that is decisive for their daily well-being.
Symptoms experienced during menopause and sociodemographic characteristics affect quality of life in postmenopausal women. In younger, symptomatic, postmenopausal women, HRQoL may be significantly diminished. However, quality of life after menopause is influenced by many additional, non-menopausal factors. In the last decades, more specific symptom lists or other questionnaires have been developed. Such scales would qualify as standardized or disease-specific by fulfilling four criteria: (1) they have been constructed on the basis of a factor analysis; (2) they consist of several subscales, each measuring a different aspect of a specific symptomatology; (3) the scales possess sound psychometric properties; and (4) they have been standardized using adequate populations of women. A variety of instruments currently dominating international practice are here reviewed. Therapeutic approaches that treat climacteric symptoms and all measures ameliorating unfavorable non-hormonal factors could improve HRQoL among postmenopausal women. This includes partnership and sexual counseling as well as psychosocial measures. Menopausal hormone therapy (MHT) may reverse this deterioration of HRQoL if it is due to postmenopausal estrogen deficiency. On the contrary, when MHT is prescribed to asymptomatic younger and older postmenopausal women, no gain in HRQoL can be obtained. abstract_id: PUBMED:12792296 Influences of hormone replacement therapy on postmenopausal women's health perceptions. Objective: To assess the beliefs of climacteric women regarding their health, menopause, and hormone replacement therapy (HRT). Design: Medical students were asked to interview 526 healthy women, ranging from 40 to 64 years of age, between January and February of 2002. Of that number, 26 (4.9%) declined to participate in the interview. Thus, 500 women were interviewed about their beliefs and perceptions regarding their quality of life and health risks, as well as their opinions on menopause and HRT. Results: The mean age of the sample was 53.3 ± 6.2 years; 83.4% were postmenopausal, and 18.8% were HRT users. Of the women interviewed, 38.6% believed that their health was good. Although 78.8% thought that cancer is the main cause of death, 64% of them considered themselves to be at high risk for cardiovascular disease and osteoporosis. Most (64%) believed that menopause deteriorates the quality of life and that it increases cardiovascular risk (52.4%) and osteoporosis (72.0%). The HRT users perceived that they had better health status (48.9% v 36.2%, P < 0.02) and smaller cardiovascular risk (54.3% v 66.3%, P < 0.04) than did the nonusers; however, they ignored the preventive effect of estrogens in osteoporosis. Conclusions: Women believe that menopause deteriorates their health. The HRT users perceived themselves to be healthier and to have a smaller risk for cardiovascular disease. abstract_id: PUBMED:12570037 Quality of life and menopause: the role of estrogen. The use of estrogen or hormone replacement therapy (ERT/HRT) in preventing disease in menopausal women has been well documented. Less attention has been paid to the menopausal symptoms that can impair the quality of life of menopausal women, such as hot flushes, sleep disorders, sexual dysfunction, and alterations in mood. Researchers have used a variety of methods to investigate these concerns. Decreases in ovarian hormones that occur with menopause have been implicated in these symptoms.
Ovarian hormones affect the central nervous system and urogenital tissues directly via receptors for estrogen, progesterone, and androgens. Changes in the symptoms of menopause consequential to estrogen therapy reflect the effect of this therapy on these tissues. Evidence supporting the effectiveness of ERT/HRT in the treatment of symptoms affecting quality of life is growing and supports the use of ERT/HRT during menopause. Because the most dramatic hormonal changes associated with menopause are related to estrogen and because estrogen is usually coadministered with a progestogen in patients with an intact uterus, this review is focused primarily on ERT/HRT. Because androgen therapy may also improve quality of life by enhancing perimenopausal and postmenopausal sexual desire, function, and general well-being, a brief discussion of androgen supplementation of ERT/HRT is also included. The ideal doses and combinations of hormones must be determined on an individual basis, taking into consideration benefits, risks, and interactions of the different hormone therapies. abstract_id: PUBMED:22017297 Predictors of quality of life in peri- and postmenopausal Polish women living in Lublin Voivodeship. Objectives: The aim of this observational cross-sectional study was to establish the factors that determine the quality of life in a sample of peri- and postmenopausal women and to answer the question of whether the quality of life of these women is dependent on currently or previously received hormone replacement therapy (HRT). Methods: The research was carried out by means of a survey method, postal questionnaire technique. Three standardized questionnaires: WHOQOL-BREF, Women's Health Questionnaire (WHQ) and SF-36 were used as research tools. An original questionnaire was also used. The study comprised a representative sample of the female population aged 45-65 years living in Lublin Province. The sample size was 2143 women. The domains of quality of life established by the WHOQOL-BREF, WHQ and SF-36 questionnaires were treated as dependent variables, whereas the sociodemographic variables, data concerning the women's gynecological history, their state of health and whether they received HRT or not were treated as independent variables. Results: At multivariate analysis, self-assessment of the state of health as poor or fair, the presence of urinary incontinence, the presence of chronic diseases, self-assessment of living conditions as poor, self-assessment of financial situation as poor, eligibility for benefits (pensions) for the disabled, and lower education level represented the most important predictors of poor quality of life. HRT use had an independent impact on women's quality of life only in one quality-of-life domain - sleep problems in the WHQ. Current HRT users were characterized by a slightly lower risk of quality of life reduction when compared with past HRT users and women who never used HRT. Conclusions: Strong predictors of the worse quality of life established in the research make it possible to single out a group of women who need special attention in the process of undertaking preventive or curative steps. abstract_id: PUBMED:24443950 A multicentric study regarding the use of hormone therapy during female mid-age (REDLINC VI). Background: Menopausal hormone therapy (HT) has shown benefits for women; however, associated drawbacks (i.e. risks, costs, fears) have currently determined its low use. 
Objective: To determine the prevalence of current HT use among mid-aged women and describe the characteristics of those who have never used, have abandoned, or are currently using HT. In addition, reasons for not using HT were analyzed. Method: This was a cross-sectional study that analyzed a total of 6731 otherwise healthy women (45-59 years old) from 15 cities in 11 Latin American countries. Participants were requested to fill out the Menopause Rating Scale (MRS) and a questionnaire containing sociodemographic data and items regarding the menopause and HT use. Results: The prevalence of current HT use was 12.5%. Oral HT (43.7%) was the most frequently used type of HT, followed by transdermal types (17.7%). The main factors related to the current use of HT included: positive perceptions regarding HT (odds ratio (OR) 11.53, 95% confidence interval (CI) 9.41-14.13), being postmenopausal (OR 3.47, 95% CI 2.75-4.36) and having a better socioeconomic level. A total of 48.8% of surveyed women had used HT in the past, but abandoned it due to symptom improvement or being unconcerned; fear of cancer or any other secondary effects were also reported, but in less than 10%. Among women who had never used HT, 28% reported the lack of medical prescription as the main reason, followed by the absence of symptoms (27.8%). Among those reporting lack of prescription as the main reason for not using HT, 30.6% currently had severe menopausal symptoms (total MRS score > 16); 19.5% of women were using alternative 'natural' therapies, with 35.1% of them displaying severe menopausal symptoms as compared to 22.5% observed among current HT users. Conclusion: The use of HT has not regained the rates observed a decade ago. Positive perceptions regarding HT were related to a higher use. Lack of medical prescription was the main reason for not using HT among non-users, many of whom were currently displaying severe menopausal symptoms. abstract_id: PUBMED:10868610 Comparison of regimens containing oral micronized progesterone or medroxyprogesterone acetate on quality of life in postmenopausal women: a cross-sectional survey. A cross-sectional survey was conducted to examine quality of life (QOL) related to physiological, somatic, and vasomotor effects of changing progestogen treatment from medroxyprogesterone acetate (MPA) to micronized progesterone in postmenopausal women. Eligible women (n = 176) were currently using hormone replacement therapy (HRT) containing micronized progesterone for 1-6 months and had previously received HRT containing MPA. QOL was assessed via telephone interview using the Greene Climacteric Scale and the Women's Health Questionnaire. When compared with the MPA-containing regimen, women using micronized progesterone-containing HRT experienced significant improvement in vasomotor symptoms, somatic complaints, and anxiety and depressive symptoms. Women reported improved perceptions of their patterns of vaginal bleeding and control of menopausal symptoms while on the micronized progesterone-containing regimen. Approximately 80% of women reported overall satisfaction with the micronized progesterone-containing regimen. A micronized progesterone-containing HRT regimen offers the potential for improved QOL as measured by improvement of menopause-associated symptoms. Answer: Yes, perceptions of risk and quality of life do affect the use of hormone replacement therapy (HRT) by postmenopausal women.
Studies have shown that women's attitudes and perceptions towards HRT can influence their decision to use it. For instance, HRT users perceive a greater risk reduction for osteoporosis and myocardial infarction when using HRT compared to nonusers, and they also estimate a greater quality-of-life reduction for osteoporosis (PUBMED:12949027). Additionally, quality of life among postmenopausal women is influenced by menopausal symptoms, and HRT has been shown to have a direct positive effect on both physical and mental components of quality of life (PUBMED:18568785). Furthermore, women who use HRT perceive themselves to be healthier and to have a smaller risk for cardiovascular disease compared to nonusers (PUBMED:12792296). The use of HRT is also associated with the treatment of menopausal symptoms that can impair the quality of life, such as hot flushes, sleep disorders, sexual dysfunction, and mood alterations (PUBMED:12570037). Moreover, current HRT users have been characterized by a slightly lower risk of quality of life reduction compared with past HRT users and women who never used HRT (PUBMED:22017297). The prevalence of HRT use is also influenced by women's perceptions, with positive perceptions regarding HRT being related to a higher use. However, a lack of medical prescription has been reported as the main reason for not using HRT among non-users, many of whom were currently displaying severe menopausal symptoms (PUBMED:24443950). Lastly, a change in the type of progestogen in HRT from medroxyprogesterone acetate to micronized progesterone has been associated with improved quality of life in terms of vasomotor symptoms, somatic complaints, and anxiety and depressive symptoms (PUBMED:10868610).
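Note on the odds ratios reported above: several of these abstracts (e.g., PUBMED:24443950) summarize the link between perceptions and hormone therapy use as odds ratios with 95% confidence intervals estimated by logistic regression. As a minimal illustration of where such numbers come from - using invented counts, not the study's data, and a plain 2x2-table calculation rather than the multivariable model the authors used - the odds ratio and its Wald confidence interval can be computed as follows:

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table.
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)

# Hypothetical counts: current HT use (outcome) by positive perception of HT (exposure).
or_, (lo, hi) = odds_ratio_wald_ci(a=520, b=1480, c=320, d=4411)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")
```

A multivariable logistic regression, as used in the survey, adjusts such an estimate for covariates (menopausal status, socioeconomic level, and so on) simultaneously, which a single 2x2 table cannot do.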
Instruction: Does balance or motor impairment of limbs discriminate the ambulatory status of stroke survivors? Abstracts: abstract_id: PUBMED:12649653 Does balance or motor impairment of limbs discriminate the ambulatory status of stroke survivors? Objective: This study was performed to determine if ambulatory function is governed by motor impairment of limbs or balance ability in subjects with hemiplegia caused by stroke. Design: Seven patients who walked with physical assistance (FIM 4) after stroke and 13 who walked independently with assistive devices (FIM 6) were compared with 13 healthy subjects. Motor impairment of limbs was evaluated with the Fugl-Meyer Assessment. The Berg Balance Scale and the limit of stability test of the Smart Balance Master were used to evaluate balance ability. Results: The FIM 6 group and the controls were best differentiated by motor impairment of the paretic limbs and limit of stability in the backward direction. Motor impairment of the upper limb and limit of stability in the direction toward the paretic side separated the FIM 4 from the FIM 6 group. Upper limb motor impairment and the Berg Balance Scale consistently separated the three subject groups. Conclusions: Motor impairment in the paretic upper limb and balance dysfunction should be addressed in treatments working toward independent ambulation. abstract_id: PUBMED:37026840 Does Spasticity Correlate With Motor Impairment in the Upper and Lower Limbs in Ambulatory Chronic Stroke Survivors? Objective: This study aimed to explore correlations between spasticity and motor impairments in the upper and lower limbs in ambulatory chronic stroke survivors. Design: We performed clinical assessments in 28 ambulatory chronic stroke survivors with spastic hemiplegia (female: 12; male: 16; mean age = 57.8 ± 11.8 yrs; 76 ± 45 mos after stroke). Results: In the upper limb, the spasticity index and the Fugl-Meyer Motor Assessment showed a significant correlation. The spasticity index for the upper limb showed a significant negative correlation with handgrip strength of the affected side (r = -0.4, P = 0.035), while the Fugl-Meyer Motor Assessment for the upper limb had a significant positive correlation (r = 0.77, P < 0.001). In the lower limb, no correlation was found between the spasticity index and the Fugl-Meyer Motor Assessment. There was a significant and high correlation between the timed up and go test and gait speed (r = 0.93, P < 0.001). Gait speed was positively correlated with the spasticity index for the lower limb (r = 0.48, P = 0.01), and negatively correlated with the Fugl-Meyer Motor Assessment for the lower limb (r = -0.57, P = 0.002). Age and time since stroke showed no association in analyses for both the upper limb and the lower limb. Conclusions: Spasticity has a negative correlation with motor impairment in the upper limb but not in the lower limb. Motor impairment was significantly correlated with grip strength in the upper limb and gait performance in the lower limb of ambulatory stroke survivors. abstract_id: PUBMED:30902097 Does severity of motor impairment affect reactive adaptation and fall-risk in chronic stroke survivors? Background: A single session of slip-perturbation training has been shown to induce long-term fall risk reduction in older adults. Considering the spectrum of motor impairments and deficits in reactive balance after a cortical stroke, we aimed to determine if chronic stroke survivors could acquire and retain reactive adaptations to large slip-like perturbations and if these adaptations were dependent on severity of motor impairment.
Methods: Twenty-six chronic stroke participants were categorized into high- and low-functioning groups based on their Chedoke-McMaster Assessment scores. All participants received a pre-training, slip-like stance perturbation at level-III (highest intensity/acceleration) followed by 11 perturbations at a lower intensity (level-II). If, in the early phase, participants experienced > 3/5 falls, they were trained at a still lower intensity (level-I). Post-training, immediate scaling and short-term retention at 3 weeks post-training were examined. Perturbation outcome and post-slip center-of-mass (COM) stability were analyzed. Results: On the pre-training trial, 60% of high- and 100% of low-functioning participants fell. The high-functioning group tolerated and adapted at training intensity level-II, but the low-functioning group was trained at level-I (all had > 3 falls on level-II). At their respective training intensities, both groups significantly lowered fall incidence from the 1st through 11th trials, with improved post-slip stability and an anterior shift in COM position resulting from increased compensatory step length. Both groups demonstrated immediate scaling and short-term retention of the acquired stability control. Conclusion: Chronic stroke survivors are able to acquire and retain adaptive reactive balance skills to reduce fall risk. Although similar adaptation was demonstrated by both groups, the low-functioning group might require a greater dosage with gradual increments in training intensity. abstract_id: PUBMED:36826546 Effect of Core Exercises on Motor Function Recovery in Stroke Survivors with Very Severe Motor Impairment. Paresis of the upper and lower limbs is a typical issue in stroke survivors. This study aims to determine whether core exercises help stroke survivors with very severe motor impairment recover their motor function. This study employed a within-subjects design. Eleven hemiparetic stroke patients with very severe motor impairment (FMA score < 35) and ages ranging from 24 to 52 years old were enrolled in this study. All participants engaged in supervised core exercise training twice a week for 12 weeks. The main outcome measures were the Fugl-Meyer Assessment Lower Extremity (FMA-LE) and Fugl-Meyer Assessment Upper Extremity (FMA-UE), which were measured before training and at intervals of four weeks during training. Repeated measures ANOVA was used to analyze the effect of core exercises on motor function performance and on lower extremity and upper extremity motor function recovery. There were significant differences in the mean scores for motor function performance, lower extremity motor function, and upper extremity motor function throughout the four time points. A post-hoc pairwise comparison using the Bonferroni correction revealed that mean scores significantly increased and were statistically different between the initial assessment and follow-up assessments four, eight, and twelve weeks later. This study suggests that 12 weeks of core exercise training is effective for improving motor function recovery in patients with very severe motor impairment. abstract_id: PUBMED:28637127 Clinical utility of the modified trunk impairment scale for stroke survivors. Objective: The present study aimed to determine the discriminant power of the modified Trunk Impairment Scale (mTIS) in stroke survivors versus healthy adults. Design: Cross-sectional. Setting: Inpatient rehabilitation center. Participants: Fifty-five subjects with stroke and 29 healthy adults.
Methods: Subjects were examined using the mTIS, Berg Balance Scale, and Timed Up and Go test for balance; 5-m Walk Test and Functional Ambulation Category for gait; Fugl-Meyer Assessment for motor function; Postural Assessment Scale for Stroke-Trunk Control and Trunk Control Test for trunk control; and Modified Barthel Index for activities of daily living performance. Results: The mTIS results differed significantly between stroke survivors and healthy adults (p < 0.001). In addition, mTIS scores were significantly correlated with the Berg Balance Scale (r = 0.82), Timed Up and Go test (r = -0.70), 5-m Walk Test (r = 0.73), Functional Ambulation Category (r = 0.54), Fugl-Meyer Assessment (r = 0.37-0.80), Postural Assessment Scale for Stroke-Trunk Control and Trunk Control Test (r = 0.55-0.63), and Modified Barthel Index score (r = 0.56) results (p < 0.05-0.01). The mTIS also showed 66% influence on the Berg Balance Scale, 49% on the Timed Up and Go test, 53% on the 5-m Walk Test, 28% on the Functional Ambulation Category, 12% on the Fugl-Meyer Assessment-upper extremity, 64% on the Fugl-Meyer Assessment-lower extremity, and 30% on the Modified Barthel Index. The cutoff value of the mTIS for the Modified Barthel Index classification was >10.5 points, while the area under the curve had a moderate accuracy of 73%. Conclusion: The mTIS can be used to examine the degree of trunk control or the level of trunk impairment, which is seen as a prerequisite for balance, gait, motor function, and activities of daily living performance in stroke survivors. Implications for Rehabilitation: The modified Trunk Impairment Scale can be used as an assessment tool to classify the degree of trunk control or its level of impairment in stroke survivors. The modified Trunk Impairment Scale may have a favorable correlation with assessing physical functions such as balance, gait, motor function, and ADL in stroke survivors. abstract_id: PUBMED:36743203 Assessment of Lower Limb Motor Function, Ambulation, and Balance After Stroke. Restoration of ambulation is important for stroke patients. Valid and reliable methods are required for the assessment of lower limb functional status. We reviewed the psychometric properties of methods employed to assess lower extremity motor function, ambulation, and balance, with a focus on stroke patients. We define "motor function" as the ability to produce bodily movements when the brain, motor neurons, and muscles interact. "Ambulation" is defined as the ability to walk with or without a personal assistive device, and "balance" as the ability to maintain stability (without falling) during various physical activities. The Motricity Index and Fugl-Meyer Assessment of Lower Extremities assess the motor function of the lower limbs. The Functional Ambulation Category, 10-m Walk Test, and 6-minute Walk Test assess ambulation. The Berg Balance Scale, Timed Up and Go Test, Functional Reach Test, and Trunk Impairment Scale explore balance. All these tests exhibit high-level validity and have good inter-rater and test-retest reliabilities. However, only 3 methods have been formally translated into Korean. The methods discussed here can be used for standardized assessment, personalized goal setting, rehabilitation planning, and estimation of therapeutic efficacy. abstract_id: PUBMED:29718777 Effects of a 12-month task-specific balance training on the balance status of stroke survivors with and without cognitive impairments in Selected Hospitals in Nnewi, Anambra State, Nigeria.
Background: Stroke results in varying levels of physical disabilities that may adversely impact balance, with an increased tendency to falls. This may intensify with cognitive impairments (CI) and impede functional recovery. Therefore, task-specific balance training (TSBT), which presents versatile task-specific training options that match varied individual needs, was explored as a beneficial rehabilitation regime for stroke survivors with and without CI. It was hypothesized that there would be no significant difference in the balance control measures of stroke survivors with and without CI after a 12-month TSBT. Objective: To determine if TSBT will have comparable beneficial effects on the balance control status of sub-acute ischemic stroke survivors with CI and without CI. Methods: One hundred of 143 available sub-acute first-ever ischemic stroke survivors were recruited using a convenience sampling technique in a quasi-experimental study. They were later assigned into the cognitive impaired group (CIG) and non-cognitive impaired group (NCIG), respectively, based on the baseline presence or absence of CI, after screening with the Mini-Mental State Examination (MMSE) tool. With the help of four trained research assistants, TSBT was applied to each group, three times a week, 60 mins per session, for 12 months. Their balance was measured as Berg Balance Scale scores (BBS) at baseline, 4th, 8th, and 12th month intervals. Data were analyzed statistically using the Kruskal-Wallis test and repeated-measures ANOVA, at p < 0.05. Results: There was significant improvement across time points in the balance control of the CIG, with a large effect size of 0.69 after 12 months of TSBT. There was also significant improvement across time points in the balance control of the NCIG, with a large effect size of 0.544 after 12 months of TSBT. There was no significant difference between the improvement in the CIG and the NCIG after the 8th and 12th months of TSBT. Conclusions: Within the groups, a 12-month TSBT intervention significantly improved balance control, respectively, but with broader effects in the CIG than the NCIG. Importantly, though between-group comparison at baseline revealed significantly more impaired balance control in the CIG than the NCIG, these differences were not significant at the 8th month and non-existent at the 12th month of TSBT intervention. These results underscore the robustness of TSBT to evenly address specific balance deficits of stroke survivors with and without CI within a long-term rehabilitation plan, as was hypothesized. abstract_id: PUBMED:35720692 Upper Limbs Muscle Co-contraction Changes Correlated With the Impairment of the Corticospinal Tract in Stroke Survivors: Preliminary Evidence From Electromyography and Motor-Evoked Potential. Objective: Increased muscle co-contraction of the agonist and antagonist muscles during voluntary movement is commonly observed in the upper limbs of stroke survivors. Much remains to be understood about the underlying mechanism. The aim of the study is to investigate the correlation between increased muscle co-contraction and the function of the corticospinal tract (CST). Methods: Nine stroke survivors and nine age-matched healthy individuals were recruited. All the participants were instructed to perform isometric maximal voluntary contraction (MVC) and a horizontal task which consists of sponge grasp, horizontal transportation, and sponge release.
We recorded electromyography (EMG) activities from four muscle groups during the MVC test and horizontal task in the upper limbs of stroke survivors. The muscle groups consist of extensor digitorum (ED), flexor digitorum (FD), triceps brachii (TRI), and biceps brachii (BIC). The root mean square (RMS) of EMG was applied to assess the muscle activation during horizontal task. We adopted a co-contraction index (CI) to evaluate the degree of muscle co-contraction. CST function was evaluated by the motor-evoked potential (MEP) parameters, including resting motor threshold, amplitude, latency, and central motor conduction time. We employed correlation analysis to probe the association between CI and MEP parameters. Results: The RMS, CI, and MEP parameters on the affected side showed significant difference compared with the unaffected side of stroke survivors and the healthy group. The result of correlation analysis showed that CI was significantly correlated with MEP parameters in stroke survivors. Conclusion: There existed increased muscle co-contraction and impairment in CST functionality on the affected side of stroke survivors. The increased muscle co-contraction was correlated with the impairment of the CST. Intervention that could improve the excitability of the CST may contribute to the recovery of muscle discoordination in the upper limbs of stroke survivors. abstract_id: PUBMED:31570211 Body weight support-Tai Chi footwork for balance of stroke survivors with fear of falling: A pilot randomized controlled trial. Background And Purpose: Balance impairment is the predominant risk factor for falls in stroke survivors. This study examined the effects of body weight support-Tai Chi (BWS-TC) footwork on balance control among stroke survivors with fear of falling (FOF). Materials And Methods: Twenty-eight stroke survivors with FOF were randomly allocated to either control or BWS-TC groups. Those in BWS-TC underwent Tai Chi training for 12 weeks. Outcomes were assessed in all participants by evaluation of the limits of stability test, modified clinical test of sensory integration of balance, fall risk index, and Fugl-Meyer assessment of lower limbs at baseline and 12 weeks. Results: The BWS-TC group displayed significant enhancement in dynamic control and vestibular and somatosensory integration. Conclusion: BWS-TC may enhance dynamic control and sensory integration of balance and reduce the risk of fall in stroke survivors with FOF. abstract_id: PUBMED:30205998 Prevalence of Balance Impairment Among Stroke Survivors Undergoing Neurorehabilitation in Nigeria. Background: Poststroke balance impairment adversely affects stroke outcomes and addressing the impairment is expected to constitute an important focus of neurorehabilitation. Aims: To examine the prevalence and factors associated with balance impairment after stroke. Methods: Ninety-five stroke survivors undergoing neurorehabilitation at 2 government hospitals in Northern Nigeria participated in this cross-sectional study. Berg Balance Scale (BBS) was used to assess the presence of balance impairment (BBS score of 0-20). Prevalence of balance impairment was presented as frequency and percentage while demographic and stroke-related determinants of balance impairments were identified using logistic regression analysis. Results: Thirty-five (36.8%) stroke survivors had balance impairment, and age, gender, and poststroke duration were statistically significant determinants. 
Stroke survivors aged less than 40 years (odds ratio [OR] = .14 [confidence interval [CI] = .20-.94]) and 40-59 years (OR = .23 [CI = .06-.81]) had a lower likelihood of having balance impairment compared to stroke survivors aged 60 years and above. Similarly, males had a lower likelihood of having balance impairment (OR = 1.60 [CI = .05-.55]) compared to females while those in the acute/subacute phase of stroke had a 7-fold likelihood of having balance impairment (OR = 7.74 [CI = 2.63-22.79]) compared to those with chronic stroke. Conclusions: Poststroke balance impairment appears to be significantly influenced by stroke survivors' age, gender, and poststroke duration. Hence, these variables should be considered when planning rehabilitation strategies for improving balance after stroke. Answer: Yes, balance and motor impairment of limbs do discriminate the ambulatory status of stroke survivors. Studies have shown that motor impairment in the paretic upper limb and balance dysfunction are important factors that should be addressed in treatments working toward independent ambulation (PUBMED:12649653). Additionally, spasticity has been found to have a negative correlation with motor impairment in the upper limb but not in the lower limb, and motor impairment was significantly correlated with grip strength in the upper limb and gait performance in the lower limb of ambulatory stroke survivors (PUBMED:37026840). Furthermore, the severity of motor impairment affects reactive adaptation and fall-risk in chronic stroke survivors, indicating that those with more severe motor impairments may require a greater dosage of training with a gradual increment in intensity (PUBMED:30902097). Core exercises have also been shown to be effective for improving motor function recovery in patients with very severe motor impairment (PUBMED:36826546). The modified Trunk Impairment Scale (mTIS) can be used to examine the degree of trunk control or the level of trunk impairment, which is seen as a prerequisite for balance, gait, motor function, and activities of daily living performance in stroke survivors (PUBMED:28637127). Lastly, the prevalence of balance impairment among stroke survivors undergoing neurorehabilitation is significant, and factors such as age, gender, and poststroke duration are determinants of balance impairment, which in turn can influence ambulatory status (PUBMED:30205998).
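A brief methodological aside on the statistics that appear throughout the stroke abstracts above: the bivariate associations are reported as Pearson correlation coefficients (e.g., PUBMED:37026840), and the mTIS cutoff of >10.5 points with a 73% area under the curve (PUBMED:28637127) is the kind of value obtained from a receiver operating characteristic (ROC) analysis. The sketch below shows, on made-up stand-in arrays rather than the published measurements, how such a correlation and a Youden-index cutoff are typically computed:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical paired measurements: spasticity index vs. Fugl-Meyer score (not study data).
spasticity = rng.normal(50, 10, 60)
fugl_meyer = 100 - 0.8 * spasticity + rng.normal(0, 8, 60)
r, p = pearsonr(spasticity, fugl_meyer)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

# Hypothetical mTIS scores with a binary functional-independence label for a ROC cutoff.
mtis = rng.integers(0, 17, 80).astype(float)
independent = (mtis + rng.normal(0, 3, 80) > 9).astype(int)
fpr, tpr, thresholds = roc_curve(independent, mtis)
best = thresholds[np.argmax(tpr - fpr)]  # Youden index J = sensitivity + specificity - 1
print(f"AUC = {roc_auc_score(independent, mtis):.2f}, optimal cutoff > {best:.1f} points")
```

The published cutoff and AUC come from real patient data and the Modified Barthel Index classification described in the abstract; the code only illustrates the mechanics of the calculation.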
Instruction: Ranking trauma center quality: can past performance predict future performance? Abstracts: abstract_id: PUBMED:24368643 Ranking trauma center quality: can past performance predict future performance? Objective: To explore whether trauma center quality metrics based on historical data can reliably predict future trauma center performance. Background: The goal of the American College of Surgeons Trauma Quality Improvement Program is to create a new paradigm in which high-quality trauma centers can serve as learning laboratories to identify best practices. This approach assumes that trauma quality reporting can reliably identify high-quality centers using historical data. Methods: We performed a retrospective observational study on 122,408 patients in 22 level I and level II trauma centers in Pennsylvania. We tested the ability of the Trauma Mortality Prediction Model to predict future hospital performance based on historical data. Results: Patients admitted to the lowest performance hospital quintile had a 2-fold higher odds of mortality than patients admitted to the best performance hospital quintile using either 2-year-old data [adjusted odds ratio (AOR): 2.11; 95% confidence interval (CI): 1.36-3.27; P < 0.001] or 3-year-old data (AOR: 2.12; 95% CI: 1.34-3.21; P < 0.001). There was a trend toward increased mortality using 5-year-old data (AOR: 1.70; 95% CI: 0.98-2.95; P = 0.059). The correlation between hospital observed-to-expected mortality ratios in 2009 and 2007 demonstrated moderate agreement (intraclass correlation coefficient = 0.56; 95% CI: 0.22-0.77). The intraclass correlation coefficients for observed-to-expected mortality ratios obtained using 2009 data and 3-, 4-, or 5-year-old data were not significantly different from zero. Conclusions: Trauma center quality based on historical data is associated with subsequent patient outcomes. Patients currently admitted to trauma centers that are classified as low-quality centers using 2- to 5-year-old data are more likely to die than patients admitted to high-quality centers. However, although the future performance of individual trauma centers can be predicted using performance metrics based on 2-year-old data, the performance of individual centers cannot be predicted using data that are 3 years or older. abstract_id: PUBMED:23492970 Evaluating trauma center structural performance: The experience of a Canadian provincial trauma system. Background: Indicators of structure, process, and outcome are required to evaluate the performance of trauma centers to improve the quality and efficiency of care. While periodic external accreditation visits are part of most trauma systems, a quantitative indicator of structural performance has yet to be proposed. The objective of this study was to develop and validate a trauma center structural performance indicator using accreditation report data. Materials And Methods: Analyses were based on accreditation reports completed during on-site visits in the Quebec trauma system (1994-2005). Qualitative report data was retrospectively transposed onto an evaluation grid and the weighted average of grid items was used to quantify performance. The indicator of structural performance was evaluated in terms of test-retest reliability (kappa statistic), discrimination between centers (coefficient of variation), content validity (correlation with accreditation decision, designation level, and patient volume) and forecasting (correlation between visits performed in 1994-1999 and 1998-2005).
Results: Kappa statistics were >0.8 for 66 of the 73 (90%) grid items. Mean structural performance score over 59 trauma centers was 47.4 (95% CI: 43.6-51.1). Two centers were flagged as outliers and the coefficient of variation was 31.2% (95% CI: 25.5% to 37.6%), showing good discrimination. Correlation coefficients of associations with accreditation decision, designation level, and volume were all statistically significant (r = 0.61, -0.40, and 0.24, respectively). No correlation was observed over time (r = 0.03). Conclusion: This study demonstrates the feasibility of quantifying trauma center structural performance using accreditation reports. The proposed performance indicator shows good test-retest reliability, between-center discrimination, and construct validity. The observed variability in structural performance across centers and over time underlines the importance of evaluating structural performance in trauma systems at regular intervals to drive quality improvement efforts. abstract_id: PUBMED:23723617 Evaluating trauma center process performance in an integrated trauma system with registry data. Background: The evaluation of trauma center performance implies the use of indicators that evaluate clinical processes. Despite the availability of routinely collected clinical data in most trauma systems, quality improvement efforts are often limited to hospital-based audit of adverse patient outcomes. Objective: To identify and evaluate a series of process performance indicators (PPI) that can be calculated using routinely collected trauma registry data. Materials And Methods: PPI were identified using a review of published literature, trauma system documentation, and expert consensus. Data from the 59 trauma centers of the Quebec trauma system (1999, 2006; N = 99,444) were used to calculate estimates of conformity to each PPI for each trauma center. Outliers were identified by comparing each center to the global mean. PPI were evaluated in terms of discrimination (between-center variance), construct validity (correlation with designation level and patient volume), and forecasting (correlation over time). Results: Fifteen PPI were retained. Global proportions of conformity ranged between 6% for reduction of a major dislocation within 1 h and 97% for therapeutic laparotomy. Between-center variance was statistically significant for 13 PPI. Five PPI were significantly associated with designation level, 7 were associated with volume, and 11 were correlated over time. Conclusion: In our trauma system, results suggest that a series of 15 PPI supported by literature review or expert opinion can be calculated using routinely collected trauma registry data. We have provided evidence of their discrimination, construct validity, and forecasting properties. The between-center variance observed in this study highlights the importance of evaluating process performance in integrated trauma systems. abstract_id: PUBMED:18784577 Ranking of trauma center performance: the bare essentials. Background: Evaluation of trauma center performance has been limited to comparisons of observed versus expected mortality using trauma and injury severity score methodology. Few studies have focused on identifying top performers. In part, this is due to the perceived need for extensive data required to adequately risk adjust. We set out to identify the patient and injury-related factors that most affect case-mix across centers and thus are most likely to alter assessments of hospital performance.
Methods: One hundred ninety trauma centers contributing data to the National Trauma Databank (NTDB) during 2004 to 2005 were used for hospital rankings (n = 169,929 patients). Trauma centers were ranked by crude mortality. We then added variables [injury severity score {ISS}, systolic blood pressure {SBP}, mechanism, age, gender, comorbidities, body region abbreviated injury scale {AIS}] singly to a risk-adjustment model to obtain adjusted probability of death. Trauma centers were then ranked again. The variable that affected rankings the greatest was kept and the process was repeated in an iterative fashion until the incremental change in ranks was minimal. Results: ISS accounted for the most variation in mortality rates across trauma centers, shown by the large rank change with addition of ISS to the model. Specifically, when ISS was taken into consideration, 92% of trauma centers changed their rank by ≥3 and almost half changed their quartile rank by at least 1. In lesser order of importance, age, SBP, head AIS, mechanism, gender, and abdominal AIS were relevant to adjust for case mix. Conclusions: Trauma center rankings are affected by few parameters, reflecting their relationship to mortality and their relative frequencies. Complex risk adjustment methodology is not required to address differences in case mix. Data abstraction for the purpose of comparing trauma center performance should focus on ensuring that, at minimum, these variables are collected with a high degree of accuracy. abstract_id: PUBMED:35026442 The Effects of COVID-19 Pandemic on Trauma Registry and Performance Improvement Operations and Workforce Nationwide: A Survey of Trauma Center Association of America Members. Background: Trauma Centers integrate Trauma Registrars and Performance Improvement Nurses to drive quality care. Delays in their duties could have negative impacts on outcomes and performance. We aim to investigate the impact of the COVID-19 pandemic on Trauma Center operations by assessing performance of trauma registry and performance improvement processes across the United States. Methods: A cross-sectional study was performed utilizing data from two anonymous questionnaires distributed to Trauma Center Association of America members. Descriptive statistics, Fisher's Exact Test, and multivariable logistic regression were performed, with statistical significance defined as P < 0.05. Results: 90.2% (83) of Trauma Registrars and 85.9% (67) of Performance Improvement personnel reported that their Trauma Centers have treated COVID-19 patients. Among trauma registrars, respondents did not significantly differ in the current status of completing registry cases (P > 0.05), during COVID-19 compared to prior (P > 0.05), or in the adjusted odds of COVID-19 delaying completion of entries (P > 0.05). Having >2 Performance Improvement Nurses was significantly associated with improved performance during the COVID-19 pandemic (P = 0.03), whereas working at a Trauma Center which treats an adults-only or mixed patient population (adult and pediatric) was associated with being 1-3 months behind in closing of performance improvement cases (P = 0.02). Conclusions: The negative impact of COVID-19 on Trauma Registrars and Performance Improvement Nurses has been minimal. Adequate staffing/experience seem to mitigate delays and decreased performance.
Implementation of expanded staffing, improved training, and evidence-based revision of Trauma Center logistics may help mitigate future disruptions relating to COVID-19 and allow Trauma Centers to recover and improve their operations. abstract_id: PUBMED:35232439 Designing and conducting initial application of a performance assessment model for in-hospital trauma care. Background: Trauma is a major cause of death worldwide, especially in Low- and Middle-Income Countries (LMIC). The increase in health care costs and the differences in the quality of provided services indicate the need for trauma care evaluation. This study was done to develop and use a performance assessment model for in-hospital trauma care focusing on traffic injuries. Methods: This multi-method study was conducted in three main phases of determining indicators, model development, and model application. Trauma care performance indicators were extracted through literature review and confirmed using a two-round Delphi survey and experts' perspectives. Two focus group discussions and 16 semi-structured interviews were conducted to design the prototype. In the next step, components and the final form of the model were confirmed following pre-determined factors, including importance and necessity, simplicity, clarity, and relevance. Finally, the model was tested by applying it in a trauma center. Results: A total of 50 trauma care indicators were approved after reviewing the literature and obtaining the experts' views. The final model consisted of six components: assessment level, teams, methods, scheduling, frequency, and data source. The model application revealed problems of a selected trauma center in terms of information recording, patient disposition, some clinical services, waiting time for deposit, recording of medical errors and complications, patient follow-up, and patient satisfaction. Conclusion: Performance assessment with an appropriate model can identify deficiencies and failures of services provided in trauma centers. Understanding the current situation is one of the main requirements for designing any quality improvement program. abstract_id: PUBMED:36027606 How does malingered PTSD affect continuous performance task performance? The purpose of this study was to determine how malingered PTSD behavior affects the performance of a continuous performance task (CPT). An analog trauma group, two malingering groups (with or without educational intervention), and a control group were organized according to a simulation design. During the CPT, the numbers of errors and response time indicators, along with the post-error slowing (PES) and recovery (PER) process, were measured. Results are as follows: First, the analog trauma group showed deficits of response inhibition and a higher level of PES compared to the control group. Second, malingered PTSD caused a significant number of errors, inconsistent performance, and no PES. Third, there was significantly more impaired and inconsistent performance in the group with a low level of knowledge of the disability. Finally, a discriminant accuracy of more than 90% appeared in the discriminant analysis of all group comparison conditions. Taken together, the results of this study show that post-error behavior indicators are affected by malingered PTSD, and differences according to the degree of knowledge of PTSD can also be confirmed. These results are expected to be used as basic data for the development of tasks for the detection of malingerers in clinical settings in the future.
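Before the remaining abstracts, one clarifying note on the ranking methodology described above (PUBMED:24368643, PUBMED:18784577): centers are typically compared through an observed-to-expected (O/E) mortality ratio, where the expected count comes from a patient-level risk-adjustment model, and the rankings are then examined for stability when covariates are added or when older data are used. The sketch below shows the general shape of that calculation on a hypothetical registry table; the column names, the simple logistic model, and the simulated data are illustrative assumptions, not the Trauma Mortality Prediction Model or the NTDB/Pennsylvania data:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Hypothetical patient-level registry (column names are illustrative assumptions).
df = pd.DataFrame({
    "center": rng.integers(0, 20, n),   # treating trauma center ID
    "iss": rng.integers(1, 60, n),      # injury severity score
    "age": rng.integers(16, 95, n),
    "sbp": rng.normal(120, 25, n),      # systolic blood pressure on admission
})
true_logit = -6 + 0.09 * df["iss"] + 0.03 * df["age"] - 0.01 * df["sbp"]
df["died"] = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

# Risk-adjustment model: predicted probability of death for every patient.
X = df[["iss", "age", "sbp"]]
model = LogisticRegression(max_iter=1000).fit(X, df["died"])
df["expected"] = model.predict_proba(X)[:, 1]

# Observed-to-expected mortality ratio per center; O/E < 1 suggests better-than-expected survival.
oe = df.groupby("center").apply(lambda g: g["died"].sum() / g["expected"].sum())
print(oe.sort_values().round(2))
```

The cited studies go one step further: they quantify how much the rankings move when a covariate such as ISS is added (rank or quartile changes) and how well O/E ratios computed from older data agree with current ones (intraclass correlation), which is what supports their conclusions about using 2-year-old versus older data.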
abstract_id: PUBMED:28391723 The impact of patellar tendinopathy on sports and work performance in active athletes. Greater insight into sports and work performance of athletes with patellar tendinopathy (PT) will help establish the severity of this common overuse injury. The primary aim of this study was to investigate the impact of PT on sports and work performance. Seventy-seven active athletes with PT (50 males; age 28.1 ± 8.2 years; Victorian Institute of Sports Assessment Patella 56.4 ± 12.3) participated in this survey. Sports performance, work ability and work productivity were assessed using the Oslo Sports Trauma Research Center overuse injury questionnaire, the single-item Work Ability Index and the Quantity and Quality questionnaire, respectively. Reduced sports performance was reported by 55% of the participants; 16% reported reduced work ability and 36% decreased work productivity, with 23% and 58%, respectively, for physically demanding work. This study shows that the impact of PT on sports and work performance is substantial and stresses the importance of developing preventive measures. abstract_id: PUBMED:19912295 Outcome evaluation in glioblastoma patients using different ranking scores: KPS, GOS, mRS and MRC. Patient performance is a generally accepted independent prognostic factor in glioblastoma patients. Its estimation is essential for treatment planning, follow-up and clinical trials. Patient performance is mostly determined by usage of the Karnofsky Performance Score (KPS) for cancer patients. However, several other ranking scores have been developed specifically for patients with neurological diseases: the Glasgow Outcome Score (GOS) for trauma patients, the modified Rankin Score (mRS) for stroke patients and the Medical Research Council brain prognostic index (MRC) for brain tumour patients. The aims of this study were: (1) to compare these four performance scores in their ability to determine patient survival; and (2) to compare the prognostic value of performance with that of other prognostic factors. Univariate and multivariate survival analysis was used. Survival analysis revealed a high correlation to survival for all four scores. The maximum derivation of the curves was shown for the MRC and GOS. Performance had more clinical impact in determining patient survival than age and tumour resection. Differential treatment planning may need the formation of more than two patient groups. This was possible with the MRC, as well as the GOS and KPS. Forming more than three patient groups was not effective with any score. abstract_id: PUBMED:25097664 Abdominal injuries in a major Scandinavian trauma center - performance assessment over an 8-year period. Introduction: Damage control surgery and damage control resuscitation have reduced mortality in patients with severe abdominal injuries. The shift towards non-operative management in haemodynamically stable patients suffering blunt abdominal trauma has further contributed to the improved results. However, in many countries, low volume of trauma cases and limited exposure to trauma laparotomies constitute a threat to trauma competence. The aim of this study was to evaluate the institutional patient volume and performance for patients with abdominal injuries over an eight-year period. Methods: Data from 955 consecutive trauma patients admitted to Oslo University Hospital Ulleval with abdominal injuries during the eight-year period 2002-2009 were retrospectively explored.
A separate analysis was performed on all trauma patients undergoing laparotomy during the same period, whether abdominal injuries were identified or not. Variable life-adjusted display (VLAD) was used in order to describe risk-adjusted survival trends throughout the period and the patients admitted before (Period 1) and after (Period 2) the institution of a formal Trauma Service (2005) were compared. Results: There was a steady increase in admitted patients with abdominal injuries, while the number of patients undergoing laparotomy was constant exposing the surgical trauma team leaders to an average of 8 trauma laparotomies per year. No increase in missed injuries or failures of non-operative management was detected. Unadjusted mortality rates decreased from period 1 to period 2 for all patients with abdominal injuries as well as for the patients undergoing laparotomy. However, this apparent decrease was not confirmed as significant in TRISS-based analysis of risk-adjusted mortality. VLAD demonstrated a steady performance throughout the study period. Conclusion: Even in a high volume trauma center the exposure to abdominal injuries and trauma laparotomies is limited. Due to increasing NOM, an increasing number of patients with abdominal injuries was not accompanied by an increase in number of laparotomies. However, we have demonstrated a stable performance throughout the study period as visualized by VLAD without an increase in missed injuries or failures of NOM. Answer: The question of whether past performance of trauma centers can predict future performance has been explored in several studies. According to a retrospective observational study on patients in Pennsylvania, trauma center quality based on historical data is associated with subsequent patient outcomes. Specifically, patients admitted to trauma centers classified as low-quality using 2- to 5-year-old data had higher odds of mortality compared to those admitted to high-quality centers. However, the ability to predict future performance of individual trauma centers using data that are 3 years or older was not supported, indicating that performance metrics based on 2-year-old data could predict future performance, but not those based on older data (PUBMED:24368643). In Quebec, a study developed and validated a trauma center structural performance indicator using accreditation report data. The indicator showed good test-retest reliability, between-center discrimination, and construct validity. However, no correlation was observed over time, suggesting that structural performance may vary and should be evaluated at regular intervals to drive quality improvement efforts (PUBMED:23492970). Another study in the Quebec trauma system identified and evaluated a series of process performance indicators (PPI) that could be calculated using routinely collected trauma registry data. The study found significant between-center variance for most PPIs, and several PPIs were associated with designation level, volume, and correlated over time, highlighting the importance of evaluating process performance in integrated trauma systems (PUBMED:23723617). A study using data from the National Trauma Databank (NTDB) found that trauma center rankings are affected by a few parameters, such as injury severity score (ISS), age, and systolic blood pressure (SBP), which reflect their relationship to mortality and their relative frequencies. 
This suggests that complex risk adjustment methodology is not required to address differences in case mix, and accurate data collection of these variables is crucial for comparing trauma center performance (PUBMED:18784577). In summary, past performance of trauma centers can be indicative of future performance, particularly when using recent data (up to 2 years old). Regular evaluation and updating of performance metrics are necessary to ensure the reliability of predictions and continuous quality improvement in trauma care.
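To make the rank-shift idea from PUBMED:18784577 concrete, the sketch below ranks a handful of hypothetical trauma centers by crude mortality and again after a toy risk adjustment for mean injury severity score (ISS), then counts how many centers move by three or more positions. The center data and the expected-mortality formula are invented for illustration only; the NTDB study fitted full logistic regression models.

```python
# Minimal sketch: how adding a case-mix variable (here, mean ISS) can reshuffle
# trauma-center rankings. Data and the "expected mortality" formula are invented
# purely for illustration; the NTDB study used full risk-adjustment models.

centers = {  # name: (observed_mortality, mean_ISS)
    "A": (0.040, 9.0),
    "B": (0.065, 22.0),
    "C": (0.050, 12.0),
    "D": (0.080, 30.0),
    "E": (0.055, 25.0),
}

def rank(scores):
    # lower score = better rank (1 = best)
    ordered = sorted(scores, key=scores.get)
    return {name: i + 1 for i, name in enumerate(ordered)}

crude = {name: mort for name, (mort, _) in centers.items()}
crude_rank = rank(crude)

# Toy risk adjustment: expected mortality rises with injury severity,
# and centers are compared on observed-minus-expected mortality.
expected = {name: 0.02 + 0.002 * iss for name, (_, iss) in centers.items()}
adjusted = {name: centers[name][0] - expected[name] for name in centers}
adjusted_rank = rank(adjusted)

moved = sum(1 for n in centers if abs(crude_rank[n] - adjusted_rank[n]) >= 3)
for n in centers:
    print(n, "crude rank:", crude_rank[n], "adjusted rank:", adjusted_rank[n])
print("Centers moving >= 3 ranks after adjusting for ISS:", moved)
```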
Instruction: Are Alcohol Taxation and Pricing Policies Regressive? Abstracts: abstract_id: PUBMED:26719379 Are Alcohol Taxation and Pricing Policies Regressive? Product-Level Effects of a Specific Tax and a Minimum Unit Price for Alcohol. Aims: To compare estimated effects of two policy alternatives, (i) a minimum unit price (MUP) for alcohol and (ii) specific (per-unit) taxation, upon current product prices, per capita spending (A$), and per capita consumption by income quintile, consumption quintile and product type. Methods: Estimation of baseline spending and consumption, and modelling policy-to-price and price-to-consumption effects of policy changes using scanner data from a panel of demographically representative Australian households that includes product-level details of their off-trade alcohol spending (n = 885; total observations = 12,505). Robustness checks include alternative price elasticities, tax rates, minimum price thresholds and tax pass-through rates. Results: Current alcohol taxes and alternative taxation and pricing policies are not highly regressive. Any regressive effects are small and concentrated among heavy consumers. The lowest-income consumers currently spend a larger proportion of income (2.3%) on alcohol taxes than the highest-income consumers (0.3%), but the mean amount is small in magnitude [A$5.50 per week (95%CI: 5.18-5.88)]. Both a MUP and specific taxation will have some regressive effects, but the effects are limited, as they are greatest for the heaviest consumers, irrespective of income. Among the policy alternatives, a MUP is more effective in reducing consumption than specific taxation, especially for consumers in the lowest-income quintile: an estimated mean per capita reduction of 11.9 standard drinks per week (95%CI: 11.3-12.6). Conclusion: Policies that increase the cost of the cheapest alcohol can be effective in reducing alcohol consumption, without having highly regressive effects. abstract_id: PUBMED:28910991 Pricing as a means of controlling alcohol consumption. Background: Reducing the affordability of alcohol, by increasing its price, is the most effective strategy for controlling alcohol consumption and reducing harm. Sources Of Data: We review meta-analyses and systematic reviews of alcohol tax/price effects from the past decade, and recent evaluations of tax/price policies in the UK, Canada and Australia. Areas Of Agreement: While the magnitudes of price effects vary by sub-group and alcoholic beverage type, it has been consistently shown that price increases lead to reductions in alcohol consumption. Areas Of Controversy: There remains, however, a lack of consensus on the most appropriate taxation and pricing policy in many countries because of concerns about effects by different consumption level and income level and disagreement on policy design between parts of the alcoholic beverage industries. Growing Points: Recent developments in the research highlight the importance of obtaining accurate alcohol price data, reducing bias in estimating price responsiveness, and examining the impact on the heaviest drinkers. Areas Timely For Developing Research: There is a need for further research focusing on the substitution effects of taxation and pricing policies, estimation of the true tax pass-through rates, and empirical analysis of the supply-side response (from alcohol producers and retailers) to various alcohol pricing strategies. abstract_id: PUBMED:31148313 Comparing alcohol taxation throughout the European Union. 
Background And Aims: The World Health Organization recommends increasing alcohol taxes as a 'best-buy' approach to reducing alcohol consumption and improving population health. Alcohol may be taxed based on sales value, product volume or alcohol content; however, duty structures and rates vary, both among countries and between beverage types. From a public health perspective, the best duty structure links taxation level to alcohol content, keeps pace with inflation and avoids substantial disparities between different beverage types. This data note compares current alcohol duty structures and levels throughout the 28 European Union (EU) Member States and how these vary by alcohol content, and also considers implications for public health. Design And Setting: Descriptive analysis using administrative data, European Union, July 2018. Measurements: Beverage-specific alcohol duty rates per UK alcohol unit (8 g ethanol) in pounds sterling at a range of different alcoholic strengths. Findings: Only 50% of Member States levy any duty on wine and several levy duty on spirits and beer at or close to the EU minimum level. There is at least a 10-fold difference in the effective duty rate per unit between the highest- and lowest-duty countries for each beverage type. Duty rates for beer and spirits stay constant with strength in the majority of countries, while rates for wine and cider generally fall as strength increases. Duty rates are generally higher for spirits than other beverage types and are generally lowest in eastern Europe and highest in Finland, Sweden, Ireland and the United Kingdom. Conclusions: Different European Union countries enact very different alcohol taxation policies, despite a partially restrictive legal framework. There is only limited evidence that alcohol duties are designed to minimize public health harms by ensuring that drinks containing more alcohol are taxed at higher rates. Instead, tax rates appear to reflect national alcohol production and consumption patterns. abstract_id: PUBMED:32529975 How many alcohol-attributable deaths and hospital admissions could be prevented by alternative pricing and taxation policies? Modelling impacts on alcohol consumption, revenues and related harms in Canada. Introduction: In 2017, Canada increased alcohol excise taxes for the first time in over three decades. In this article, we describe a model to estimate various effects of additional tax and price policies that are predicted to improve health outcomes. Methods: We obtained alcohol sales and taxation data for 2016/17 for all Canadian jurisdictions from Statistics Canada and product-level sales data for British Columbia. We modelled effects of alternative price and tax policies - revenue-neutral taxes, inflation-adjusted taxes and minimum unit prices (MUPs) - on consumption, revenues and harms. We used published price elasticities to estimate impacts on consumption and revenue and the International Model for Alcohol Harms and Policies (InterMAHP) to estimate impacts on alcohol-attributable mortality and morbidity. Results: Other things being equal, revenue-neutral alcohol volumetric taxes (AVT) would have minimal influence on overall alcohol consumption and related harms. Inflation-adjusted AVT would result in 3.83% less consumption, 329 fewer deaths and 3762 fewer hospital admissions. 
A MUP of $1.75 per standard drink (equal to 17.05mL ethanol) would have reduced consumption by 8.68% in 2016, which in turn would have reduced the number of deaths by 732 and the number of hospitalizations by 8329 that year. Indexing alcohol excise taxes between 1991/92 and 2016/17 would have resulted in the federal government gaining approximately $10.97 billion. We estimated this could have prevented 4000-5400 deaths and 43 000-56 000 hospitalizations. Conclusion: Improved public health outcomes would be made possible by (1) increasing alcohol excise tax rates across all beverages to compensate for past failures to index rates, and (2) setting a MUP of at least $1.75 per standard drink. While reducing alcohol-caused harms, these tax policies would have the added benefit of increasing federal government revenues. abstract_id: PUBMED:33651444 Alcohol policy and gender: a modelling study estimating gender-specific effects of alcohol pricing policies. Aims: To describe gender differences in alcohol consumption, purchasing preferences and alcohol-attributable harm. To model the effects of alcohol pricing policies on male and female consumption and hospitalizations. Design: Epidemiological simulation using the Sheffield Alcohol Policy Model version 4. Setting And Participants: Adults aged 18+ years, England. Interventions: Three alcohol pricing policies: 10% duty increase and minimum unit prices (MUP) of £0.50 and £0.70 per UK unit. Measures: Gender-specific baseline and key outcomes data: annual beverage-specific units of alcohol consumed and beverage-specific alcohol expenditure (household surveys). Alcohol-attributable hospital admissions (administrative data). Key model parameters: literature-based own- and cross-price elasticities for 10 beverage-by-location categories (e.g. off-trade beer). Sensitivity analysis with new gender-specific elasticities. Literature-based risk functions linking consumption and harm, gender-disaggregated where evidence was available. Population subgroups: 120 subgroups defined by gender (primary focus), age, deprivation quintile and baseline weekly consumption. Findings: Women consumed 59.7% of their alcohol as off-trade wine while men consumed 49.7% as beer. Women drinkers consumed fewer units annually than men (494 versus 895) and a smaller proportion of women were high-risk drinkers (4.8 versus 7.2%). Moderate drinking women had lower hospital admission rates than men (44 versus 547 per 100 000), but rates were similar for high-risk drinking women and men (14 294 versus 13 167 per 100 000). All three policies led to larger estimated reductions in consumption and admission rates among men than women. For example, a £0.50 MUP led to a 5.3% reduction in consumption and a 4.1% reduction in admissions for men but a 0.7% reduction in consumption and a 1.6% reduction in hospitalizations for women. Conclusion: Alcohol consumption, purchasing preferences and harm show strong gender patterns among adult drinkers in England. Alcohol pricing policies are estimated to be more effective at reducing consumption and harm for men than women. abstract_id: PUBMED:26530717 Pricing of alcohol in Canada: A comparison of provincial policies and harm-reduction opportunities. Introduction And Aims: Alcohol pricing is an effective prevention policy. This paper compares the 10 Canadian provinces on three research-based alcohol pricing policies-minimum pricing, pricing by alcohol content and maintaining prices relative to inflation. 
Design And Methods: The selection of these three policies was based on systematic reviews and seminal research papers. Provincial data for 2012 were obtained from Statistics Canada and relevant provincial ministries, subsequently sent to provincial authorities for verification, and then scored by team members. Results: All provinces, except for Alberta, have minimum prices for at least one beverage type sold in off-premise outlets. All provinces, except for British Columbia and Quebec, have separate (and higher) minimum pricing for on-premise establishments. Regarding pricing on alcohol content, western and central provinces typically scored higher than provinces in Eastern Canada. Generally, minimum prices were lower than the recommended $1.50 per standard drink for off-premise outlets and $3.00 per standard drink in on-premise venues. Seven of 10 provinces scored 60% or higher compared to the ideal on indexing prices to inflation. Prices for a representative basket of alcohol products in Ontario and Quebec have lagged significantly behind inflation since 2006. Discussion And Conclusions: While examples of evidence-based alcohol pricing policies can be found in every jurisdiction in Canada, significant inter-provincial variation leaves substantial unrealised potential for further reducing alcohol-related harm and costs. This comparative assessment of alcohol price policies provides clear indications of how individual provinces could adjust their pricing policies and practices to improve public health and safety. [Giesbrecht N, Wettlaufer A, Thomas G, Stockwell T, Thompson K, April N, Asbridge M, Cukier S, Mann R, McAllister J, Murie A, Pauley C, Plamondon L, Vallance K. Pricing of alcohol in Canada: A comparison of provincial policies and harm-reduction opportunities. Drug Alcohol Rev 2016;35:289-297]. abstract_id: PUBMED:36471145 Classifying alcohol control policies enacted between 2000 and 2020 in Poland and the Baltic countries to model potential impact. Aims: The study's aim is to identify and classify the most important alcohol control policies in the Baltic countries (Estonia, Latvia and Lithuania) and Poland between 2000 and 2020. Methods: Policy analysis of Baltic countries and Poland, predicting potential policy impact on alcohol consumption, all-cause mortality and alcohol-attributable hospitalizations was discussed. Results: All Baltic countries implemented stringent availability restrictions on off-premises trading hours and different degrees of taxation increases to reduce the affordability of alcoholic beverages, as well as various degrees of bans on alcohol marketing. In contrast, Poland implemented few excise taxation increases or availability restrictions and, in fact, reduced stipulations on prior marketing bans. Conclusions: This classification of alcohol control policies in the Baltic countries and Poland provides a basis for future modeling of the impact of implementing effective alcohol control policies (Baltic countries), as well as the effects of loosening such policies (Poland). abstract_id: PUBMED:36905242 Is minimum unit pricing for alcohol having the intended effects on alcohol consumption in Scotland? Background And Aims: The Scottish Government introduced minimum unit pricing (MUP) for alcohol on 1 May 2018. This means retailers in Scotland cannot sell alcohol to consumers for less than £0.50 per unit (1 UK unit = 8 g ethanol). 
The Government intended the policy to increase the price of cheap alcohol, cut alcohol consumption overall and particularly among those drinking at hazardous or harmful levels, and ultimately reduce alcohol-related harm. This paper aims to summarise and assess the evidence to date evaluating the impact of MUP on alcohol consumption and related behaviours in Scotland. Argument: Evidence from analyses of population-level sales data suggests that, all else being equal, MUP reduced the volume of alcohol sold in Scotland by ~3.0% to 3.5%, with the largest reductions affecting cider and spirits sales. Analyses of two time series datasets on household-level alcohol purchasing and individual-level alcohol consumption suggest reductions in purchasing and consumption among those drinking at hazardous and harmful levels, but offer conflicting results for those drinking at the most harmful levels. These subgroup analyses are methodologically robust, but the underlying datasets have important limitations as they rely on non-random sampling strategies. Further studies identified no clear evidence of reduced alcohol consumption among those with alcohol dependence or those presenting to emergency departments and sexual health clinics, some evidence of increased financial strain among people with dependence and no evidence of wider negative outcomes arising from changes in alcohol consumption behaviours. Conclusions: Minimum unit pricing for alcohol in Scotland has led to reduced consumption, including among heavier drinkers. However, there is uncertainty regarding its impact on those at greatest risk and some limited evidence of negative outcomes, specifically financial strain, among people with alcohol dependence. abstract_id: PUBMED:31943464 Modelling the effects of alcohol pricing policies on alcohol consumption in subpopulations in Australia. Aims: To model the effects of a range of alcohol pricing policies on alcohol consumption in subpopulation groups (e.g. alcohol consumption pattern, and age and income groups) in Australia. Design: We used estimated price elasticities to model the effects of proposed pricing policies on consumption for 11 beverage categories among subpopulation groups. Setting: Australia. Participants: A total of 1789 adults (16+ years) who reported they purchased and consumed alcohol in the 2013 Australian International Alcohol Control Study, an adult population survey. Measurements: Mean and percentage changes in alcohol consumption were estimated for each scenario across subgroups. The policy scenarios evaluated included: (1) increasing the excise rate by 10% for all off-premise beverages; (2) replacing the wine equalization tax with a volumetric excise rate equal to the current spirits tax rate; (3) applying a uniform excise tax rate to all beverages equal to the current spirits tax rate and a 10 or 20% increase in it; and (4) introducing a minimum unit price (MUP) on all beverage categories at $1.00, $1.30 or $1.50. Findings: The effects of different tax and MUP policies varied greatly across different subgroups. The effects of the MUP policy on alcohol consumption increased rapidly in the range from $1.00 to $1.50. Applying a uniform tax rate across all beverages equal to the current spirits tax rate, or a 10 or 20% increase beyond that, could generate large reductions in overall alcohol consumption in Australia.
Compared with the uniform tax rate with or without further tax increase, introducing a MUP at $1.30 or $1.50 could reduce consumption particularly among harmful drinkers and lower-income drinkers, with comparatively smaller impacts on moderate and higher-income drinkers. Conclusions: Both uniform excise tax and minimum unit price policies are predicted to reduce alcohol consumption in Australia. Minimum unit price policies are predicted to have a greater impact on drinking among harmful drinkers than moderate drinkers. abstract_id: PUBMED:26905063 Estimated Effects of Different Alcohol Taxation and Price Policies on Health Inequalities: A Mathematical Modelling Study. Introduction: While evidence that alcohol pricing policies reduce alcohol-related health harm is robust, and alcohol taxation increases are a WHO "best buy" intervention, there is a lack of research comparing the scale and distribution across society of health impacts arising from alternative tax and price policy options. The aim of this study is to test whether four common alcohol taxation and pricing strategies differ in their impact on health inequalities. Methods And Findings: An econometric epidemiological model was built with England 2014/2015 as the setting. Four pricing strategies implemented on top of the current tax were equalised to give the same 4.3% population-wide reduction in total alcohol-related mortality: current tax increase, a 13.4% all-product duty increase under the current UK system; a value-based tax, a 4.0% ad valorem tax based on product price; a strength-based tax, a volumetric tax of £0.22 per UK alcohol unit (= 8 g of ethanol); and minimum unit pricing, a minimum price threshold of £0.50 per unit, below which alcohol cannot be sold. Model inputs were calculated by combining data from representative household surveys on alcohol purchasing and consumption, administrative and healthcare data on 43 alcohol-attributable diseases, and published price elasticities and relative risk functions. Outcomes were annual per capita consumption, consumer spending, and alcohol-related deaths. Uncertainty was assessed via partial probabilistic sensitivity analysis (PSA) and scenario analysis. The pricing strategies differ as to how effects are distributed across the population, and, from a public health perspective, heavy drinkers in routine/manual occupations are a key group as they are at greatest risk of health harm from their drinking. Strength-based taxation and minimum unit pricing would have greater effects on mortality among drinkers in routine/manual occupations (particularly for heavy drinkers, where the estimated policy effects on mortality rates are as follows: current tax increase, -3.2%; value-based tax, -2.9%; strength-based tax, -6.1%; minimum unit pricing, -7.8%) and lesser impacts among drinkers in professional/managerial occupations (for heavy drinkers: current tax increase, -1.3%; value-based tax, -1.4%; strength-based tax, +0.2%; minimum unit pricing, +0.8%). Results from the PSA give slightly greater mean effects for both the routine/manual (current tax increase, -3.6% [95% uncertainty interval (UI) -6.1%, -0.6%]; value-based tax, -3.3% [UI -5.1%, -1.7%]; strength-based tax, -7.5% [UI -13.7%, -3.9%]; minimum unit pricing, -10.3% [UI -10.3%, -7.0%]) and professional/managerial occupation groups (current tax increase, -1.8% [UI -4.7%, +1.6%]; value-based tax, -1.9% [UI -3.6%, +0.4%]; strength-based tax, -0.8% [UI -6.9%, +4.0%]; minimum unit pricing, -0.7% [UI -5.6%, +3.6%]). 
Impacts of price changes on moderate drinkers were small regardless of income or socioeconomic group. Analysis of uncertainty shows that the relative effectiveness of the four policies is fairly stable, although uncertainty in the absolute scale of effects exists. Volumetric taxation and minimum unit pricing consistently outperform increasing the current tax or adding an ad valorem tax in terms of reducing mortality among the heaviest drinkers and reducing alcohol-related health inequalities (e.g., in the routine/manual occupation group, volumetric taxation reduces deaths more than increasing the current tax in 26 out of 30 probabilistic runs, minimum unit pricing reduces deaths more than volumetric tax in 21 out of 30 runs, and minimum unit pricing reduces deaths more than increasing the current tax in 30 out of 30 runs). Study limitations include reducing model complexity by not considering a largely ineffective ban on below-tax alcohol sales, special duty rates covering only small shares of the market, and the impact of tax fraud or retailer non-compliance with minimum unit prices. Conclusions: Our model estimates that, compared to tax increases under the current system or introducing taxation based on product value, alcohol-content-based taxation or minimum unit pricing would lead to larger reductions in health inequalities across income groups. We also estimate that alcohol-content-based taxation and minimum unit pricing would have the largest impact on harmful drinking, with minimal effects on those drinking in moderation. Answer: Based on the provided abstracts, alcohol taxation and pricing policies are not highly regressive. The study by Callinan et al. (PUBMED:26719379) found that while the lowest-income consumers spend a larger proportion of their income on alcohol taxes compared to the highest-income consumers, the mean amount is small in magnitude. Both a minimum unit price (MUP) and specific taxation have some regressive effects, but these are limited and greatest for the heaviest consumers, irrespective of income. Among the policy alternatives, a MUP is more effective in reducing consumption than specific taxation, especially for consumers in the lowest-income quintile. The abstract by Holmes et al. (PUBMED:26905063) also supports the idea that alcohol taxation and pricing policies are not necessarily regressive. It suggests that compared to tax increases under the current system or introducing taxation based on product value, alcohol-content-based taxation or minimum unit pricing would lead to larger reductions in health inequalities across income groups. These policies would have the largest impact on harmful drinking, with minimal effects on those drinking in moderation. Furthermore, the abstract by Jiang et al. (PUBMED:31943464) indicates that minimum unit pricing policies are predicted to have a greater impact on drinking among harmful drinkers than moderate drinkers, which suggests that such policies target those who are more likely to experience alcohol-related harm rather than being regressive towards low-income moderate drinkers. In summary, while there may be some regressive effects of alcohol taxation and pricing policies, the evidence suggests that these effects are limited and that such policies can be effective in reducing alcohol consumption and related harms without being highly regressive.
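All of the modelling studies summarised here rest on the same basic policy-to-price and price-to-consumption arithmetic. The sketch below illustrates it for a minimum unit price, using invented products and a single assumed own-price elasticity; published models such as the Sheffield Alcohol Policy Model instead use beverage-specific own- and cross-price elasticities.

```python
# Illustrative policy-to-price and price-to-consumption arithmetic for a
# minimum unit price (MUP). Products, prices and the elasticity of -0.5 are
# assumptions for illustration; published models use beverage-specific and
# cross-price elasticities.

MUP_PER_UNIT = 0.50          # currency per unit of alcohol (e.g. GBP per 8 g ethanol)
ELASTICITY = -0.5            # assumed own-price elasticity of demand

products = [  # (name, current_price, units_of_alcohol_per_container)
    ("cheap cider 2L", 2.50, 10.0),
    ("budget vodka 70cl", 10.00, 26.0),
    ("premium wine 75cl", 9.00, 10.0),
]

for name, price, units in products:
    floor_price = MUP_PER_UNIT * units          # lowest legal shelf price under MUP
    new_price = max(price, floor_price)
    pct_price_change = (new_price - price) / price * 100
    pct_consumption_change = ELASTICITY * pct_price_change  # first-order approximation
    print(f"{name}: {price:.2f} -> {new_price:.2f} "
          f"(price +{pct_price_change:.0f}%, consumption {pct_consumption_change:+.0f}%)")
```

As the output shows, the cheapest, strongest products see the largest price rises and therefore the largest estimated consumption reductions, which is the mechanism behind the subgroup results reported above.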
Instruction: Can adaptive threshold-based metabolic tumor volume (MTV) and lean body mass corrected standard uptake value (SUL) predict prognosis in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy? Abstracts: abstract_id: PUBMED:26275933 Can adaptive threshold-based metabolic tumor volume (MTV) and lean body mass corrected standard uptake value (SUL) predict prognosis in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy? Purpose: To evaluate the predictive value of adaptive threshold-based metabolic tumor volume (MTV), maximum standardized uptake value (SUVmax) and maximum lean body mass corrected SUV (SULmax) measured on pretreatment positron emission tomography and computed tomography (PET/CT) imaging in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy. Materials And Methods: Pretreatment PET/CT of the 62 patients with locally advanced head and neck cancer who were treated consecutively between May 2010 and February 2013 were reviewed retrospectively. The maximum FDG uptake of the primary tumor was defined according to SUVmax and SULmax. Multiple threshold levels between 60% and 10% of the SUVmax and SULmax were tested with intervals of 5% to 10% in order to define the most suitable threshold value for the metabolic activity of each patient's tumor (adaptive threshold). MTV was calculated according to this value. We evaluated the relationship of mean values of MTV, SUVmax and SULmax with treatment response, local recurrence, distant metastasis and disease-related death. Receiver-operating characteristic (ROC) curve analysis was done to obtain optimal predictive cut-off values for MTV and SULmax which were found to have a predictive value. Local recurrence-free (LRFS), disease-free (DFS) and overall survival (OS) were examined according to these cut-offs. Results: Forty six patients had complete response, 15 had partial response, and 1 had stable disease 6 weeks after the completion of treatment. Median follow-up of the entire cohort was 18 months. Of 46 complete responders 10 had local recurrence, and of 16 partial or no responders 10 had local progression. Eighteen patients died. Adaptive threshold-based MTV had significant predictive value for treatment response (p=0.011), local recurrence/progression (p=0.050), and disease-related death (p=0.024). SULmax had a predictive value for local recurrence/progression (p=0.030). ROC curves analysis revealed a cut-off value of 14.00 mL for MTV and 10.15 for SULmax. Three-year LRFS and DFS rates were significantly lower in patients with MTV ≥ 14.00 mL (p=0.026, p=0.018 respectively), and SULmax≥10.15 (p=0.017, p=0.022 respectively). SULmax did not have a significant predictive value for OS whereas MTV had (p=0.025). Conclusion: Adaptive threshold-based MTV and SULmax could have a role in predicting local control and survival in head and neck cancer patients. abstract_id: PUBMED:32983984 Predictive Value of Diffusion, Glucose Metabolism Parameters of PET/MR in Patients With Head and Neck Squamous Cell Carcinoma Treated With Chemoradiotherapy. Purpose: This study aims to evaluate the predictive value of the pretreatment, metabolic, and diffusion parameters of a primary tumor assessed with PET/MR on patient clinical outcomes. 
Methods: Retrospective evaluation was performed using PET/MR image data sets acquired with single-tracer-injection dual imaging in 68 histologically proven head and neck cancer patients 4 weeks before receiving definitive chemoradiotherapy (CRT). PET/MR was performed before the CRT and 12 weeks after the CRT for response evaluation. Image data (PET and MRI diffusion-weighted imaging [DWI]) were used to determine the maximum standardized uptake value (SUVmax), the peak lean body mass corrected SUV (SULpeak), the metabolic tumor volume (MTV), the total lesion glycolysis (TLG), and the mean apparent diffusion coefficient (ADCmean) of the primary tumor. Based on the results of the therapeutic response evaluation, two patient subgroups were created: one with a viable tumor and another without. Metabolic and diffusion data from the pretreatment PET/MR were correlated with the therapeutic response using Spearman's correlation coefficient and Wilcoxon's test. Results: After completing the CRT, a viable residual tumor was detected in 36/68 (53%) cases, and 32/68 (47%) patients showed complete remission. However, no significant correlation was found between the pretreatment parameter ADCmean (p = 0.88) and therapeutic success. The PET parameters SUVmax, SULpeak, MTV, and TLG (p = 0.032, p = 0.01, p < 0.0001, p = 0.0004) were statistically significantly different between the two patient subgroups. Conclusion: This study found that pretreatment MRI-based data (ADCmean) from FDG PET/MR could not be used to predict therapeutic response, although the PET parameters SUVmax, SULpeak, MTV, and TLG proved to be more useful; thus, their inclusion in risk stratification may also be of additional value. abstract_id: PUBMED:25043882 The relative prognostic utility of standardized uptake value, gross tumor volume, and metabolic tumor volume in oropharyngeal cancer patients treated with platinum-based concurrent chemoradiation with a pre-treatment [18F]fluorodeoxyglucose positron emission tomography scan. Objectives: This study compared the relative prognostic utility of the Gross Tumor Volume (GTV), maximum Standardized Uptake Value (SUVmax), and Metabolic Tumor Volume (MTV) in a uniform cohort of oropharyngeal squamous cell carcinoma (OPSCC) patients treated with platinum-based concurrent chemoradiation therapy (CCRT). Methods And Materials: One hundred OPSCC patients with a pretreatment [18F]fluorodeoxyglucose (FDG) positron emission tomography-computed tomography (PET-CT) scan were treated with CCRT. Kaplan-Meier curves and Cox proportional hazard models were generated. Results: When dichotomized by the median, a smaller MTV correlated with improved 5-year locoregional control (LRC) (98.0% versus 87.0%, p=0.049), freedom from distant metastasis (FDM) (91.7% versus 65.0%, p=0.005), progression-free survival (PFS) (80.3% versus 56.7%, p=0.015), and overall survival (OS) (84.1% versus 57.8%, p=0.008), whereas a smaller GTV correlated with improved PFS (80.3% versus 57.4%, p=0.040) and OS (82.1% versus 60.1%, p=0.025). SUVmax failed to correlate with any outcome. On multivariate analysis, when adjusted for GTV, T-stage, and N-stage, a smaller MTV remained independently correlated with improved FDM, PFS, and OS. GTV failed to reach significance in the multivariate model. Conclusions: A smaller MTV correlates with improved LRC, FDM, PFS, and OS in OPSCC patients undergoing platinum-based CCRT.
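Several of the studies above dichotomize patients at the median MTV and then compare the groups with Kaplan-Meier curves and a log-rank test. The sketch below shows that workflow on simulated data; the invented MTV-survival relationship and the use of the lifelines package are illustrative assumptions, not a reproduction of any study's analysis.

```python
# Minimal sketch of a median-split survival comparison: dichotomize patients by
# median MTV, then compare groups with Kaplan-Meier estimates and a log-rank test.
# Toy data only; requires the 'lifelines' package.

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 100
mtv = rng.gamma(shape=2.0, scale=8.0, size=n)              # hypothetical MTV (mL)
# Assume larger tumours fail earlier, purely for illustration.
time_months = rng.exponential(scale=60.0 / (1 + mtv / 20.0))
event = rng.random(n) < 0.7                                # True = event observed

high = mtv >= np.median(mtv)

km_high, km_low = KaplanMeierFitter(), KaplanMeierFitter()
km_high.fit(time_months[high], event_observed=event[high], label="MTV >= median")
km_low.fit(time_months[~high], event_observed=event[~high], label="MTV < median")

res = logrank_test(time_months[high], time_months[~high],
                   event_observed_A=event[high], event_observed_B=event[~high])
print("Median survival, high MTV:", km_high.median_survival_time_)
print("Median survival, low MTV:", km_low.median_survival_time_)
print("Log-rank p-value:", res.p_value)
```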
abstract_id: PUBMED:25055289 FDG volumetric parameters and survival outcomes after definitive chemoradiotherapy in patients with recurrent head and neck squamous cell carcinoma. Objective: The purpose of this study was to establish the predictive value of 18F-FDG parameters for overall survival in biopsy-proven recurrent head and neck squamous cell cancer (HNSCC) patients after definitive chemoradiotherapy. Materials And Methods: We conducted a retrospective study including 34 patients with HNSCC who had biopsy-proven recurrence between April 2004 and March 2012 and underwent FDG PET/CT at our institution at the time of recurrence. Maximum standardized uptake value (SUVmax), peak SUV (SUVpeak), metabolic tumor volume (MTV), and total lesion glycolysis (TLG) were measured. The primary outcome measure was overall survival. ROC analysis, univariate and multivariate Cox regression models, and Kaplan-Meier survival curves were performed. Results: In univariate analyses, human papillomavirus (HPV) status (p = 0.04), primary site recurrence MTV (p = 0.03), metastasis MTV (p = 0.02), metastasis TLG (p = 0.02), total MTV (p = 0.002), and total TLG (p = 0.04) were significantly associated with overall survival outcome. Total MTV remained a significant independent prognostic factor when adjusted for all other covariates except for primary site recurrence SUVmax and SUVpeak and lymph node SUVmax and SUVpeak. There was a significant difference in time to survival between patients with total MTV above and below the 50th percentile (Mantel-Cox log-rank test, p = 0.05 and Gehan-Breslow-Wilcoxon test, p = 0.03) and the optimum threshold of 16.8 mL (Mantel-Cox log-rank test, p = 0.01 and Gehan-Breslow-Wilcoxon test, p = 0.01; hazard ratio [HR], 0.25). Conclusion: FDG PET/CT-based total MTV and clinical HPV status may be significant prognostic markers for overall survival of patients with recurrent HNSCC after definitive chemoradiotherapy. abstract_id: PUBMED:33306758 Predictive value of PET/CT-based metabolic information in the modern 3D-based radiotherapy treatment of head and neck cancer patients - single institute study. Objective: The aim of the study was to evaluate the predictive value of the pretreatment positron emission tomography (PET) standardized uptake value (SUVmax), standardized uptake value corrected for lean body mass (SULpeak), metabolic tumour volume (MTV) and total lesion glycolysis (TLG) parameters of the primary tumour assessed with PET/computed tomography (CT) for the clinical outcome in patients diagnosed with histopathologically confirmed head and neck squamous cell carcinoma. Materials And Methods: Retrospective evaluation was performed using PET/CT image datasets of 52 histologically proven head and neck cancer patients acquired 4 weeks prior to receiving definitive chemoradiotherapy (CRT). Positron emission tomography/CT was performed before the CRT and 12 weeks after it for response evaluation. Image data were used for target volume delineation and to determine the SUVmax, SULpeak, MTV and TLG parameters of the primary tumour. According to the results of the therapeutic response evaluation, two patient subgroups were created in relation to the presence or absence of viable tumour. Metabolic data from pre-treatment PET/CT and therapeutic response were correlated using the Kruskal-Wallis test. Results: After completion of the CRT, viable residual tumour was detected on restaging PET/CT in 24/52 (46%) cases, while 28/52 (54%) patients showed complete remission.
For the therapeutic success prediction assessment, we could not find any significant correlation with pre-treatment SUVmax and SULpeak values (P>0.44, P>0.33). Total lesion glycolysis showed a nearly significant difference (P=0.052) and MTV showed a statistically significant difference (P=0.001) between the two patient subgroups. Conclusion: Simple metabolic data (SUVmax and SULpeak) from pretreatment fluorine-18-fluorodeoxyglucose (18F-FDG) PET/CT were unable to predict therapeutic response, while volumetric information comprising the MTV and TLG parameters proved to be more useful; thus their inclusion in risk stratification may also have additional value. abstract_id: PUBMED:29948105 What is the prognostic impact of FDG PET in locally advanced head and neck squamous cell carcinoma treated with concomitant chemo-radiotherapy? A systematic review and meta-analysis. Purpose: Evidence is conflicting on the prognostic value of 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) in head and neck squamous cell carcinoma. The aim of our study was to determine the impact of semiquantitative and qualitative metabolic parameters on the outcome in patients managed with standard treatment for locally advanced disease. Methods: A systematic review of the literature was conducted. A meta-analysis was performed of studies providing estimates of relative risk (RR) for the association between semiquantitative metabolic parameters and efficacy outcome measures. Results: The analysis included 25 studies, for a total of 2,223 subjects. The most frequent primary tumour site was the oropharynx (1,150/2,223 patients, 51.7%). According to the available data, the majority of patients had stage III/IV disease (1,709/1,799, 94.9%; no information available in four studies) and were treated with standard concurrent chemoradiotherapy (1,562/2,009 patients, 77.7%; only one study without available information). A total of 11, 8 and 4 independent studies provided RR estimates for the association between baseline FDG PET metrics and overall survival (OS), progression-free survival (PFS) and locoregional control (LRC), respectively. High pretreatment metabolic tumour volume (MTV) was significantly associated with a worse OS (summary RR 1.86, 95% CI 1.08-3.21), PFS (summary RR 1.81, 95% CI 1.14-2.89) and LRC (summary RR 3.49, 95% CI 1.65-7.35). Given the large heterogeneity (I² > 50%) affecting the summary measures, no cumulative threshold for an unfavourable prognosis could be defined. No statistically significant association was found between SUVmax and any of the outcome measures. Conclusion: FDG PET has prognostic relevance in the context of locally advanced head and neck squamous cell carcinoma. Pretreatment MTV is the only metabolic variable with a significant impact on patient outcome. Because of the heterogeneity and the lack of standardized methodology, no definitive conclusions on optimal cut-off values can be drawn. abstract_id: PUBMED:22610386 Superior prognostic utility of gross and metabolic tumor volume compared to standardized uptake value using PET/CT in head and neck squamous cell carcinoma patients treated with intensity-modulated radiotherapy.
Objective: To compare the prognostic utility of the 2-[18F]fluoro-2-deoxy-D-glucose (FDG) maximum standardized uptake value (SUVmax), primary gross tumor volume (GTV), and FDG metabolic tumor volume (MTV) for disease control and survival in patients with head and neck squamous cell carcinoma (HNSCC) undergoing intensity-modulated radiotherapy (IMRT). Methods: Between 2007 and 2011, 41 HNSCC patients who underwent a staging positron emission tomography with computed tomography and definitive IMRT were identified. Local (LC), nodal (NC), distant (DC), and overall (OC) control, overall survival (OS), and disease-free survival (DFS) were assessed using the Kaplan-Meier product-limit method. Results: With a median follow-up of 24.2 months (range 2.7-56.3 months), local, nodal, and distant recurrences were recorded in 10, 5, and 7 patients, respectively. The median SUVmax, GTV, and MTV were 15.8, 22.2 cc, and 7.2 cc, respectively. SUVmax did not correlate with LC (p = 0.229) or OS (p = 0.661) when analyzed by median threshold. Patients with smaller GTVs (<22.2 cc) demonstrated improved 2-year actuarial LC rates of 100% versus 56.4% (p = 0.001) and OS rates of 94.4% versus 65.9% (p = 0.045). Similarly, a smaller MTV (<7.2 cc) correlated with improved 2-year actuarial LC rates of 100% versus 54.2% (p < 0.001) and OS rates of 94.7% versus 64.2% (p = 0.04). Smaller GTV and MTV also correlated with improved NC, DC, OC, and DFS. Conclusion: GTV and MTV demonstrate superior prognostic utility as compared to SUVmax, with larger tumor volumes correlating with inferior local control and overall survival in HNSCC patients treated with definitive IMRT. abstract_id: PUBMED:35499622 Utility and limitations of metabolic parameters in head and neck cancer: finding a practical segmentation method. Purpose: Although metabolic tumor volume (MTV) and total lesion glycolysis (TLG) have shown good prognostic value in head and neck cancer (HNC), there are still many issues to resolve before their potential application in standard clinical practice. The purpose of this study was to compare the discrimination ability of two relevant segmentation methods in HNC and to evaluate the potential benefit of adding lymph nodes' metabolism (LNM) to the measurements. Methods: We retrospectively analyzed a recently published database of 62 patients with HNC treated with chemoradiotherapy. MTV and TLG were measured using an absolute threshold of SUV2.5. Comparison analysis with previously published background-level threshold (BLT) results was done through the concordance index (C-index) in eight prognostic models. Results: BLT obtained better C-index values in five out of the eight models. The addition of LNM improved C-index values in six of the prognostic models. Conclusion: We found a potential benefit in adding LNM to the main tumor measurements, as well as in using a BLT for MTV segmentation compared to the most commonly used SUV2.5 threshold. Despite its limitations, this study suggests a practical and simple way to use these parameters in standard clinical practice, aiming to help elaborate a general consensus. abstract_id: PUBMED:28658110 The clinical outcome correlations between radiation dose and pretreatment metabolic tumor volume for radiotherapy in head and neck cancer: A retrospective analysis.
This study investigated the correlations between radiation dose, pretreatment metabolic tumor volume (MTV) and clinical outcomes in patients with head and neck cancer treated with definitive chemoradiotherapy. Thirty-four patients who received pretreatment 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography-computed tomography (PET/CT) were recruited for this study. The CT-based volume (gross tumor volume of the primary [GTVp]) and 4 types of MTVs were measured on the basis of either a maximal standardized uptake value (SUVmax) of 2.5 (MTV2.5), 3.0 (MTV3.0), or a fixed threshold of 40% (MTV40%), 50% (MTV50%). 18F-FDG PET-CT images before treatment, and data including response to treatment, local recurrence, death due to the cancer, disease-free survival (DFS) and primary relapse-free survival (PRFS), were collected for analysis. The Wilcoxon rank test showed that all values determined by the different delineation techniques were significantly different from the GTVp (P < .05). Tumor volume and the homogeneity of target dose of MTV2.5, MTV3.0, MTV40%, and MTV50% were significantly different between the 2 groups of patients defined by treatment outcomes (P < .05). The survival curves for DFS and PRFS demonstrated that the homogeneity of the target dose in MTVs was a good indicator. The homogeneity of target dose in the tumor is a potential indicator of DFS and PRFS in patients with head and neck cancer who underwent radiotherapy. abstract_id: PUBMED:28982918 Differential Prognostic Value of Metabolic Heterogeneity of Primary Tumor and Metastatic Lymph Nodes in Patients with Pharyngeal Cancer. Background/aim: We aimed to explore the prognostic value of metabolic heterogeneity of 18F-FDG uptake in chemoradiotherapy-treated pharyngeal cancer patients. Patients And Methods: This study included 52 consecutive patients with pharyngeal cancer who underwent 18F-FDG PET/CT before definitive chemoradiotherapy. The heterogeneity factor (HF) was defined as the derivative (dV/dT) of a volume-threshold function for primary tumors and metastatic lymph nodes. The relationships between clinical parameters and HFs of primary tumors (pHF) and metastatic lymph nodes (nHF) were analyzed. Results: The pHF (range = -1.367 to -0.027; median = -0.152) was significantly correlated with the maximum standardized uptake value, metabolic tumor volume, and total lesion glycolysis. Induction chemotherapy response was not correlated with HF, whereas response to radiotherapy was significantly better in patients with high pHF (low heterogeneity). Consistently, the 2-year locoregional recurrence-free survival was significantly better in patients with high pHF (82.9% for pHF > -0.152 vs. 30.5% for pHF < -0.152, log-rank p=0.009). The nHF (range = -1.067 to -0.039; median = -0.160) was not correlated with response to radiotherapy or locoregional recurrences. Conclusion: pHF, but not nHF, was a significant predictor of response to radiotherapy and locoregional recurrence in pharyngeal cancer. Thus, HF use can prevent unnecessary treatment and surgical delays. Answer: Yes, adaptive threshold-based metabolic tumor volume (MTV) and lean body mass corrected standard uptake value (SUL) can predict prognosis in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy.
Studies have shown that these metabolic parameters measured on pretreatment positron emission tomography and computed tomography (PET/CT) imaging have significant predictive value for treatment response, local recurrence/progression, and disease-related death in these patients (PUBMED:26275933). For instance, one study found that adaptive threshold-based MTV had a significant predictive value for treatment response, local recurrence/progression, and disease-related death, and SULmax had a predictive value for local recurrence/progression. The study identified cut-off values (MTV ≥ 14.00 mL and SULmax ≥ 10.15) above which local recurrence-free survival (LRFS) and disease-free survival (DFS) rates were significantly lower, with MTV also predicting overall survival (OS) (PUBMED:26275933). Another study reported that the pretreatment PET parameters assessed with PET/MR (SUVmax, SULpeak, MTV, and total lesion glycolysis [TLG]), although not the diffusion parameter ADCmean, were statistically significantly different between patient subgroups with and without viable tumors after chemoradiotherapy, suggesting their usefulness in predicting therapeutic response (PUBMED:32983984). Similarly, a study comparing the prognostic utility of gross tumor volume (GTV), SUVmax, and MTV in oropharyngeal squamous cell carcinoma (OPSCC) patients treated with platinum-based concurrent chemoradiation therapy (CCRT) found that a smaller MTV was independently correlated with improved outcomes, including locoregional control, freedom from distant metastasis, progression-free survival, and overall survival (PUBMED:25043882). Furthermore, a systematic review and meta-analysis concluded that high pretreatment MTV was significantly associated with worse overall survival, progression-free survival, and locoregional control in patients with locally advanced head and neck squamous cell carcinoma treated with standard concurrent chemoradiotherapy (PUBMED:29948105).
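To illustrate what adaptive-threshold MTV measurement involves in practice, the sketch below computes candidate MTVs at a series of thresholds expressed as fractions of SUVmax, in the spirit of the approach described in PUBMED:26275933. The voxel data and voxel size are invented, the sketch uses 10% steps rather than the study's 5-10% intervals, and the per-patient choice of the "most suitable" threshold is not reproduced.

```python
# Sketch: candidate metabolic tumour volumes (MTV) at threshold levels defined as
# fractions of SUVmax, as in adaptive-threshold segmentation. The voxel data and
# voxel size are invented; the per-patient selection of the "most suitable"
# threshold used in the study is not reproduced here.

import numpy as np

rng = np.random.default_rng(1)
suv = rng.gamma(shape=1.5, scale=2.0, size=(40, 40, 20))   # hypothetical SUV map
suv[15:25, 15:25, 5:12] += 8.0                             # a crude "tumour" region
voxel_volume_ml = 0.4 * 0.4 * 0.3                          # 4 x 4 x 3 mm voxels

suv_max = suv.max()
for fraction in np.arange(0.60, 0.05, -0.10):              # 60% down towards 10%
    threshold = fraction * suv_max
    mtv_ml = np.count_nonzero(suv >= threshold) * voxel_volume_ml
    print(f"threshold {fraction:.0%} of SUVmax ({threshold:.2f}): MTV = {mtv_ml:.1f} mL")
```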
Instruction: Modern palliative radiation treatment: do complexity and workload contribute to medical errors? Abstracts: abstract_id: PUBMED:22713835 Modern palliative radiation treatment: do complexity and workload contribute to medical errors? Purpose: To examine whether treatment workload and complexity associated with palliative radiation therapy contribute to medical errors. Methods And Materials: In the setting of a large academic health sciences center, patient scheduling and record and verification systems were used to identify patients starting radiation therapy. All records of radiation treatment courses delivered during a 3-month period were retrieved and divided into radical and palliative intent. "Same day consultation, planning and treatment" was used as a proxy for workload and "previous treatment" and "multiple sites" as surrogates for complexity. In addition, all planning and treatment discrepancies (errors and "near-misses") recorded during the same time frame were reviewed and analyzed. Results: There were 365 new patients treated with 485 courses of palliative radiation therapy. Of those patients, 128 (35%) were same-day consultation, simulation, and treatment patients; 166 (45%) patients had previous treatment; and 94 (26%) patients had treatment to multiple sites. Four near-misses and 4 errors occurred during the audit period, giving an error per course rate of 0.82%. In comparison, there were 10 near-misses and 5 errors associated with 1100 courses of radical treatment during the audit period. This translated into an error rate of 0.45% per course. An association was found between workload and complexity and increased palliative therapy error rates. Conclusions: Increased complexity and workload may have an impact on palliative radiation treatment discrepancies. This information may help guide the necessary recommendations for process improvement for patients who require palliative radiation therapy. abstract_id: PUBMED:27355277 Departmental Workload and Physician Errors in Radiation Oncology. Purpose: The purpose of this work was to evaluate measures of increased departmental workload in relation to the occurrence of physician-related errors and incidents reaching the patient in radiation oncology. Materials And Methods: All data were collected for the year 2013. Errors were defined as forms received by our departmental process improvement team; of these forms, only those relating to physicians were included in the study. Incidents were defined as serious errors reaching the patient requiring appropriate action; these were reported through a separate system. Workload measures included patient volumes and physician schedules and were obtained through departmental records for daily and monthly data. Errors and incidents were analyzed for relation with measures of workload using logistic regression modeling. Results: Ten incidents occurred in the year. The number of patients treated per day was a significant factor relating to incidents (P &lt; 0.003). However, the fraction of department physicians off-duty and the ratio of patients to physicians were not found to be significant factors relating to incidents. Ninety-one physician-related errors were identified, and the ratio of patients to physicians (rolling average) was a significant factor relating to errors (P &lt; 0.03). 
The number of patients and the fraction of physicians off-duty were not significant factors relating to errors. A rapid increase in patient treatment visits may be another factor leading to errors and incidents. All incidents and 58% of errors occurred in months where there was an increase in the average number of fields treated per day from the previous month; 6 of the 10 incidents occurred in August, which had the highest average increase at 26%. Conclusions: Increases in departmental workload, especially rapid changes, may lead to higher occurrence of errors and incidents in radiation oncology. When the department is busy, physician errors may be perpetuated owing to an overwhelmed departmental checks system, leading to incidents reaching the patient. Insights into workload and workflow will allow for the development of targeted approaches to preventing errors and incidents. abstract_id: PUBMED:36214028 Effects of nursing workload on medication administration errors: A quantitative study. Background: Medication administration errors by nurses form a high proportion of medical errors in medical institutions. Studies have shown that such errors are closely linked to nursing workload. Objective: To quantitatively explore the effects of different types of nursing workloads on different medication administration errors. Method: Three medical institutions were selected as the objects of error data collection based on the following criteria: the medical institution's experience in error data collection, the complete range of medical departments, and the institution size. Error cases were self-reported from all nurses in all medical departments. The relationship between the error types and nursing workload types was quantitatively examined using partial least squares and structural equation modeling. Results: The study recorded 290 medication administration errors, and extracted four error types and nine nursing workload types. The workload type for each error type was also identified, and the path coefficient was found to be between 0.087 and 0.416. Conclusion: This study confirmed the effect of workload on medication administration errors and determined a theoretical mechanism for this effect. Research results will provide the evidence for nursing managers to reduce workload and ensure quality in the nursing administration process. abstract_id: PUBMED:21155641 Medical errors and patient safety in palliative care: a review of current literature. Background: Recently, the discussion about medical errors and patient safety has gained scientific as well as public attention. Errors in medicine have been proven to be frequent and to carry enormous financial costs and moral consequences. We aimed to review the research on medical errors in palliative care and to screen relevant literature to appreciate the relevance of safety studies to the field. Methods: We performed a literature search using the database PubMed that cross-matched terms for palliative care with the words "errors" and "patient safety." Publications were classified according to type of study and kind of error, and empiric research results were extracted and critically assessed. Results: We found 44 articles concerning medical errors in palliative care, most of which were case studies. Of these 44 articles, 16 deal with palliative care errors as a key issue, referring mostly to symptom control (n = 13). Other examples are errors in communication, prognostication, and advance care planning.
There are very few empirical studies, which are mostly retrospective observational studies. Discussion: Although patients in palliative care are more vulnerable to errors and their consequences, there is little theoretical or empirical research on the subject. We propose a specific definition for errors in palliative care and analyze the challenges of delineating, identifying and preventing errors in such key areas as prognostication, advance care planning and end-of-life decision-making. abstract_id: PUBMED:7624233 Workload and environmental factors in hospital medication errors. Nine hospital workload factors and seasonal changes in daylight and darkness were examined over a 5-year period in relation to nurse medication errors at a medical center in Anchorage, Alaska. Three workload factors, along with darkness, were found to be significant predictors of the risk of medication error. Errors increased with the number of patient days per month (OR/250 patient days = 1.61) and the number of shifts worked by temporary nursing staff (OR/10 shifts = 1.15); errors decreased with more overtime worked by permanent nursing staff members (OR/10 shifts = .85). Medication errors were 95% more likely in midwinter than in the fall, but the effect of increasing darkness was strongest; a 2-month delay was found between the level of darkness and the rate of errors. More than half of all medication errors occurred during the first 3 months of the year. abstract_id: PUBMED:33120412 Associations of physicians' prescribing experience, work hours, and workload with prescription errors. Objective: We aimed to assess associations of physician's work overload, successive work shifts, and work experience with physicians' risk to err. Materials And Methods: This large-scale study included physicians who prescribed at least 100 systemic medications at Sheba Medical Center during 2012-2017 in all acute care departments, excluding intensive care units. Presumed medication errors were flagged by a high-accuracy computerized decision support system that uses machine-learning algorithms to detect potential medication prescription errors. Physicians' successive work shifts (first or only shift, second, and third shifts), workload (assessed by the number of prescriptions during a shift) and work-experience, as well as a novel measurement of physicians' prescribing experience with a specific drug, were assessed per prescription. The risk to err was determined for various work conditions. Results: 1 652 896 medical orders were prescribed by 1066 physicians; the system flagged 3738 (0.23%) prescriptions as erroneous. Physicians were 8.2 times more likely to err during high than normal-low workload shifts (5.19% vs 0.63%, P < .0001). Physicians on their third or second successive shift (compared to a first or single shift) were more likely to err (2.1%, 1.8%, and 0.88%, respectively, P < .001). Lack of experience in prescribing a specific medication was associated with higher error rate (0.37% for the first 5 prescriptions vs 0.13% after over 40, P < .001). Discussion: Longer hours and less experience in prescribing a specific medication increase risk of erroneous prescribing. Conclusion: Restricting successive shifts, reducing workload, increasing training and supervision, and implementing smart clinical decision support systems may help reduce prescription errors. abstract_id: PUBMED:29307863 Opioid errors in inpatient palliative care services: a retrospective review.
Opioids are a high-risk medicine frequently used to manage palliative patients' cancer-related pain and other symptoms. Despite the high volume of opioid use in inpatient palliative care services, and the potential for patient harm, few studies have focused on opioid errors in this population. Objectives: To (i) identify the number of opioid errors reported by inpatient palliative care services, (ii) identify reported opioid error characteristics and (iii) determine the impact of opioid errors on palliative patient outcomes. Methods: A 24-month retrospective review of opioid errors reported in three inpatient palliative care services in one Australian state. Results: Of the 55 opioid errors identified, 84% reached the patient. Most errors involved morphine (35%) or hydromorphone (29%). Opioid administration errors accounted for 76% of reported opioid errors, largely due to omitted dose (33%) or wrong dose (24%) errors. Patients were more likely to receive a lower dose of opioid than ordered as a direct result of an opioid error (57%), with errors adversely impacting pain and/or symptom management in 42% of patients. Half (53%) of the affected patients required additional treatment and/or care as a direct consequence of the opioid error. Conclusion: This retrospective review has provided valuable insights into the patterns and impact of opioid errors in inpatient palliative care services. Iatrogenic harm related to opioid underdosing errors contributed to palliative patients' unrelieved pain. Better understanding the factors that contribute to opioid errors and the role of safety culture in the palliative care service context warrants further investigation. abstract_id: PUBMED:24890346 Relating physician's workload with errors during radiation therapy planning. Purpose: To relate subjective workload (WL) levels to errors for routine clinical tasks. Methods And Materials: Nine physicians (4 faculty and 5 residents) each performed 3 radiation therapy planning cases. The WL levels were subjectively assessed using the National Aeronautics and Space Administration Task Load Index (NASA-TLX). Individual performance was assessed objectively based on the severity grade of errors. The relationship between the WL and performance was assessed via ordinal logistic regression. Results: There was an increased rate of severity grade of errors with increasing WL (P value = .02). As the majority of the higher NASA-TLX scores and the majority of the performance errors were in the residents, our findings are likely most pertinent to radiation oncology centers with training programs. Conclusions: WL levels may be an important factor contributing to errors during radiation therapy planning tasks. abstract_id: PUBMED:30003644 The effect of workload on nurses' non-observance errors in medication administration processes: A cross-sectional study. Aim: This study, based on actual medical error cases involving nurses, sought to identify non-observance errors (those defying the standard operating procedures) in medication administration processes, and to clarify the relationship between nursing workload and such behaviours. Methods: Based on a cross-sectional survey, non-observance error cases were collected from three Japanese hospitals between January and December 2014, using self-reported data from participating nurses. Standard operating procedures and actual error content were compared to identify non-observance errors and workload.
The statistical analysis was used to determine the relationship between non-observance error and workload. Results: A total of 637 error cases were found in administering medication, of which 163 (25.6%) were workload-related non-observance errors. Individual analysis of the 163 cases identified seven workload issues that caused non-observance error and six categories of non-observance errors. The relationship between workload and such errors was also clarified. Conclusion: Our findings clarify the influence of workload on non-observance errors and may also help identify adjacent areas for specific improvements. abstract_id: PUBMED:23140183 Errors in palliative care: kinds, causes, and consequences: a pilot survey of experiences and attitudes of palliative care professionals. Background: Medical errors have recently been recognized as a relevant concern in public health, and increasing research efforts have been made to find ways of improving patient safety. In palliative care, however, studies on errors are scant. Objective: Our aim was to gather pilot data concerning experiences and attitudes of palliative care professionals on this topic. Methods: We developed a questionnaire, which consists of questions on relevance, estimated frequency, kinds and severity of errors, their causes and consequences, and the way palliative care professionals handle them. The questionnaire was sent to all specialist palliative care institutions in the region of Bavaria, Germany (n=168; inhabitants 12.5 million) reaching a response rate of 42% (n=70). Results: Errors in palliative care were regarded as a highly relevant problem (median 8 on a 10-point numeric rating scale). Most respondents experienced a moderate frequency of errors (1-10 per 100 patients). Errors in communication were estimated to be more common than those in symptom control. The causes most often mentioned were deficits in communication or organization. Moral and psychological problems for the person committing the error were seen as more frequent than consequences for the patient. Ninety percent of respondents declared that they disclose errors to the harmed patient. For 78% of the professionals, the issue was not a part of their professional training. Conclusion: Professionals acknowledge errors, in particular errors in communication, to be a common and relevant problem in palliative care, one that has, however, been neglected in training and research. Answer: Yes, complexity and workload do contribute to medical errors in modern palliative radiation treatment. An association was found between increased complexity and workload and higher rates of palliative therapy errors. Specifically, the study used "same day consultation, planning and treatment" as a proxy for workload and "previous treatment" and "multiple sites" as surrogates for complexity, and found that these factors were linked to an increased error rate in palliative radiation therapy (PUBMED:22713835). Additionally, another study found that increases in departmental workload, especially rapid changes, may lead to a higher occurrence of errors and incidents in radiation oncology (PUBMED:27355277). These findings suggest that when the department is busy, physician errors may be perpetuated due to an overwhelmed departmental checks system, leading to incidents reaching the patient. Therefore, it is evident that both complexity and workload are significant factors that can contribute to medical errors in the context of palliative radiation treatment.
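As a methodological aside (not part of the cited studies' text): the departmental-workload analysis above relates a daily workload measure to whether an incident occurred using logistic regression. The minimal Python sketch below shows that kind of model fitted by plain gradient ascent; the patient counts, incident labels and fitting settings are invented purely for illustration and are not the study's data.

import numpy as np

# Hypothetical data: patients treated per day and whether an incident occurred that day.
patients_per_day = np.array([38, 42, 45, 50, 55, 58, 60, 63, 66, 70, 74, 78], dtype=float)
incident = np.array([0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1], dtype=float)

# Standardise the predictor so simple gradient ascent converges comfortably.
x = (patients_per_day - patients_per_day.mean()) / patients_per_day.std()
X = np.column_stack([np.ones_like(x), x])   # intercept + workload term
beta = np.zeros(2)

for _ in range(20000):                      # gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.01 * (X.T @ (incident - p))

# Express the fitted slope on the original scale as an odds ratio per extra patient per day.
slope_per_patient = beta[1] / patients_per_day.std()
print("odds ratio per additional patient per day:", round(float(np.exp(slope_per_patient)), 3))

The exponential of the slope is how such workload effects are usually reported, i.e. the multiplicative change in the odds of an incident for each additional patient treated per day.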
Instruction: Response of the erythron and erythropoietin to autologous blood donations in paediatric subjects. Is erythropoietin supplement necessary? Abstracts: abstract_id: PUBMED:9269066 Response of the erythron and erythropoietin to autologous blood donations in paediatric subjects. Is erythropoietin supplement necessary? Objectives: This study was undertaken to evaluate the need for erythropoietin (Epo) therapy to augment autologous blood collection in adolescents undergoing spinal corrective surgery. Methods: We measured serum Epo and parameters of iron metabolism in 35 adolescents undergoing autologous blood collection for orthopaedic surgery. Ages of subjects ranged from 11 to 16 years (mean 15.5 years) with a female predominance. Generally, 10% of intravascular blood volume was collected once a week up to a total of three collections. Results: There was an average 2.5-fold rise in serum Epo over the period of blood collection. Epo increased immediately after blood collection. There was a 1.4-fold rise in reticulocyte count, consistent with the Epo response, and an average of 1.5 units of red blood cells (200 ml/unit) being produced over this period. Despite this there was an average fall of 2 g/dl (15%) in haemoglobin level. Serum ferritin and transferrin saturation also fell. Conclusions: Paediatric subjects are able to donate the required units of blood as they have a good Epo response to mild anaemia. The amount of blood donated did not exceed their total mobilisable iron and the iron supplement was adequate for red cell synthesis. abstract_id: PUBMED:11805645 Allogeneic transfusion requirements after autologous donations in posterior lumbar surgeries. Study Design: A retrospective study of blood transfusion practices after posterior lumbar spine surgery was performed. Objectives: To determine the overall use rate of autologous blood donations for different spine surgeries, and to identify the risk of requiring additional allogeneic blood transfusions. Summary Of Background Data: In an attempt to avoid allogeneic blood transfusions and their associated risks, patients frequently are asked to donate autologous blood before many elective spine surgeries. There is a lack of published data on the use rate for these autologous blood donations, and on their ability to prevent allogeneic blood exposure. Methods: A retrospective review of hospital charts and blood bank records was conducted on 191 consecutive patients who had undergone three categories of lumbar spine surgery: laminectomy alone, laminectomy with a noninstrumented posterolateral fusion, and laminectomy with an instrumented posterolateral fusion. Results: Nearly 80% of the autologous blood donated by patients who underwent simple laminectomies was wasted. However, the vast majority (70-90%) of patients who underwent fusion used their autologous blood. In the patients who underwent fusion, autologous blood donations decreased the risk of allogeneic blood transfusions by 75% in noninstrumented fusions and 50% in instrumented fusions, as compared with the patients who elected not to donate blood before the fusion (P < 0.05). A substantial number of patients who underwent instrumented fusions (nearly 40%) required additional allogeneic blood transfusions despite predonation of blood.
Conclusions: Autologous blood donations are indeed advantageous in decreasing allogeneic blood usage of patients undergoing fusion, but additional methods of blood conservation (intraoperative salvage and preoperative erythropoietin) seem necessary to diminish the allogeneic blood requirements further, especially in those patients undergoing instrumented lumbar fusion. abstract_id: PUBMED:7485923 Erythropoietin therapy during frequent autologous blood donations. Dose-finding study Avoidance of homologous blood products and patients' demand for preoperative autologous blood donation programs are increasing. As many of these patients are older, with a compromised cardiovascular system and a slow response of the erythropoietic system when anemia occurs, the feasibility and benefit of autologous blood donation is often limited. Augmentation of preoperative blood donation by therapy with recombinant human erythropoietin (rHuEPO) has been described in animal models and in patients. METHODS. In a multicenter, controlled, randomized trial, 49 patients scheduled for orthopaedic or vascular surgery received 0 (control group, n = 9), 200 (n = 10), 300 (n = 11), 400 (n = 10) or 500 (n = 9) U/kg rHuEPO (Erypo, Cilag, Sulzbach, distributor Fresenius, Oberursel, Germany) subcutaneously twice a week for 3 weeks while every week 450 ml blood was collected. Iron sulphate 100 mg was prescribed orally twice a day. Patients were ineligible if they had uncontrolled hypertension, recent myocardial infarction, haematological disorders or a history of seizures. Blood donation had to be cancelled if the haematocrit was below 30%. RESULTS. There was a significant (ANOVA) drop of the haematocrit value only in the control group, and end-point values for haematocrit and haemoglobin were significantly elevated in the 400 and 500 U/kg groups compared with the control group (Table 9). DISCUSSION. The erythropoietic stimulus of phlebotomy for autologous blood donations is often not efficient enough to guarantee a constant haematocrit. Lowering of the preoperative haematocrit jeopardizes the aim of avoidance of homologous blood transfusions. rHuEPO increased the efficiency of autologous blood collections, as predonation haematocrit values could be preserved in the high-dosage groups. As a consequence, homologous transfusions could be avoided. However, there were broad interindividual differences in the erythropoietic response, possibly due to limitations in iron availability. Adverse effects of rHuEPO therapy, such as hypertension, thrombosis or neurologic disorders, are mostly reported in patients with terminal kidney failure. No such disturbances were observed in the present study. CONCLUSION. rHuEPO ameliorates the preoperative decrease of haemoglobin and haematocrit values due to autologous blood donations in a dose-related fashion. The individually adjusted dosage of rHuEPO and iron supplementation merits further investigation. abstract_id: PUBMED:3379725 Effect of repeated whole blood donations on serum immunoreactive erythropoietin levels in autologous donors. The effect of repeated phlebotomy on serum immunoreactive erythropoietin levels was studied prospectively in 69 autologous blood donors. At the time of the initial phlebotomy, 11 men (33%) and two women (6%) were anemic; during the course of blood donations, anemia (defined as a hematocrit less than 0.41 for men and less than 0.36 for women) developed in an additional 17 men (71%) and 14 women (45%). 
Although there was an increase in the level of serum immunoreactive erythropoietin with successive phlebotomies, the increase was not substantially out of the normal range. The lack of an erythropoietic response to repeated phlebotomies in association with the small increment in the serum erythropoietin level was not due to iron deficiency, since the level of red-cell free protoporphyrin did not increase in these patients. We conclude that within the hematocrit range permissible for autologous blood donation, the degree of anemia experienced is insufficient to initiate an adequate increase in erythropoietin production; as a consequence, mild anemia develops in a majority of donors, and the volume of blood donated is inadequate to meet their operative needs. abstract_id: PUBMED:1703846 Hematologic parameters in repeated autologous blood donation Up to six units of autologous blood can be provided for patients with heart surgery, hip joint replacement or scoliosis. This study was undertaken to evaluate hematological parameters in juvenile and elderly patients and the tolerance of 6 weeks preoperative autologous blood donations. We furthermore investigated the approximate "net blood gain" of the autologous procedure. For an optimal stimulation of erythropoiesis, under vigorous substitution of ferrous sulfate, the autologous donations should start as early as 4 to 6 weeks instead of 2-3 weeks prior to the scheduled surgery, even if only 3 units are required prospectively. The net Hb gain of the autologous procedure in patients aged 12-68 years reached a mean of 141 g and 231 g Hb at 4 and 6 donations, respectively. This is equivalent to 2.5 and 4.1 homologous units of RBC (approximately 56 g Hb each). Up to 6 units of autologous blood can easily be provided by employing "additive solutions" (PAGGS mannitol), avoiding tedious alternatives like "leap frog-techniques" or freezing of blood. abstract_id: PUBMED:10771374 A survey of autologous blood collection and transfusion in Japan in 1997. Background: In spite of the fact that autologous blood is safest for a patient to receive, it is not generally appreciated that adverse reactions during donation and transfusion may occur. This study was conducted to assess the state and the risk of autologous blood transfusion in Japan in 1997. Study Design And Methods: Results of a nation-wide questionnaire-based survey are presented. The questionnaire assessed the number of autologous blood donations, donation procedures, and the adverse reactions associated with donation, preservation, recombinant erythropoietin administration and transfusion. Results: Between November 1996 and October 1997, 10,697,000 ml (or 53,485 units, 200 ml = 1 unit) of prestorage blood donations were made by 14,200 patients (averages: 1.9 donations/patient, 753 ml/patient, 398 ml/donation). Of these, 87% were transfused to the patients and the remainder were discarded. Using hemodilution and blood salvage intra- or postoperatively, some 2,540,000 ml of blood was collected and > 70% of patient-donors received such blood. Adverse reactions were observed with 1.6% (428/26,905) of donations, including 6 angina and 2 asthma attacks. There were 63 (0.2%) problems with 28,705 donations and 117 (0.5%) errors/problems reported for 24,929 units transfused; the most frequent problems were clotting on the units and breakage of the bags during storage. Hypotension using hemodilution (3.7%), coagulation (0.9%) or bacterial contamination (0.4%) using salvage were often observed.
A 10-20 ml volume of autologous fresh-frozen plasma was transfused to the wrong recipient. Conclusion: Autologous blood transfusion accounts for at least 1.1% (2.8% estimated) of the red cell supply in Japan. Errors and adverse reactions are not infrequent in autologous blood programmes. By introducing systematic safety policies, we will be able to make autologous blood transfusion safer. abstract_id: PUBMED:10227763 Autologous blood donation preceding coronary artery bypass graft operation in a hemodialysis patient. A 67-year-old male hemodialysis patient with abdominal aortic aneurysm and triple vessel coronary heart disease required autologous blood donation because of his blood type of Rh(-) before cardiovascular surgery. We performed autologous red blood cell and plasma collection by the switch back method with recombinant human erythropoietin therapy during the 5 weeks before the operation. Autologous platelet collection was also made the day before the operation. These autologous blood donations were safely and successfully performed along with hemodialysis. Some caution was needed for these procedures. The ultrafiltration rate had to be adjusted for blood collection or blood transfusion during hemodialysis in order not to disturb fluid balance. It was necessary to monitor the hyperkalemia of the stored autologous packed red blood cells. For platelet collection, blood in the extracorporeal circuit had to be concentrated because of the presence of renal anemia. Coronary artery bypass graft was safely and successfully performed with the autologous blood only. abstract_id: PUBMED:1703840 Preoperative autologous blood donation This presentation shows our experiences with the preoperative autologous blood donation programme in place since 1987; 246 patients of the cardiothoracic surgery department participated in this program. The preoperative concentration of hemoglobin was above 12 g/dl in 76.8% of the patients despite the frequent donations; 36.5% of the participants could be transfused with their own blood products. Further reduction of homologous blood transfusion could be achieved with a second preoperative plasmapheresis and the administration of erythropoietin. abstract_id: PUBMED:8782711 Iron homeostasis in preoperative autologous blood donation Objective: The role of iron metabolism, the value and the limits of oral as well as intravenous iron substitution in preoperative autologous blood donation are discussed according to the literature. Data Sources And Selection Criteria: The critical review of the German and English literature is based on a Medline backsearch covering the last 20 years. Results: The success of preoperative autologous blood donation substantially depends on the volume of whole-body iron and on the amount of storage iron which is available at the beginning of the donation phase. Since iron losses due to repeated blood donations within a few weeks cannot be replaced sufficiently by food, medical iron substitution seems to be appropriate. Nevertheless, neither oral nor intravenous iron substitution has so far been demonstrated to be a useful instrument for improving the erythropoietic response in non-iron-deficient patients after autologous blood donation. Only intravenous iron used in combination with recombinant erythropoietin seems to be an effective support for erythropoiesis during predeposit. Conclusions: At the moment intravenous iron medication in autologous blood donation should be restricted to well-established exceptional cases.
The use of intravenous iron combined with erythropoietin seems to be justified to avoid ineffective erythropoiesis and to achieve a dose reduction of recombinant erythropoietin. Since there are nearly no risks and a possible efficacy cannot be totally excluded, oral iron supplementation may be applied to patients who tolerate it well. Real iron deficiency has to be treated with iron application. Further clinical studies have to be done for a final validation of the efficacy of iron substitution in non-iron-deficient preoperative autologous blood donors. abstract_id: PUBMED:1504186 Autologous transfusion--organization and results of preoperative blood donation by heart surgery patients This study presents our preoperative autologous blood donation programme that has been in use since 1987; 246 cardiothoracic surgery patients participated in this program. 77% of all patients had preoperative concentrations of haemoglobin above 12 g/dl despite frequent donations. 36.5% of patients were transfused exclusively with their own blood products. Reduction of homologous blood transfusion has been achieved with a second preoperative plasmapheresis and a more restricted indication for blood transfusion. More blood donations could be performed with the application of erythropoietin, resulting in more frequent preoperative blood donations. Answer: The response of the erythron and erythropoietin (Epo) to autologous blood donations in pediatric subjects has been studied to determine if Epo supplementation is necessary. In a study involving 35 adolescents undergoing autologous blood collection for orthopaedic surgery, there was an average 2.5-fold rise in serum Epo over the period of blood collection, with an immediate increase after blood collection. There was also a 1.4-fold rise in reticulocyte count, consistent with the Epo response, and an average of 1.5 units of red blood cells being produced over this period. Despite this, there was an average fall of 2 g/dl (15%) in hemoglobin level. Serum ferritin and transferrin saturation also fell. The study concluded that pediatric subjects are able to donate the required units of blood as they have a good Epo response to mild anemia, and the iron supplement was adequate for red cell synthesis, suggesting that Epo supplementation may not be necessary (PUBMED:9269066). However, another study on erythropoietin therapy during frequent autologous blood donations found that the erythropoietic stimulus of phlebotomy for autologous blood donations is often not efficient enough to guarantee a constant hematocrit. In this study, recombinant human erythropoietin (rHuEPO) increased the efficiency of autologous blood collections, as predonation hematocrit values could be preserved in the high-dosage groups, and homologous transfusions could be avoided. This suggests that Epo supplementation can ameliorate the preoperative decrease of hemoglobin and hematocrit values due to autologous blood donations in a dose-related fashion (PUBMED:7485923). In summary, while pediatric subjects have a good endogenous Epo response to autologous blood donation, the necessity of Epo supplementation may depend on the individual's ability to maintain adequate hematocrit and hemoglobin levels. In cases where the natural Epo response is not sufficient to prevent anemia or where higher volumes of blood are collected, Epo supplementation could be beneficial to preserve hematocrit levels and avoid the need for homologous blood transfusions.
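As an arithmetic aside (not drawn from the cited studies' data): the "net haemoglobin gain" of a predonation programme is essentially the haemoglobin mass collected into the autologous units minus the fall in the patient's circulating haemoglobin mass. The short Python sketch below works through that calculation with invented donation volumes, haemoglobin values and a rough 70 ml/kg blood-volume estimate; only the approximate 56 g of haemoglobin per homologous unit is taken from the abstract above.

# All numbers are hypothetical and for illustration only.
weight_kg = 55.0
blood_volume_ml = 70.0 * weight_kg            # crude adult estimate (~70 ml/kg), an assumption

donations_ml = [450.0, 450.0, 450.0, 450.0]   # four weekly donations
hb_at_donation_g_per_dl = [13.5, 12.9, 12.4, 12.0]

# Haemoglobin mass removed with the donations (g/dl -> g/ml requires dividing by 100).
hb_donated_g = sum(v * hb / 100.0 for v, hb in zip(donations_ml, hb_at_donation_g_per_dl))

# Haemoglobin mass lost from the circulation between start and end of the programme.
hb_start_g_per_dl, hb_end_g_per_dl = 13.5, 11.5
hb_circulating_loss_g = blood_volume_ml * (hb_start_g_per_dl - hb_end_g_per_dl) / 100.0

net_hb_gain_g = hb_donated_g - hb_circulating_loss_g
unit_equivalents = net_hb_gain_g / 56.0       # ~56 g Hb per homologous RBC unit, per the abstract

print(round(hb_donated_g, 1), "g donated;", round(net_hb_gain_g, 1),
      "g net gain, about", round(unit_equivalents, 2), "unit equivalents")

With these made-up figures the net gain is of the same order as the roughly 141 g reported after four donations, which is the sense in which predonation only "creates" red cells to the extent that erythropoiesis keeps pace with the phlebotomies.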
Instruction: House dust mite avoidance measures improve peak flow and symptoms in patients with allergy but without asthma: a possible delay in the manifestation of clinical asthma? Abstracts: abstract_id: PUBMED:9314342 House dust mite avoidance measures improve peak flow and symptoms in patients with allergy but without asthma: a possible delay in the manifestation of clinical asthma? Background: Asthma caused by allergy to house dust mite is a growing problem. Patients with allergy who do not have asthma (yet) might develop asthma depending on exposure to precipitating factors. Objective: We sought to determine whether house dust mite avoidance measures have an effect on the development of asthma. Methods: Patients with allergy (n = 29) who had no diagnosis of asthma (FEV1 of 99.1% +/- 10.6% of predicted, peak flow variability of 5.21% +/- 3.41%, reversibility of FEV1 after 400 microg salbutamol of 3.92% +/- 3.75% according to the reference values) were randomly allocated (subjects blinded) to a treatment (n = 16) and a placebo group (n = 13). House dust mite avoidance treatment consisted of applying Acarosan (Allergopharma, J. Ganzer KG, Hamburg, Germany) (the placebo group used water) to the floors (living room, bedroom), and the use of covers for mattresses and bedding that were impermeable to house dust mite (the placebo group used cotton covers for mattresses only). We tested whether the intervention had an effect on peak flow parameters and asthma symptom scores during 6 weeks of treatment. Results: Significant improvements were seen in the treatment group in symptom scores (Borg score) for disturbed sleep, breathlessness, wheeze, and overall symptom score. Slight but statistically significant improvements in peak flow (morning, evening, and variability) were seen in the treatment group also. No significant changes were seen in the placebo group. Conclusions: Although this study is not long enough to study the development of asthma, the results indicate that house dust mite avoidance measures had an effect on peak flow parameters and asthma symptoms in patients with allergy but without asthma. These findings might imply that a shift in developing clinically manifest asthma could be achieved with house dust mite avoidance measures. To give a better answer to whether preventing the development of asthma is possible, larger studies with a longer follow-up period are necessary. abstract_id: PUBMED:8190492 Effect of a mite-killing agent on house dust and on symptoms of house dust allergy Unlabelled: The efficacy of the acaricide benzyl benzoate (Acarosan) has been followed in an open study for one year in patients with house dust mite allergy; the clinical signs and the mite allergen level have been considered. Methods: The furniture (beds, upholstered pieces, carpets) of 17 house dust mite allergic patients suffering from bronchial asthma and/or allergic rhinitis has been investigated. The mite content of the dust gained from the furniture has been determined with the help of the semiquantitative Acarex test. This test has been done before the mite elimination and 3, 6, 9 and 12 months after it. The patients' clinical signs were registered: symptoms, drug consumption and expiratory peak flow values were measured twice a day. Results: 9/17 beds have become free from mites, 6/17 beds have had fewer mites than before, 1/17 no change, 1/17 augmentation of mite content.
The proportion of days free from complaints was 26.8% at the beginning of the trial and 47.1% after 12 months, while drug consumption diminished over the same period. At the beginning of the trial, 6 children had a pathologic lability index based on peak-flow measurements; they improved significantly. Information about the mite content of the furniture helps to guide the elimination measures. Chemical mite elimination reduces the mite content of the flat and results in clinical improvement of house dust mite allergic patients. abstract_id: PUBMED:3353895 Effect of house dust mite avoidance measures on adult atopic asthma. Twenty-one adult patients with asthma, with positive skin test responses to the European house dust mite, Dermatophagoides pteronyssinus, were randomly allocated to a control group or to a group applying house dust mite avoidance measures. These included an initial application of liquid nitrogen to mattresses and bedroom carpets to kill the live house dust mite population. Histamine airway responsiveness, symptom scores, peak expiratory flow rates (PEF), and house dust mite numbers were determined during the two week pretrial and eight week trial periods. Nine patients in each group completed the study. By the end of the study there was a significant reduction in live mites in the "avoidance" group but not in the control group. The avoidance group showed a significant improvement in symptom scores measured on a linear analogue scale, in the number of hours each day spent wheezing (mean reduced from 8.6 to 4.5 hours), and in PEF (l/min) both in the morning (from 364 to 388) and in the evening (from 368 to 392). These changes were not found in the control group. The provocative concentration (PC) of histamine causing a 20% fall in FEV1 (PC20FEV1) had increased significantly in the avoidance group at eight weeks (from 0.58 to 2.3 mg/ml), whereas no change was seen in the control group (from 0.93 to 1.21 mg/ml). These results show that house dust mite avoidance, combined with initial killing of the mite by liquid nitrogen, diminishes airway responsiveness and improves asthma symptom control over an eight week period in adult asthmatic patients with house dust mite allergy. abstract_id: PUBMED:341952 House dust mite hyposensitization. A double-blind controlled trial of house dust mite hyposensitization was carried out in 14 patients with asthma, who were hypersensitive on skin testing to the house dust mite alone. Measurements were made, using a Wright's peak flowmeter, during the 15-month trial period. Precautions were taken in the home to reduce the mite population. At the end of the trial, no clinical improvement was noted subjectively or objectively, despite a reduced bronchial sensitivity to allergen in the treated group. The role of the house dust mite as a cause of asthma is discussed. abstract_id: PUBMED:29310755 Home Environmental Interventions for House Dust Mite. It has been 50 years since the dust mite was first appreciated to be a major source of allergen in house dust, and by extension a key trigger of allergic respiratory disease. Since that time a number of protein allergens have been identified and characterized, mainly from mite feces, and standardized mite extracts and IgE assays have been developed. Insights into the lifecycle of dust mites and aspects of mite allergen biology have shed light on the mechanisms that lead to respiratory disease and to the development of interventions that can minimize dust mite allergen exposure.
It is now clear that dust mite allergy is a key contributor to asthma in many parts of the world, and that long-term avoidance can be effective for preventing sensitization and minimizing the development and severity of respiratory disease. Here, we discuss the evidence linking dust mites with respiratory disease, outline studies that support the efficacy of home environmental interventions, and highlight practical methods that have been shown to be effective as part of a multifaceted approach to dust mite avoidance. abstract_id: PUBMED:17359604 House dust mite allergen avoidance and self-management in allergic patients with asthma: randomised controlled trial. Background: The efficacy of bed covers that are impermeable to house dust mites has been disputed. Aim: The aim of the present study was to investigate whether the combination of 'house dust mite impermeable' covers and a self-management plan, based on peak flow values and symptoms, leads to less use of inhaled corticosteroids (ICS) than self-management alone. Design Of Study: Prospective, randomised, double blind, placebo-controlled trial. Setting: Primary care in a south-eastern region of the Netherlands. Method: Asthma patients aged between 16 and 60 years with a house dust mite allergy requiring ICS were randomised to intervention and placebo groups. They were trained to use a self-management plan based on peak flow and symptoms. After a 3-month training period, the intervention commenced using house dust mite impermeable and placebo bed covers. The follow-up period was 2 years. Primary outcome was the use of ICS; secondary outcomes were peak expiratory flow parameters, asthma control, and symptoms. Results: One hundred and twenty-six patients started the intervention with house dust mite impermeable or placebo bed covers. After 1 and 2 years, significant differences in allergen exposure were found between the intervention and control groups (P < 0.001). No significant difference between the intervention and control groups was found in the dose of ICS (P = 0.08), morning peak flow (P = 0.52), peak flow variability (P = 0.36), dyspnoea (P = 0.46), wheezing (P = 0.77), or coughing (P = 0.41). There was no difference in asthma control between the intervention and control groups. Conclusion: House dust mite impermeable bed covers combined with self-management do not lead to reduced use of ICS compared with self-management alone. abstract_id: PUBMED:9784442 House dust mite control measures in the management of asthma: meta-analysis. Objective: To determine whether patients with asthma who are sensitive to mites benefit from measures designed to reduce their exposure to house dust mite antigen in the home. Design: Meta-analysis of randomised trials that investigated the effects on asthma patients of chemical or physical measures to control mites, or both, in comparison with an untreated control group. All trials in any language were eligible for inclusion. Subjects: Patients with bronchial asthma as diagnosed by a doctor and sensitisation to mites as determined by skin prick testing, bronchial provocation testing, or serum assays for specific IgE antibodies. Main Outcome Measures: Number of patients whose allergic symptoms improved, improvement in asthma symptoms, improvement in peak expiratory flow rate. Outcomes measured on different scales were combined using the standardised effect size method (the difference in effect was divided by the standard deviation of the measurements).
Results: 23 studies were included in the meta-analysis; 6 studies used chemical methods to reduce exposure to mites, 13 used physical methods, and 4 used a combination. Altogether, 41/113 patients exposed to treatment interventions improved compared with 38/117 in the control groups (odds ratio 1.20, 95% confidence interval 0.66 to 2.18). The standardised mean difference for improvement in asthma symptoms was -0.06 (95% confidence interval -0.54 to 0.41). For peak flow rate measured in the morning the standardised mean difference was -0.03 (-0.25 to 0.19). As measured in the original units this difference between the treatment and the control group corresponds to -3 l/min (95% confidence interval -25 l/min to 19 l/min). The results were similar in the subgroups of trials that reported successful reduction in exposure to mites or had long follow up times. Conclusion: Current chemical and physical methods aimed at reducing exposure to allergens from house dust mites seem to be ineffective and cannot be recommended as prophylactic treatment for asthma patients sensitive to mites. abstract_id: PUBMED:37119758 House dust mite allergy: The importance of house dust mite allergens for diagnosis and immunotherapy. House dust mite (HDM) allergy belongs to the most important allergies and affects approximately 65-130 million people worldwide. Additionally, untreated HDM allergy may lead to the development of severe disease manifestations such as atopic dermatitis or asthma. Diagnosis and immunotherapy of HDM allergic patients are well established but are often hampered by the use of mite extracts that are of bad quality and lack important allergens. The use of individual allergens seems to be a promising alternative to natural allergen extracts, since they represent well-defined components that can easily be produced and quantified. However, a thorough characterization of the individual allergens is required to determine their clinical relevance and to identify those allergens that are required for correct diagnosis of HDM allergy and for successful immunotherapy. This review gives an update on the individual HDM allergens and their benefits for diagnosis and immunotherapy of HDM allergic patients. abstract_id: PUBMED:10796618 House dust mite control measures for asthma. Background: The major allergen in house dust comes from mites. Chemical, physical and combined methods of reducing mite allergen levels are intended to reduce asthma symptoms in people who are sensitive to house dust mites. Objectives: The objective of this review was to assess the effects of reducing exposure to house dust mite antigens in the homes of mite-sensitive asthmatics. Search Strategy: We searched the Cochrane Airways Group trials register, checked reference lists of articles and hand-searched Respiration (1980 to 1996) and Clinical and Experimental Allergy (1980 to 1996). Selection Criteria: Randomised trials of mite control measures in asthmatic people known to be sensitive to house dust mites. Data Collection And Analysis: Two reviewers applied the trial inclusion criteria and extracted the data independently. One reviewer applied the trial quality assessment criteria. Study authors were contacted to clarify information. Main Results: Twenty-three trials were included, with four trials awaiting assessment. There was little difference in improvement of asthma between people in experimental groups compared to control groups (odds ratio 1.2, 95% confidence interval 0.66 to 2.18). 
Asthma symptom scores were also similar for the experimental and control groups (standardised mean difference -0.06, 95% confidence interval -0.54 to 0.41). These scores showed a high degree of heterogeneity. No significant difference was noted for medication usage (standardised mean difference -0.14, 95% confidence interval -0.43 to 0.15). Peak flow in the morning showed no significant difference between the experimental and the control groups (standardised mean difference -0.03, 95% confidence interval -0.25 to 0.19). Reviewer's Conclusions: Current chemical and physical methods aimed at reducing exposure to house dust mite allergens seem to be ineffective and cannot be recommended as prophylaxis for mite sensitive asthmatics. abstract_id: PUBMED:11405979 House dust mite control measures for asthma. Background: The major allergen in house dust comes from mites. Chemical, physical and combined methods of reducing mite allergen levels are intended to reduce asthma symptoms in people who are sensitive to house dust mites. Objectives: The objective of this review was to assess the effects of reducing exposure to house dust mite antigens in the homes of mite-sensitive asthmatics. Search Strategy: We searched the Cochrane Airways Group trials register, checked reference lists of articles and hand-searched Respiration (1980 to 1996) and Clinical and Experimental Allergy (1980 to 1996). The Cochrane Library is searched every three months. Selection Criteria: Randomised trials of mite control measures vs placebo or no treatment in asthmatic people known to be sensitive to house dust mites. Data Collection And Analysis: Two reviewers applied the trial inclusion criteria, assessed their quality and extracted the data independently. Study authors were contacted to clarify information. Main Results: Twenty-nine trials (939 patients in the analyses) were included, with two trials awaiting assessment. There was little difference in improvement of asthma between patients in experimental groups compared to control groups (relative risk 1.04, 95% confidence interval (95%CI) 0.83 to 1.31). Asthma symptom scores were also similar for the experimental and control groups (standardised mean difference (SMD) -0.07, 95% CI -0.35 to 0.22); however, there was significant heterogeneity between studies (p = 0.015). This appears to have been due, in part, to the parallel group studies of physical treatments. These three studies (107 patients) showed a significant reduction in symptoms; SMD -0.44 (95% CI -0.83, -0.06) with no heterogeneity between the trials. No significant difference was noted for medication usage (SMD -0.14, 95%CI -0.43 to 0.15). Peak flow in the morning showed no significant difference between the experimental and the control groups (SMD 0.04, 95%CI -0.13 to 0.21). Reviewer's Conclusions: Current chemical methods aimed at reducing exposure to house dust mite allergens seem to be ineffective and cannot be recommended as prophylaxis for mite sensitive asthmatics. Physical reduction methods may reduce asthma symptoms, but results of larger and more rigorous studies are required before any recommendations can be made concerning this approach. Answer: House dust mite avoidance measures have been shown to improve peak flow parameters and asthma symptoms in patients with allergy but without asthma, suggesting a potential delay in the manifestation of clinical asthma.
In a study where patients with allergy but no diagnosis of asthma were randomly allocated to a treatment group and a placebo group, significant improvements were seen in the treatment group in symptom scores for disturbed sleep, breathlessness, wheeze, and overall symptom score, as well as slight but statistically significant improvements in peak flow (morning, evening, and variability). No significant changes were observed in the placebo group. These findings imply that house dust mite avoidance measures could potentially shift the development of clinically manifest asthma, although larger studies with a longer follow-up period are necessary to confirm this possibility (PUBMED:9314342). Other studies have also reported clinical improvements in patients with house dust mite allergy following mite avoidance measures. For instance, the use of a mite-killing agent, benzyl-benzoate (Acarosan), resulted in a reduction of mite content in the furniture of patients' homes, leading to an increase in days free from complaints and a decrease in drug consumption, as well as improvements in peak flow measurements (PUBMED:8190492). Similarly, adult patients with asthma and positive skin test responses to house dust mites showed significant improvements in symptom scores, hours spent wheezing, and peak expiratory flow rates after applying house dust mite avoidance measures, including the use of liquid nitrogen to kill live mites (PUBMED:3353895). However, it is important to note that not all studies have found house dust mite avoidance measures to be effective. A meta-analysis concluded that current chemical and physical methods aimed at reducing exposure to house dust mite allergens were ineffective and could not be recommended as prophylactic treatment for asthma patients sensitive to mites (PUBMED:9784442). Another review also reported little difference in improvement of asthma symptoms between experimental and control groups when applying mite control measures (PUBMED:11405979). In summary, while there is evidence to suggest that house dust mite avoidance measures can improve symptoms and peak flow in patients with allergy, the impact on the delay or prevention of clinical asthma requires further investigation.
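To make the meta-analytic method cited in this answer concrete: in the standardised effect size approach used by the meta-analysis (PUBMED:9784442), each trial's treatment-control difference is divided by the pooled standard deviation so that outcomes measured on different scales can be combined, and the per-trial estimates are then pooled by inverse-variance weighting. The Python sketch below is a simplified illustration with invented trial numbers; Hedges' small-sample correction and heterogeneity statistics are deliberately omitted.

import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    # Standardised mean difference (Cohen's d style) and its approximate variance.
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd
    var = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    return d, var

# Two hypothetical trials reporting symptom scores on different scales (lower = better).
trials = [
    smd(4.1, 2.2, 25, 4.5, 2.4, 25),      # 0-10 symptom scale
    smd(16.0, 7.5, 30, 17.1, 8.0, 28),    # 0-40 symptom diary
]

weights = [1.0 / var for _, var in trials]
pooled = sum(w * d for (d, _), w in zip(trials, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
print("pooled SMD:", round(pooled, 2),
      "95% CI:", (round(pooled - 1.96 * se, 2), round(pooled + 1.96 * se, 2)))

A pooled SMD whose confidence interval comfortably includes zero, as in the reviews above, is what underlies the conclusion that the measured benefit of mite-control measures is small or absent.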
Instruction: Black box warning: is ketorolac safe for use after cardiac surgery? Abstracts: abstract_id: PUBMED:24231193 Black box warning: is ketorolac safe for use after cardiac surgery? Objective: In 2005, after the identification of cardiovascular safety concerns with the use of nonsteroidal anti-inflammatory drugs (NSAIDs), the FDA issued a black box warning recommending against the use of NSAIDs following cardiac surgery. The goal of this study was to assess the postoperative safety of ketorolac, an intravenously administered NSAID, after cardiac surgery. Design: Retrospective observational study. Setting: Single center, regional hospital. Participants: A total of 1,309 cardiac surgical patients (78.1% coronary bypass, 28.0% valve) treated between 2006 and 2012. Interventions: A total of 488 of these patients received ketorolac for postoperative analgesia within 72 hours of surgery. Measurement And Main Results: Ketorolac-treated patients were younger, had better preoperative renal function, and underwent less complex operations compared with non-ketorolac patients. Ketorolac was administered, on average, 8.7 hours after surgery (mean doses: 3.1). Postoperative outcomes for ketorolac-treated patients were similar to those expected using Society of Thoracic Surgery database risk-adjusted outcomes. In unadjusted analysis, patients who received ketorolac had similar or better postoperative outcomes compared with patients who did not receive ketorolac, including gastrointestinal bleeding (1.2% v 1.3%; p = 1.0), renal failure requiring dialysis (0.4% v 3.0%; p = 0.001), perioperative myocardial infarction (1.0% v 0.6%; p = 0.51), stroke or transient ischemic attack (1.0% v 1.7%; p = 0.47), and death (0.4% v 5.8%; p < 0.0001). With adjustment in a multivariate model, treatment with ketorolac was not a predictor for adverse outcome in this cohort (odds ratio: 0.72; p = 0.23). Conclusions: Ketorolac appears to be well-tolerated for use when administered selectively after cardiac surgery. Although a black box warning exists, the data highlights the need for further research regarding its perioperative administration. abstract_id: PUBMED:34514869 Postoperative Use of Ketorolac Improves Pain Management and Decreases Narcotic Use Following Primary Cleft Palate Surgery. Objective: To study the efficacy and safety profile of ketorolac in cleft palate surgery. Design: Retrospective analysis of patients who underwent primary cleft palate surgery and received either postoperative ketorolac or opioids. Setting: Tertiary care children's hospital. Patients, Participants: Eighty-nine patients enrolled who were all younger than 36 months of age, not dependent on a gastrostomy tube, with no history of bleeding disorders, and had undergone their primary cleft palate procedure by one specific surgeon between January 2010 and June 2019. Interventions: n/a. Main Outcome Measure: Morphine equivalent dose (MED), Face, Legs, Activity, Cry, Consolability (FLACC) score, length of stay (LOS), total oral intake (mL), total oral intake/LOS, and postoperative adverse events between ketorolac and no ketorolac groups. Results: MED, FLACC score, and LOS were significantly lower in the ketorolac group compared to the no ketorolac group. One patient in the ketorolac group had a bleeding event. Conclusions: Use of ketorolac significantly decreased narcotic usage and pain scores as reported by the FLACC score. Moreover, postoperative bleeding was rare in both ketorolac and no ketorolac groups.
abstract_id: PUBMED:34667691 Enhanced Recovery After Surgery Protocol for Lumbar Spinal Surgery With Regional Anesthesia: A Retrospective Review. Background In the USA, spinal fusion surgery incurs the highest hospital cost. Despite the recent advances in the application of enhanced recovery after surgery (ERAS) protocols in these surgeries, the efficacy of these protocols in improving the perioperative outcomes remains unclear. We conducted a retrospective review as a quality improvement (QI) project to analyze the efficacy of the ERAS protocol with intraoperative modified thoracolumbar interfascial plane (mTLIP) block to determine whether these interventions reduce the length of stay (LOS) and opioid requirements during the postoperative period. Methods Records of adult patients (>18 yrs) who underwent elective lumbar spinal fusion or laminectomy at our institute were reviewed. Patients were administered oral gabapentin and acetaminophen preoperatively. Prior to incision, an mTLIP block was performed using liposomal bupivacaine. Intraoperatively, ketamine, ketorolac, and tranexamic acid were administered. Postoperatively, pain was managed with scheduled acetaminophen, ketorolac, and a low-dose ketamine infusion. Hydromorphone and oxycodone were administered for breakthrough pain. Patients who underwent a similar procedure without the ERAS protocol were chosen as controls to assess the efficacy of the ERAS protocol. Data pertaining to patient demographics, operative and perioperative use of analgesics, LOS, 90-day readmissions, and morbidity were collected. Patients who underwent laminectomy and spinal fusion surgery were analyzed separately. Results A total of 65 patients were identified: laminectomy (n = 24) and spinal fusion surgery (n = 41). The laminectomy patients comprised a treatment group (n = 12) and a control group (n = 12). In the treatment group receiving the ERAS protocol with regional anesthesia via the mTLIP block (n = 12), opioid requirement was reduced by 51.42% [P = 0.03], and LOS was reduced by 2.04 days [P = 0.01] (0.75 days vs. 2.79 days). The spinal fusion patients comprised a treatment group (n = 15) and a control group (n = 26). In the treatment group receiving the ERAS protocol with regional anesthesia via the mTLIP block (n = 15), opioid requirement was reduced by 38.33% [P = 0.04]. No difference in LOS was observed (5.4 days vs. 4.88 days, P = 0.28). Conclusion In patients undergoing lumbar spinal surgery with an ERAS protocol that incorporated regional anesthesia via the mTLIP block, we observed a statistically significant reduction in the LOS for lumbar laminectomy and a significant reduction in opioid administration for lumbar laminectomies and spinal fusion surgery. abstract_id: PUBMED:28493461 Ophthalmic nepafenac use in the Netherlands and Denmark. Purpose: To describe nepafenac use in the Netherlands and Denmark with reference to its approved indications. For context, we also describe the use of ketorolac and diclofenac. Methods: We identified users in the PHARMO Database Network (the Netherlands, 2008-2013) and the Danish national health registers (Denmark, 1994-2014). We described prevalence of cataract surgery and duration of use in patients with cataract surgery with and without diabetes. Results: In the Netherlands, 9530 nepafenac users (mean age, 71 years; 60% women) contributed 12 691 therapy episodes, of which 21% had a recently recorded cataract surgery.
Of 2266 episodes in adult non-diabetic patients with cataract surgery, 60% had one bottle dispensed (treatment duration ≤21 days). Of 441 episodes in adult diabetic patients with cataract surgery, 90% had up to two bottles dispensed (≤60 days). Denmark had 60 403 nepafenac users (mean age, 72 years; 58% women) and 73 648 episodes (41% had recorded cataract surgery). Of 26 649 nepafenac episodes in adult non-diabetic patients with cataract surgery, 92% had one bottle dispensed. Of 3801 episodes in adult diabetic patients with cataract surgery, 99.8% had up to two bottles dispensed. Use patterns of nepafenac, ketorolac and diclofenac were roughly similar in the Netherlands, but not in Denmark. Conclusion: Less than half of therapy episodes were related to cataract surgery; around 90% of episodes with surgery were within the approved duration. Underrecording of ophthalmic conditions and procedures was a challenge in this study. abstract_id: PUBMED:26106831 Ketorolac Use and Postoperative Complications in Gastrointestinal Surgery. Objective: To study the association between ketorolac use and postoperative complications. Background: Nonsteroidal anti-inflammatory drugs may impair wound healing and increase the risk of anastomotic leak in colon surgery. Studies to date have been limited by sample size, inability to identify confounding, and a focus limited to colon surgery. Methods: Ketorolac use, reinterventions, emergency department (ED) visits, and readmissions in adults (≥ 18 years) undergoing gastrointestinal (GI) operations was assessed in a nationwide cohort using the MarketScan Database (2008-2012). Results: Among 398,752 patients (median age 52, 45% male), 55% underwent colorectal surgery, whereas 45% had noncolorectal GI surgery. Five percent of patients received ketorolac. Adjusting for demographic characteristics, comorbidities, surgery type/indication, and preoperative medications, patients receiving ketorolac had higher odds of reintervention (odds ratio [OR] 1.20, 95% confidence interval [CI] 1.08-1.32), ED visit (OR 1.44, 95% CI 1.37-1.51), and readmission within 30 days (OR 1.11, 95% CI 1.05-1.18) compared to those who did not receive ketorolac. Ketorolac use was associated with readmissions related to anastomotic complications (OR 1.20, 95% CI 1.06-1.36). Evaluating only admissions with ≤ 3 days duration to exclude cases where ketorolac might have been used for complication-related pain relief, the odds of complications associated with ketorolac were even greater. Conclusions: Use of intravenous ketorolac was associated with greater odds of reintervention, ED visit, and readmission in both colorectal and noncolorectal GI surgery. Given this confirmatory evaluation of other reports of a negative association and the large size of this cohort, clinicians should exercise caution when using ketorolac in patients undergoing GI surgery. abstract_id: PUBMED:32565903 Breast cancer and the black swan. Most current research in cancer is attempting to find ways of preventing patients from dying after metastatic relapse. Driven by data and analysis, this project is an approach to solve the problem upstream, i.e., to prevent relapse. This project started with the unexpected observation of bimodal relapse patterns in breast and a number of other cancers. This was not explainable with the current cancer paradigm that has guided cancer therapy and early detection for many years. 
After much analysis using computer simulation and input from a number of medical specialties, we eventually came to the conclusion that the surgery to remove the primary tumour produced systemic inflammation for a week after surgery. This systemic inflammation apparently caused cancer cells and micrometastases to exit dormant states and resulted in relapses in the first 3 years post-surgery. It was determined in a retrospective study that the common inexpensive perioperative non-steroidal anti-inflammatory drug (NSAID) ketorolac could curtail the early relapse events after breast cancer surgery. A second retrospective study strongly confirmed this, but an apparently underpowered prospective study showed no advantage. We are analysing these data and are now proposing to test the perioperative NSAID at Beth Israel Deaconess Medical Centre with triple-negative breast cancer (TNBC) patients, the category that could respond best to the perioperative NSAID. If this works as well as we expect, we would then transfer this technology to low- and/or middle-income countries (LMICs), starting with Nigeria, where the early-onset type of TNBC is common. There is an unmet need in LMICs, especially in countries like Nigeria (190 million population), for a means to prevent surgery-induced relapse, which we are attempting to resolve. This work thus aims to describe possible mechanisms and ways to test a solution addressing an unmet need. But first, we consider the context, including a historical perspective, which is important to explain how and why a Kuhnian paradigm shift may be considered. abstract_id: PUBMED:19564004 Appropriateness of ketorolac use in a trauma hospital Objective: To evaluate the suitability of ketorolac and non-steroidal anti-inflammatory drugs (NSAIDs) and other analgesic drugs currently used in the hospital. Material And Method: We followed the steps to develop a PDCA cycle (plan, do, check, act), or quality improvement cycle. The quality problem was analysed using an Ishikawa diagram. We defined both qualitative quality indicators, which measure prescription quality, and quantitative ones (defined daily dose, DDD/100BDs), which measure drug consumption, as the objectives to achieve. The study was conducted in all patients admitted to the orthopaedic and trauma surgery and plastic surgery departments with unit-dose dispensing systems. The strategy used was to give information to physicians through meetings and documentation. Finally, the results were analysed and compared with the initial objectives. Results: The study was performed on 260 patients in the first study period and 292 in the second. Qualitative indicators: intravenous ketorolac use for ≤2 days increased by 25.5% (p<0.001); in patients ≥65 years old, use at doses ≤60 mg/day increased by 27.7% (p<0.05). Quantitative indicators: in the second study period, ketorolac use decreased (plastic surgery department: 61.8 DDD/100BDs to 14.8), whereas tramadol, ibuprofen and metamizole increased (plastic surgery department: 0 to 14.1 in tramadol, 8.7 to 48.6 in ibuprofen and 50.1 to 71 in metamizole). Conclusions: Appropriateness of ketorolac, NSAID and tramadol use has been achieved, thus improving patient safety. Strategies have been effective. abstract_id: PUBMED:32474196 Reassessing Opioid Use in Breast Surgery. Background: This study aims to assess multimodal pain management and opioid prescribing practices in patients undergoing breast surgery.
Methods: A retrospective review of patients undergoing breast surgery at an academic medical center between April 1, 2018 and September 30, 2019, was performed. Patients with a history of recent opioid use or conditions precluding use of nonsteroidal anti-inflammatory drugs (NSAIDs) or acetaminophen (APAP) were excluded. Opioid-sparing pain regimens were assessed. Opioids prescribed on discharge were recorded as oral morphine equivalents (OMEs), and concordance with the Opioid Prescribing Engagement Network (OPEN) was determined. Results: The total study population consisted of 518 patients. 358 patients underwent minor outpatient procedures (sentinel lymph node biopsy, lumpectomy, and excisional biopsy), 10-40% of whom were appropriately prescribed as per the OPEN. Perioperatively, 53.9% of patients received APAP, 24.6% NSAIDs, 20.4% gabapentin, and 0.3% blocks; intraoperatively, 95.8% received local anesthetic and 25.7% ketorolac. For mastectomy without reconstruction, 63-88% of prescriptions were concordant with the OPEN. For mastectomy with reconstruction, discharge opioids ranged from 25 to 400 OMEs with a mean of 134.4 OMEs; 25% of patients received a refill. Of all patients undergoing mastectomy ± reconstruction, 62.5% received APAP, 18.8% NSAIDs, 38.8% pregabalin, and 20.6% locoregional block perioperatively; 37.5% received local anesthetic and 15.6% ketorolac intraoperatively. Of 143 inpatient stays, 89% received APAP, 38% NSAID, and 29% benzodiazepines; 29 patients received no opioids as inpatients but were still prescribed 25-200 OMEs on discharge. Conclusions: There is a need for a multidisciplinary approach to pain management with the use of enhanced recovery after surgery protocols as potential means to standardize perioperative regimens and mitigate opioid overprescription. abstract_id: PUBMED:32340809 Ketorolac use and anastomotic leak in patients with esophageal cancer. Objectives: Recent evidence has shown an association between postoperative ketorolac use and anastomotic leak in patients undergoing intestinal and colorectal operations, but this relationship has been minimally explored after esophagectomy. As the use of nonopioid pain control and enhanced recovery protocols is increasingly prioritized, determination of a possible correlation between perioperative ketorolac use and leak is essential. Methods: Records of patients undergoing esophagectomy for adenocarcinoma at a single institution from 2006 to 2018 were reviewed for occurrence of anastomotic leak. Institutional pharmacy records were queried for ketorolac administration during the surgical case through the time of discharge. Multivariable logistic regression was used to determine the relationship between ketorolac administration and anastomotic leak. Results: A total of 1019 patients met inclusion criteria, the majority of whom were male (907, 89%) with a median age of 62 years. Patients predominantly presented with locoregionally advanced disease and were treated with initial chemoradiation. Ketorolac was administered to 686 patients (67%); use was observed to increase over the study period from 49% in 2006 to 92% in 2016. Conversely, anastomotic leak occurred in 87 patients (9%) overall and decreased over time from 15% (11/72) in 2006 to 2% (2/83) in 2018. Upon multivariable analysis, neither ketorolac administration evaluated as a categorical variable (odds ratio, 0.99; P = .958) nor as a continuous variable using dose (odds ratio, 1.00; P = .843) demonstrated an association with anastomotic leak.
Conclusions: Ketorolac in the postoperative period after esophagectomy has become an integral component of enhanced recovery pathways and does not appear to be associated with anastomotic leak. abstract_id: PUBMED:10064634 Postoperative hemorrhage after tonsillectomy: use of ketorolac tromethamine. Recent reports have associated an increased incidence of bleeding after tonsillectomy with the perioperative use of ketorolac tromethamine. To review this association, we examined the hospital and office records of 310 pediatric patients who underwent tonsillectomy with or without adenoidectomy at our institution during a 2-year period. Of these patients, 213 received ketorolac administered as a single dose at the conclusion of the procedure. The remaining 97 patients did not receive ketorolac. The frequency of postoperative hemorrhage was not found to differ significantly between these 2 groups (2.3% vs. 3.1%, respectively, P = 0.71). Furthermore, the average time to discharge after surgery was significantly shorter in those patients who received ketorolac than in those who did not (8.5 hours vs. 12.5 hours, respectively, P < 0.0001). The frequency of overnight hospital stays was also significantly lower in those patients who received ketorolac (16.0% vs. 31.6%, respectively, P < 0.01). Concern over the potential for increased hemorrhage after tonsillectomy has led several authors to caution against the use of ketorolac in this setting. In our study, however, the use of ketorolac was not found to increase the incidence of posttonsillectomy hemorrhage and furthermore was associated with a significant decrease in the length of hospital stay as well as a decreased likelihood of overnight hospital stay after surgery. Answer: The safety of ketorolac after cardiac surgery has been a concern due to cardiovascular safety issues identified with the use of nonsteroidal anti-inflammatory drugs (NSAIDs), leading the FDA to issue a black box warning in 2005 recommending against their use following cardiac surgery. However, a retrospective observational study conducted at a single center, regional hospital, which included 1,309 cardiac surgical patients, found that ketorolac appears to be well-tolerated when administered selectively after cardiac surgery. In this study, 488 patients received ketorolac for postoperative analgesia within 72 hours of surgery. The outcomes for ketorolac-treated patients were similar or better compared to those who did not receive ketorolac, including rates of gastrointestinal bleeding, renal failure requiring dialysis, perioperative myocardial infarction, stroke or transient ischemic attack, and death. After adjusting for various factors in a multivariate model, treatment with ketorolac was not a predictor for adverse outcomes in this cohort. These findings suggest that, despite the black box warning, ketorolac may be safe for use after cardiac surgery, highlighting the need for further research regarding its perioperative administration (PUBMED:24231193).
Instruction: Is alpha-synuclein in the colon a biomarker for premotor Parkinson's disease? Abstracts: abstract_id: PUBMED:27803984 α-Synuclein in the colon and premotor markers of Parkinson disease in neurologically normal subjects. Extranigral non-motor signs precede the first motor manifestations of Parkinson's disease by many years in some patients. The presence of α-synuclein deposition within colon tissues in patients with Parkinson's disease can aid in identifying early neuropathological changes prior to disease onset. In the present study, we evaluated the roles of non-motor symptoms and signs and imaging biomarkers of nigral neuronal changes and α-synuclein accumulation in the colon. Twelve subjects undergoing colectomy for primary colon cancer were recruited for this study. Immunohistochemical staining for α-synuclein in normal and phosphorylated forms was performed in normally appearing colonic tissue. We evaluated 16 candidate premotor risk factors in this study cohort. Among them, ten subjects showed positive immunostaining with normal- and phosphorylated-α-synuclein. An accumulation of premotor markers in each subject was accompanied by positive normal- and phosphorylated-α-synuclein immunostaining, ranging from 2 to 7 markers per subject, whereas the absence of Lewy bodies in the colon was associated with relatively low numbers of premotor signs. A principal component analysis and a cluster analysis of these premotor markers suggest that urinary symptoms were commonly clustered with deposition of peripheral phosphorylated-α-synuclein. Among other premotor markers, color vision abnormalities were related to non-smoking. This mathematical approach confirmed the clustering of premotor markers in the preclinical stage of Parkinson's disease. This is the first report showing that α-synuclein in the colon and other premotor markers are related to each other in neurologically normal subjects. abstract_id: PUBMED:22550057 Is alpha-synuclein in the colon a biomarker for premotor Parkinson's disease? Evidence from 3 cases. Background: Despite clinicopathological evidence that Parkinson's disease (PD) may begin in peripheral tissues, identification of premotor Parkinson's disease is not yet possible. Alpha-synuclein aggregation underlies Parkinson's disease pathology, and its presence in peripheral tissues may be a reliable disease biomarker. Objective: We sought evidence of alpha-synuclein pathology in colonic tissues before the development of characteristic Parkinson's disease motor symptoms. Methods: Old colon biopsy samples were available for three subjects with PD. Biopsies were obtained 2-5 years before PD onset. We performed immunohistochemistry studies for the presence of alpha-synuclein and Substance P in these samples. Results: All subjects showed immunostaining for alpha-synuclein (two, five and two years before the first motor Parkinson's disease symptom). No similar alpha-synuclein immunostaining was seen in 23 healthy controls. Staining of samples for substance P suggested colocalization of alpha-synuclein and substance P in perikarya and neurites. Conclusions: This is the first demonstration of alpha-synuclein in colon tissue prior to onset of PD. Additional study is required to determine whether colonic mucosal biopsy may be a biomarker of premotor PD. abstract_id: PUBMED:27324838 Similar α-Synuclein staining in the colon mucosa in patients with Parkinson's disease and controls.
Background: The gut is proposed as a starting point of idiopathic PD (IPD), but the presence of α-synuclein in the IPD colon mucosa is debated. Objectives: The objective of this study was to evaluate if α-synuclein in the colon mucosa can serve as a biomarker of IPD. Methods: Immunohistochemistry was used to locate and quantify, in a blinded approach, α-synuclein in the mucosa from biopsies of the right and left colon in 19 IPD patients and 8 controls. Results: Total α-synuclein was present in all but 1 IPD patient and in all controls; phosphorylated α-synuclein was present in all subjects. There was no intensity difference depending on disease status. Staining of total α-synuclein was stronger in the right colon (p = .04). Conclusions: Conventional immunohistochemistry α-synuclein staining in colon mucosal biopsies cannot serve as a biomarker of idiopathic PD. These findings do not contradict the assumption of disease starting in the colon, and a colon segment-specific risk for disease initiation can still be hypothesized. abstract_id: PUBMED:28776303 Premotor Diagnosis of Parkinson's Disease. Typical Parkinsonian symptoms consist of bradykinesia plus rigidity and/or resting tremor. Some time later, postural instability occurs. Pre-motor symptoms such as hyposmia, constipation, REM sleep behavior disorder and depression may antecede these motor symptoms for years. It would be ideal if we had a biomarker that would allow us to predict which individuals with one or two of these pre-motor symptoms will develop the movement disorder Parkinson's disease (PD). Thus, it is interesting to learn that biopsies of the submandibular gland or colon biopsies may be a means to predict PD, if there is a high amount of abnormally folded alpha-synuclein and phosphorylated alpha-synuclein. This would be of relevance if we had means available to stop the propagation of abnormal alpha-synuclein, which is otherwise one of the drivers of this spreading disease. abstract_id: PUBMED:26686342 Alpha-synuclein in gastric and colonic mucosa in Parkinson's disease: Limited role as a biomarker. Background: Gastric and colonic alpha-synuclein immunoreactivity has been reported in patients with Parkinson's disease (PD). However, enteric alpha-synuclein has also been reported in healthy individuals. Objectives: We aimed to investigate the utility of alpha-synuclein immunoreactivity from gastric and colonic mucosal tissues obtained by routine endoscopy to detect PD, and to correlate the pathological burden of alpha-synuclein with motor and nonmotor features of PD. Methods: We recruited 104 study subjects, consisting of 38 patients with PD, 13 patients with probable multiple system atrophy (MSA), and 53 healthy controls. Gastric and colonic mucosal tissues obtained by endoscopic gastroduodenoscopy and colonoscopy were assessed using alpha-synuclein immunohistochemistry. Detailed motor and nonmotor features of PD were correlated with enteric alpha-synuclein immunoreactivity. Results: No difference was seen in the enteric α-SYN immunoreactivity among patients with PD (31.6% for stomach and 10.4% for colon), patients with MSA (40.0% for stomach and 8.0% for colon), and healthy controls (33.3% for stomach and 18.5% for colon). The frequency of positive alpha-synuclein immunoreactivity was higher in gastric biopsy tissues than in colonic biopsy tissues in all of the study groups (P < 0.05).
No significant correlation was found between the presence of alpha-synuclein immunoreactivity and the motor and nonmotor features of PD. Conclusions: The presence of alpha-synuclein immunoreactivity in gastric and colonic mucosa was detected in a similar manner in patients with PD, patients with MSA, and controls, thus suggesting a limited role of enteric mucosal alpha-synuclein as a diagnostic biomarker for PD. Future studies are warranted to detect pathological alpha-synuclein strains. abstract_id: PUBMED:22508282 Biochemical premotor biomarkers for Parkinson's disease. A biomarker is a biological characteristic that is objectively measured and evaluated as an indicator of normal biological or pathologic processes or of pharmacologic responses to a therapeutic intervention. We reviewed the current status of target protein biomarkers (e.g., total/oligomeric α-synuclein and DJ-1) in cerebrospinal fluid, as well as unbiased processes that can be used to discover novel biomarkers. We also provide details about strategies toward potential populations/models and technologies, including the need for standardized sampling techniques, to pursue the identification of new biochemical markers in the premotor stage of Parkinson's disease in the future. abstract_id: PUBMED:24375496 Alimentary, my dear Watson? The challenges of enteric α-synuclein as a Parkinson's disease biomarker. An accurate early diagnostic test for Parkinson's disease (PD) is a critical unmet need. Recently, independent groups using different histological techniques have reported that the presence of alpha-synuclein (α-syn) in colonic biopsy tissue is able to distinguish living patients with PD from those without the disease. In addition, a further study has suggested that the presence of α-syn in colonic biopsy tissue may be evident in early or even prodromal PD. However, several questions remain regarding the translation of these findings into using the assessment of α-syn deposition in the enteric nervous system as a diagnostic biomarker for prodromal PD. Here we address critical issues related to the location and quantification of enteric α-syn, detection of α-syn with currently available histological techniques, timing of detection of α-syn deposition, and, most crucially, whether enteric α-syn can distinguish those with PD from both healthy individuals and individuals with other related diseases. We conclude that, although enteric α-syn is a very exciting prospect, further studies will be vital to determine whether enteric α-syn deposition has the potential to be the biomarker for prodromal PD that the field so desperately seeks. abstract_id: PUBMED:28353371 The Systemic Synuclein Sampling Study: toward a biomarker for Parkinson's disease. The search for a biomarker for Parkinson's disease (PD) has led to a surge in literature describing peripheral α-synuclein (aSyn) in both biofluids and biopsy/autopsy tissues. Despite encouraging results, attempts to capitalize on this promise have fallen woefully short. The Systemic Synuclein Sampling Study (S4) is uniquely designed to identify a reproducible diagnostic and progression biomarker for PD. S4 will evaluate aSyn in multiple tissues and biofluids within the same subject and across the disease spectrum to identify the optimal biomarker source and provide vital information on the evolution of peripheral aSyn throughout the disease.
Additionally, S4 will correlate the systemic aSyn profile with an objective measure of nigrostriatal dopaminergic function furthering our understanding of the pathophysiological progression of PD. abstract_id: PUBMED:24262174 Premotor parkinsonism models. Aside from motor symptoms, Parkinson's disease is associated with a number of non-motor symptoms arising many years before motor signs. The most prevalent and predictive premotor symptoms include olfactory dysfunction, REM sleep behaviour disorder (RBD) and constipation. Several studies in toxin- or gene-based models have specifically investigated these non-motor signs in a premotor context. Altered olfactory discrimination has been reproduced both in toxin- and in gene-based models, and genetic models may also reproduce the underlying pathophysiological mechanisms. Sleep alterations have also been demonstrated, mostly in toxin-based models, and RBD-like features can be demonstrated in non-human primates. Gastrointestinal dysfunction and associated enteric pathology are reproduced both in toxin and genetic models displaying reduced colonic motility and enteric α-synuclein accumulation. This review describes the main premotor symptoms that are recapitulated both in toxin- and in gene-based models, their relevance to the human disease, and their potential to understand the underlying mechanisms of early symptoms and disease progression. abstract_id: PUBMED:30619053 Dermal Phospho-Alpha-Synuclein Deposition in Patients With Parkinson's Disease and Mutation of the Glucocerebrosidase Gene. Heterozygous mutations in the glucocerebrosidase gene (GBA1) represent the most common genetic risk factor for Parkinson's disease (PD) and are histopathologically associated with a widespread load of alpha-synuclein in the brain. Therefore, PD patients with GBA1 mutations are a cohort of high interest for clinical trials on disease-modifying therapies targeting alpha-synuclein. There is evidence that detection of phospho-alpha-synuclein (p-syn) in dermal nerve fibers might be a biomarker for the histopathological identification of PD patients even at premotor or very early stages of disease. It is so far unknown whether dermal p-syn deposition can also be found in PD patients with GBA1 mutations and may serve as a biomarker for PD in these patients. Skin biopsies of 10 PD patients with different GBA1 mutations (six N370S, three E326K, one L444P) were analyzed by double-immunofluorescence labeling with anti-p-syn and anti-protein gene product 9.5 (PGP9.5, axonal marker) to detect intraaxonal p-syn deposition. Four biopsy sites (distal, proximal leg, paravertebral Th10, and C7) per patient were studied. P-syn was found in six patients (three N370S, three E326K). P-syn deposition was mainly detected in autonomic nerve fibers, but also in somatosensory fibers and was not restricted to a certain GBA1 mutation. In summary, dermal p-syn in PD patients with GBA1 mutations seems to offer a similar distribution and frequency as observed in patients without a known mutation. Skin biopsy may be suitable to study p-syn deposition in these patients or even to identify premotor patients with GBA1 mutations. Answer: The presence of alpha-synuclein in the colon has been investigated as a potential biomarker for premotor Parkinson's disease (PD). Some studies have found evidence of alpha-synuclein pathology in colonic tissues before the development of characteristic PD motor symptoms, suggesting it could be a reliable disease biomarker (PUBMED:22550057). 
Immunohistochemical staining for alpha-synuclein in normally appearing colonic tissue has shown positive results in subjects with premotor risk factors for PD (PUBMED:27803984). Additionally, the presence of phosphorylated alpha-synuclein in the colon has been associated with urinary symptoms, which are common premotor markers in the preclinical stage of PD (PUBMED:27803984). However, other studies have reported that conventional immunohistochemistry for alpha-synuclein staining in colon mucosal biopsies cannot serve as a biomarker of idiopathic PD, as there was no intensity difference in staining between PD patients and controls (PUBMED:27324838). Similarly, another study found no significant difference in enteric alpha-synuclein immunoreactivity among patients with PD, patients with multiple system atrophy (MSA), and healthy controls, suggesting a limited role of enteric mucosal alpha-synuclein as a diagnostic biomarker for PD (PUBMED:26686342). The challenges of using enteric alpha-synuclein as a PD biomarker include issues related to the location and quantification of enteric alpha-synuclein, detection with current histological techniques, timing of detection, and distinguishing PD from other related diseases (PUBMED:24375496). Despite the potential of biopsies of the submandibular gland or colon to predict PD through abnormal alpha-synuclein, the lack of means to stop the propagation of abnormal alpha-synuclein limits its current relevance (PUBMED:28776303). In summary, while there is some evidence supporting the presence of alpha-synuclein in the colon as a biomarker for premotor PD, the findings are inconsistent, and further research is needed to establish its utility and reliability as a diagnostic tool (PUBMED:22550057; PUBMED:27803984; PUBMED:27324838; PUBMED:26686342; PUBMED:24375496; PUBMED:28776303).
Instruction: Limitations of GD-EOB-DTPA-enhanced MRI: can clinical parameters predict suboptimal hepatobiliary phase? Abstracts: abstract_id: PUBMED:27842889 Limitations of GD-EOB-DTPA-enhanced MRI: can clinical parameters predict suboptimal hepatobiliary phase? Aim: To establish cut-off levels of clinical parameters that would predict a suboptimal 30-minute delayed hepatobiliary phase (HBP) with high specificity. Materials And Methods: This retrospective study included patients with chronic liver disease who underwent hepatocellular carcinoma screening with Gd-EOB-DTPA-enhanced magnetic resonance imaging (MRI) between 1 January 2011 and 30 November 2014. For each case, HBP was graded as adequate or suboptimal, based on Liver Image Reporting and Data System (LI-RADS) criteria. The following laboratory data obtained within 3 months of the MRI date were extracted: total bilirubin (TB), direct bilirubin (DB), serum glutamic oxaloacetic transaminase (SGOT), serum glutamic-pyruvic transaminase (SGPT), alkaline phosphatase (ALP), albumin, activated partial thromboplastin time (aPTT), and international normalised ratio (INR). Model For End-Stage Liver Disease (MELD) scores were calculated as 3.78×ln[TB] + 11.2×ln[INR] + 9.57×ln[creatinine] + 6.43. Receiver operating characteristic (ROC) curve analysis was used to establish cut-off values for predicting suboptimal HBP. Results: Of 284 patients, 242 (85.2%) patients (91; 57.6% male) had an adequate HBP and 42 (14.8%) patients (13; 61.9% male) had suboptimal HBP, with mean ages of 58.5±9.7 years and 55±12.7 years, respectively (p=0.096). Areas under the ROC curve for predicting suboptimal HBP were 0.85 (95%CI 0.79-0.91) for the MELD score, 0.88 (95%CI 0.82-0.93) for TB, and 0.91 (95%CI 0.86-0.95) for DB. Accuracy, positive likelihood ratios and cut-off values for predicting suboptimal HBP were, respectively: 86.7% and 11.2 for the MELD score ≥16.7, 88.2% and 28.7 for TB ≥4.3 mg/dl, and 91.1% and 36.4 for DB ≥1.3 mg/dl. SGOT, SGPT, and ALP were not statistically significantly different between the groups. Conclusion: Cut-off levels of MELD score, DB, and TB can predict a suboptimal HBP with high accuracy. Prospective identification of patients with a high likelihood of a suboptimal HBP can help to avoid administering a more costly agent to patients who would not benefit from its unique properties. abstract_id: PUBMED:24151218 Feasibility of semiautomated MR volumetry using gadoxetic acid-enhanced MRI at hepatobiliary phase for living liver donors. Purpose: To assess the feasibility of semiautomated MR volumetry using gadoxetic acid-enhanced MRI at the hepatobiliary phase compared with manual CT volumetry. Methods: Forty potential live liver donor candidates who underwent MR and CT on the same day were included in our study. Semiautomated MR volumetry was performed using gadoxetic acid-enhanced MRI at the hepatobiliary phase. We performed the quadratic MR image division for correction of the bias field inhomogeneity. With manual CT volumetry as the reference standard, we calculated the average volume measurement error of the semiautomated MR volumetry. We also calculated the mean of the number and time of the manual editing, edited volume, and total processing time. Results: The average volume measurement errors of the semiautomated MR volumetry were 2.35% ± 1.22%.
The average values of the numbers of editing, operation times of manual editing, edited volumes, and total processing time for the semiautomated MR volumetry were 1.9 ± 0.6, 8.1 ± 2.7 s, 12.4 ± 8.8 mL, and 11.7 ± 2.9 s, respectively. Conclusion: Semiautomated liver MR volumetry using hepatobiliary phase gadoxetic acid-enhanced MRI with the quadratic MR image division is a reliable, easy, and fast tool to measure liver volume in potential living liver donors. abstract_id: PUBMED:28742376 Hypervascular Transformation of Hypovascular Hypointense Nodules in the Hepatobiliary Phase of Gadoxetic Acid-Enhanced MRI: A Systematic Review and Meta-Analysis. Objective: The purpose of this study is to evaluate the outcomes of hypovascular hypointense nodules in the hepatobiliary phase of gadoxetic acid-enhanced MRI and the risk factors for the hypervascular transformation of the nodules through a systematic review and meta-analysis. Materials And Methods: We searched the Ovid-MEDLINE and EMBASE databases for published studies of hypovascular hypointense nodules in patients with chronic liver disease. The pooled proportions of the overall and cumulative incidence rates at 1, 2, and 3 years for the transformation of hypovascular hypointense nodules into hypervascular hepatocellular carcinomas (HCCs) were assessed by using random-effects modeling. Metaregression analysis was performed. Results: Sixteen eligible studies with 944 patients and 1819 hypovascular hypointense nodules in total were included. The pooled overall rate of hypervascular transformation was 28.2% (95% CI, 22.7-33.6%; I2 = 87.46%). The pooled 1-, 2-, and 3-year cumulative incidence rates were 18.3% (95% CI, 9.2-27.4%), 25.2% (95% CI, 12.2-38.2%), and 30.3% (95% CI, 18.8-41.9%), respectively. The metaregression analysis revealed that the mean initial nodule size (cutoff value, 9 mm) was a significant factor affecting the heterogeneity of malignant transformation. Conclusion: Hypovascular hypointense nodules detected in the hepatobiliary phase of gadoxetic acid-enhanced MRI carry a significant potential of transforming into hypervascular HCCs. The size of nodules is a significant risk factor for hypervascular transformation. abstract_id: PUBMED:27547685 Efficacy of the projection onto convex sets (POCS) algorithm at Gd-EOB-DTPA-enhanced hepatobiliary-phase hepatic MRI. Purpose: To investigate the efficacy of the projection onto convex sets (POCS) algorithm at Gd-EOB-DTPA-enhanced hepatobiliary-phase MRI. Methods: In phantom study, we scanned a phantom and obtained images by conventional means (P1 images), by partial-Fourier image reconstruction (PF, P2 images) and by PF with the POCS algorithm (P3 images). Then we acquired and compared subtraction images (P2-P1 images and P3-P1 images). In clinical study, 55 consecutive patients underwent Gd-EOB-DTPA (EOB)-enhanced 3D hepatobiliary-phase MRI on a 1.5T scanner. Images were obtained using conventional method (C1 images), PF (C2 images), and PF with POCS (C3 images). The acquisition time was 17-, 14-, and 14 s for protocols C1, C2 and C3, respectively. Two radiologists assigned grades for hepatic vessel sharpness and we compared the visual grading among the 3 protocols. And one radiologist compared signal-to-noise-ratio (SNR) of the hepatic parenchyma. Results: In phantom study, there was no difference in signal intensity on a peripheral phantom column on P3-P1 images. In clinical study, there was no significant difference between C1 and C3 images (2.62 ± 0.49 vs. 
2.58 ± 0.49, p = 0.70) in the score assigned for vessel sharpness or in SNR (13.3 ± 2.67 vs. 13.1 ± 2.51, p = 0.18). Conclusion: The POCS algorithm makes it possible to reduce the scan time of the hepatobiliary phase (from 17 to 14 s) without reducing SNR and without increasing artifacts. abstract_id: PUBMED:34298844 Characteristics and Lenvatinib Treatment Response of Unresectable Hepatocellular Carcinoma with Iso-High Intensity in the Hepatobiliary Phase of EOB-MRI. In hepatocellular carcinoma (HCC), CTNNB-1 mutations, which cause resistance to immune checkpoint inhibitors, are associated with HCC with iso-high intensity in the hepatobiliary phase of gadoxetic acid-enhanced magnetic resonance imaging (EOB-MRI) in resectable HCC; however, analyses on unresectable HCC are lacking. This study analyzed the prevalence, characteristics, response to lenvatinib, and CTNNB-1 mutation frequency in unresectable HCC with iso-high intensity in the hepatobiliary phase of EOB-MRI. In 52 patients with unresectable HCC treated with lenvatinib, the prevalence of iso-high intensity in the hepatobiliary phase of EOB-MRI was 13%. All patients had multiple HCCs, and 3 patients had multiple HCCs with iso-high intensity in the hepatobiliary phase of EOB-MRI. Lenvatinib response, progression-free survival, and overall survival were similar between patients with or without iso-high intensity in the hepatobiliary phase of EOB-MRI. Seven patients (three and four patients who had unresectable HCC with or without iso-high intensity in the hepatobiliary phase of EOB-MRI, respectively) underwent genetic analyses. Among these, two (67%, 2/3) who had HCC with iso-high intensity in the hepatobiliary phase of EOB-MRI carried a CTNNB-1 mutation, while all four patients who had HCC without iso-high intensity in the hepatobiliary phase of EOB-MRI did not carry the CTNNB-1 mutation. This study's findings have clinical implications for the detection and treatment of HCC with iso-high intensity in the hepatobiliary phase of EOB-MRI. abstract_id: PUBMED:37404221 A case of focal nodular hyperplasia-like lesion presenting unusual signal intensity on the hepatobiliary phase of gadoxetic acid-enhanced magnetic resonance image. Focal nodular hyperplasia (FNH) or FNH-like lesions of the liver are benign lesions that can be mostly diagnosed by hepatobiliary phase gadoxetic acid-enhanced magnetic resonance imaging (MRI). Accurate imaging diagnosis is based on the fact that most FNHs or FNH-like lesions show characteristic hyper- or isointensity on hepatobiliary phase images. We report a case of an FNH-like lesion in a 73-year-old woman that mimicked a malignant tumor. Dynamic contrast-enhanced computed tomography (CT) and MRI using gadoxetic acid revealed an ill-defined nodule showing early enhancement in the arterial phase and gradual and prolonged enhancement in the portal and equilibrium/transitional phases. Hepatobiliary phase imaging revealed inhomogeneous hypointensity, accompanied by a slightly isointense area compared to the background liver. Angiography-assisted CT showed a portal perfusion defect of the nodule, inhomogeneous arterial blood supply in the early phase, and less internal enhancement in the late phase, accompanied by irregularly shaped peritumoral enhancement. No central stellate scar was identified in any of the images. Imaging findings could not exclude the possibility of hepatocellular carcinoma, but the nodule was pathologically diagnosed as an FNH-like lesion by partial hepatectomy.
In the present case, an unusual inhomogeneous hypointensity on hepatobiliary phase imaging made it difficult to diagnose the FNH-like lesion. abstract_id: PUBMED:37986292 Suboptimal hepatobiliary phase image in gadoxetic acid-enhanced liver MRI for the evaluation of the HCC: Predictive factors. To determine the relevant laboratory values for hepatobiliary phase (HBP) imaging and predictive factors for suboptimal HBP images on gadoxetic acid-enhanced liver magnetic resonance imaging (MRI) for the evaluation of hepatocellular carcinoma (HCC) in patients with chronic liver disease (CLD). This study included 307 patients with CLD who underwent gadoxetic acid-enhanced liver MRI for HCC evaluation. The liver-portal vein contrast ratio and liver-spleen contrast ratio were calculated from the measurements of the HBP images. In this study, a suboptimal HBP image was defined as the presence of a bright portal vein or a liver-spleen contrast ratio of <1.5. Correlation, comparison, and receiver operating characteristic analyses were performed between the measured parameters on the HBP images and hepatic and renal function tests. The estimated glomerular filtration rate did not correlate with any measured or calculated values on the HBP images. On receiver operating characteristic analysis, the optimal cutoff value for the bright portal vein was an albumin level of 4.05 g/dL (area under the curve, 0.971; sensitivity, 65%; specificity, 82%). The optimal cutoff value of the suboptimal HBP image was a serum direct bilirubin level of 0.83 mg/dL (area under the curve, 0.830; sensitivity, 69%; specificity, 84%). On gadoxetic acid-enhanced MRI for the evaluation of HCC in patients with CLD, suboptimal HBP images were most strongly correlated with serum direct bilirubin levels. Renal function was not associated with suboptimal HBP imaging. Although the sensitivity is low, suboptimal HBP images can be predicted before gadoxetic acid-enhanced liver MRI is performed. abstract_id: PUBMED:24637082 Visualization of liver uptake function using the uptake contrast-enhanced ratio in hepatobiliary phase imaging. Purpose: To visualize liver uptake function using the uptake contrast-enhanced ratio in hepatobiliary phase (uptake CERH) magnetic resonance imaging. Materials And Methods: Thirty-seven patients with hepatocellular carcinoma (HCC) and 23 with metastatic liver cancer were evaluated. Hepatobiliary phase images were acquired 20 min after an intravenous bolus injection of gadoxetic acid disodium. We assumed that the contrast-enhanced ratio in the hepatobiliary phase (CERH) in the spleen was similar to the contrast-enhanced ratio in the extracellular matrix (CEREM). The uptake CERH value was defined as the percentage signal gain between the precontrast and hepatobiliary phase images (without CEREM). The uptake CERH value measured the tumor-free liver parenchyma. The association of the uptake CERH value with the biochemical liver function test results and hepatocellular density in the liver parenchyma was assessed. Correlations were examined using the Pearson correlation coefficient and the Mann-Whitney test. Results: The uptake CERH value was correlated with albumin, bilirubin, indocyanine green retention rate at 15 min, prothrombin activity (%), platelet count, and cellular density in the liver parenchyma (p<0.01). Conclusions: Uptake CERH images are useful for visualizing liver uptake function.
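For illustration of the quantitative definitions used in the two abstracts above, the following minimal Python sketch computes hepatobiliary-phase contrast ratios from mean region-of-interest (ROI) signal intensities and applies the suboptimal-HBP criterion reported in PUBMED:37986292 (bright portal vein or a liver-to-spleen contrast ratio below 1.5). The percentage-signal-gain function is only one plausible reading of the uptake CERH definition in PUBMED:24637082; all function names and example values are illustrative assumptions, not material from the studies.

# Illustrative sketch, not code from the cited studies.

def liver_spleen_ratio(si_liver_hbp: float, si_spleen_hbp: float) -> float:
    """Liver-to-spleen contrast ratio (LSC) on the hepatobiliary phase."""
    return si_liver_hbp / si_spleen_hbp

def liver_portal_ratio(si_liver_hbp: float, si_portal_vein_hbp: float) -> float:
    """Liver-to-portal-vein contrast ratio (LPC) on the hepatobiliary phase."""
    return si_liver_hbp / si_portal_vein_hbp

def percent_signal_gain(si_pre: float, si_hbp: float) -> float:
    """Percentage signal gain between precontrast and hepatobiliary phase images;
    one plausible reading of the 'uptake CERH' definition (assumption)."""
    return 100.0 * (si_hbp - si_pre) / si_pre

def is_suboptimal_hbp(lsc: float, bright_portal_vein: bool, threshold: float = 1.5) -> bool:
    """Suboptimal HBP per PUBMED:37986292: bright portal vein or LSC < 1.5."""
    return bright_portal_vein or lsc < threshold

if __name__ == "__main__":
    # Hypothetical ROI measurements in arbitrary units, for demonstration only.
    lsc = liver_spleen_ratio(si_liver_hbp=410.0, si_spleen_hbp=300.0)
    lpc = liver_portal_ratio(si_liver_hbp=410.0, si_portal_vein_hbp=190.0)
    gain = percent_signal_gain(si_pre=250.0, si_hbp=410.0)
    print(f"LSC={lsc:.2f}, LPC={lpc:.2f}, signal gain={gain:.1f}%")
    print("Suboptimal HBP:", is_suboptimal_hbp(lsc, bright_portal_vein=False))

In this hypothetical example the LSC is 1.37, so the case would be flagged as suboptimal under the reported criterion even without a bright portal vein.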
abstract_id: PUBMED:37711824 Effect of body mass index (BMI) on image contrast in the hepatobiliary phase of Gd-EOB-DTPA-enhanced-MRI and the feasibility of the application of half-dose Gd-EOB-DTPA to hepatobiliary phase imaging in patients with a BMI less than 24: a comparative study. Background: Gadolinium-ethoxybenzyl-diethylenetriamine-pentaacetic acid (Gd-EOB-DTPA)-enhanced magnetic resonance imaging (MRI) can detect more lesions through the image contrast of the hepatobiliary phase. Body mass index (BMI) reflects the composition ratio of human tissue, which is an influencing factor of magnetic resonance image contrast. Meanwhile, it is recommended to use the minimum dose of Gd-EOB-DTPA that meets diagnostic demands. The aim of this paper was to investigate the effect of BMI on hepatobiliary phase image contrast and explore the feasibility of using low-dose Gd-EOB-DTPA to obtain good hepatobiliary phase image contrast in patients with normal and lean BMI. Methods: Eighty-two patients who had previously undergone Gd-EOB-DTPA-enhanced MRI (0.025 mmol/kg) were included and divided into group A (BMI <24 kg/m2) and group B (BMI ≥24 kg/m2) according to Chinese BMI standards. Liver-to-portal vein contrast ratio (LPC20) and liver-to-spleen contrast ratio (LSC20) in the hepatobiliary phase (20 min after injection) were calculated. Thirty patients with a BMI <24 kg/m2 who were about to receive Gd-EOB-DTPA-enhanced MRI were randomly divided into group C (0.0125 mmol/kg) and group D (0.025 mmol/kg). Image acquisition was performed at 10, 15, and 20 min after injection. LPC10, LPC15, LPC20 and LSC10, LSC15, LSC20 in the corresponding phases were calculated. Results: In the retrospective grouping study, compared with group B, group A's LPC20 was significantly higher [2.63 (2.42-3.00) vs. 2.22 (1.97-2.67); P<0.01]. In the prospective grouping study, there were no differences in LPC15, LSC15, LPC20 and LSC20 between group C and group D. Intragroup comparison in each group showed that LPC15 (group C: 2.67±0.33; group D: 2.61±0.21) and LPC20 (group C: 2.74±0.37; group D: 2.72±0.27) were higher than LPC10 (group C: 2.19±0.18; group D: 1.94±0.17) (all P<0.01), while there were no changes between LPC15 and LPC20. Conclusions: Under the conventional dose, hepatobiliary phase image contrast in patients with a BMI <24 was higher, which was mainly manifested in the high LPC. For patients with a BMI <24 kg/m2, using a half conventional dose (0.0125 mmol/kg), good hepatobiliary phase image contrast can still be obtained at 15-20 min after administration. abstract_id: PUBMED:35284527 Gd-EOB-DTPA-enhanced MRI radiomic features for predicting histological grade of hepatocellular carcinoma. Background: Prediction models for the histological grade of hepatocellular carcinoma (HCC) remain unsatisfactory. The purpose of this study is to develop preoperative models to predict the histological grade of HCC based on gadolinium-ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA)-enhanced magnetic resonance imaging (MRI) radiomics, and to compare the performance of an artificial neural network (ANN) and a logistic regression model. Methods: A total of 122 HCCs were randomly assigned to the training set (n=85) and the test set (n=37). A total of 242 radiomic features were extracted from volumes of interest (VOI) on arterial and hepatobiliary phase images.
The radiomic features and clinical parameters [gender, age, alpha-fetoprotein (AFP), carcinoembryonic antigen (CEA), carbohydrate antigen 19-9 (CA19-9), alanine aminotransferase (ALT), aspartate transaminase (AST)] were selected by permutation test and decision tree. ANN of arterial phase (ANN-AP), logistic regression model of arterial phase (LR-AP), ANN of hepatobiliary phase (ANN-HBP), logistic regression model of hepatobiliary phase (LR-HBP), ANN of combined arterial and hepatobiliary phases (ANN-AP + HBP), and logistic regression model of combined arterial and hepatobiliary phases (LR-AP + HBP) were built to predict HCC histological grade. Those prediction models were assessed and compared. Results: ANN-AP and LR-AP were composed of AST and radiomic features based on the arterial phase. ANN-HBP and LR-HBP were composed of AFP and radiomic features based on the hepatobiliary phase. ANN-AP + HBP and LR-AP + HBP were composed of AST and radiomic features based on the arterial and hepatobiliary phases. The prediction models could distinguish between high-grade tumors [Edmondson-Steiner (E-S) grade III and IV] and low-grade tumors (E-S grade I and II) in both the training set and the test set. In the test set, the AUCs of ANN-AP, LR-AP, ANN-HBP, LR-HBP, ANN-AP + HBP and LR-AP + HBP were 0.889, 0.777, 0.941, 0.819, 0.944 and 0.792, respectively. The ANN-HBP was significantly superior to LR-HBP (P=0.001), and the ANN-AP + HBP was significantly superior to LR-AP + HBP (P=0.007). Conclusions: Prediction models consisting of clinical parameters and Gd-EOB-DTPA-enhanced MRI radiomic features (based on arterial phase, hepatobiliary phase, and combined arterial and hepatobiliary phases) could distinguish between high-grade HCCs and low-grade HCCs, and the ANN was superior to the logistic regression model in predicting the histological grade of HCC. Answer: Yes, clinical parameters can predict suboptimal hepatobiliary phase (HBP) in Gd-EOB-DTPA-enhanced MRI. A study aimed to establish cut-off levels of clinical parameters that would predict a suboptimal 30-minute delayed HBP with high specificity. The study included patients with chronic liver disease who underwent hepatocellular carcinoma screening with Gd-EOB-DTPA-enhanced MRI. The HBP was graded as adequate or suboptimal based on Liver Image Reporting and Data System (LI-RADS) criteria. Clinical parameters such as total bilirubin (TB), direct bilirubin (DB), and Model For End-Stage Liver Disease (MELD) scores were analyzed. The study found that areas under the receiver operating characteristic (ROC) curve for predicting suboptimal HBP were high for the MELD score, TB, and DB. Cut-off values for predicting suboptimal HBP with high accuracy were identified for MELD score (≥16.7), TB (≥4.3 mg/dl), and DB (≥1.3 mg/dl). Other parameters like serum glutamic oxaloacetic transaminase (SGOT), serum glutamic-pyruvic transaminase (SGPT), and alkaline phosphatase (ALP) were not statistically significantly different between the groups. The conclusion was that cut-off levels of MELD score, DB, and TB can predict a suboptimal HBP with high accuracy, and prospective identification of patients with a high likelihood of suboptimal HBP can help to avoid administering a more costly agent to patients who would not benefit from its unique properties (PUBMED:27842889).
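To make the arithmetic behind this answer concrete, the following minimal Python sketch applies the MELD formula quoted in PUBMED:27842889 and the reported cut-offs (MELD ≥16.7, TB ≥4.3 mg/dl, DB ≥1.3 mg/dl) to flag patients with a high likelihood of a suboptimal HBP. It is an illustrative sketch, not code from the study; variable names and the example laboratory values are assumptions.

import math

def meld_score(total_bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float) -> float:
    """MELD = 3.78*ln(TB) + 11.2*ln(INR) + 9.57*ln(creatinine) + 6.43, as quoted above."""
    return (3.78 * math.log(total_bilirubin_mg_dl)
            + 11.2 * math.log(inr)
            + 9.57 * math.log(creatinine_mg_dl)
            + 6.43)

def likely_suboptimal_hbp(meld: float, total_bilirubin: float, direct_bilirubin: float) -> bool:
    """Flag patients at or above any reported cut-off: MELD >= 16.7, TB >= 4.3 mg/dl, DB >= 1.3 mg/dl."""
    return meld >= 16.7 or total_bilirubin >= 4.3 or direct_bilirubin >= 1.3

if __name__ == "__main__":
    tb, inr, cr, db = 4.8, 1.6, 1.1, 1.5   # hypothetical laboratory values
    score = meld_score(tb, inr, cr)
    print(f"MELD score: {score:.1f}")
    print("Likely suboptimal HBP:", likely_suboptimal_hbp(score, tb, db))

Because the study chose its cut-offs for high specificity, a rule like this would mainly serve as a pre-screen: patients below all thresholds can still turn out to have suboptimal HBP images.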
Instruction: The introduction of Greek Central Health Fund: Has the reform met its goal in the sector of Primary Health Care or is there a new model needed? Abstracts: abstract_id: PUBMED:35848796 Geographic accessibility to childhood tuberculosis care in Pakistan. Background: Tuberculosis (TB) in children is difficult to detect and often needs specialists to diagnose; the health system is supposed to refer to a higher level of health care when the diagnosis is not settled in a sick child. In Pakistan, the primary health care level can usually not diagnose childhood TB and will refer to a paediatrician working at a secondary or tertiary care hospital. We aimed to determine access to child TB services in Pakistan. Objective: We aimed to determine the geographical access to child TB services in Pakistan. Method: We used geospatial analysis to calculate the distance from the nearest public health facility to settlements, using qGIS, as well as the population living within the World Health Organization's (WHO) recommended 5-km distance. Result: At the primary health care level, 14.1% of facilities report child TB cases to the national tuberculosis program and 74% of the population had geographical access to general primary health care within a 5-km radius.
Self-rating Anxiety Scale (SAS) and Self-rating Depression Scale (SDS) scores were not significantly different between the two groups before management. After management, SAS and SDS scores decreased significantly in both groups, and were lower in the observation group compared with the control group. The management satisfaction was also significantly higher in the observation group. Conclusion: Compared with the conventional continuation care model, the whole-process management in primary hospitals can improve patients' compliance with medical advice and treatment efficacy, with lower risk of bleeding and higher patient satisfaction, providing a better option for the out-of-hospital management of anticoagulation for non-valvular atrial fibrillation patients. Age, hypertension, diabetes, knowledge of warfarin anticoagulation and medication compliance were independent risk factors for the effect of warfarin anticoagulation in patients with non-valvular atrial fibrillation. abstract_id: PUBMED:22398477 Newfoundland and Labrador: 80/20 staffing model pilot in a long-term care facility. This project, based in Newfoundland and Labrador's Central Regional Health Authority, is the first application of an 80/20 staffing model to a long-term care facility in Canada. The model allows nurse participants to spend 20% of their paid time pursuing a professional development activity instead of providing direct patient care. Newfoundland and Labrador has the highest aging demographic in Canada owing, in part, to the out-migration of younger adults. Recruiting and retaining nurses to work in long-term care in the province is difficult; at the same time, the increasing acuity of long-term care residents and their complex care needs mean that nurses must assume greater leadership roles in these facilities. This project set out to increase capacity for registered nurse (RN) leadership, training and support and to enhance the profile of long-term care as a place to work. Six RNs and one licensed practical nurse (LPN) participated and engaged in a range of professional development activities. Several of the participants are now pursuing further nursing educational activities. Central Health plans to continue a 90/10 model for one RN and one LPN per semester, with the timeframe to be determined. The model will be evaluated and, if it is deemed successful, the feasibility of implementing it in other sites throughout the region will be explored. abstract_id: PUBMED:30291739 Making It Happen: Middle Managers' Roles in Innovation Implementation in Health Care. Background: Middle managers are given scant attention in the implementation literature in health care, where the focus is on senior leaders and frontline clinicians. Aims: To empirically examine the role of middle managers relevant to innovation implementation and how middle managers experience the implementation process. Methods: A qualitative study was conducted using the methods of grounded theory. Data were collected through semistructured interviews with middle managers (N = 15) in Nova Scotia and New Brunswick, Canada. Participants were purposively sampled, based on their involvement in implementation initiatives and to obtain variation in manager characteristics. Data were collected and analyzed concurrently, using an inductive constant comparative approach. Data collection and analysis continued until theoretical saturation was reached. 
Results: Middle managers see themselves as being responsible for making implementation happen in their programs and services. As a result, they carry out five roles related to implementation: planner, coordinator, facilitator, motivator, and evaluator. However, the data also revealed two determinants of middle managers' role in implementation, which they must negotiate to fulfill their specific implementation roles and activities: (1) They perform many other roles and responsibilities within their organizations, both clinical and managerial in nature, and (2) they have limited decision-making power with respect to implementation and must work within the parameters set by upper levels of the organization. Linking Evidence To Action: Middle managers play an important role in translating adoption decisions into on-the-ground implementation. Optimizing their capacity to fulfill this role may be key to improving innovation implementation in healthcare organizations. abstract_id: PUBMED:2807299 Health educators in India--a profile. While India has a long history of health education, its formal integration into health care services is just three decades old. In addition to providing health education through traditional or informal methods, the country has been stimulating the integration of health education through a professionally trained group of health workers, commonly called: "Health Educators". The impression remains, however, that "information giving" is the primary goal and function of a health educator, and, more often than not, his activities remain confined to publicity. This leads to a debate of the issue of: "who should conduct health education activities, a health education specialist or some other health worker"? abstract_id: PUBMED:36429653 What Is Next for Public Health after COVID-19 in Italy? Adopting a Youth-Centred Care Approach in Mental Health Services. Although endeavours to protect mental well-being during the COVID-19 pandemic were taken at national and regional levels, e.g., mental support in school, a COVID-19 emergency toll-free number for psychological support, these were sporadic conjunctural financing interventions. In this Communication, the authors conducted a systematic search for programmatic and policy documents and reports with a solid literature and policy analysis concerning the main objective, which is to analyse the appropriateness in implementing gender- and age-sensitive, integrated, youth-centred mental health services in Italy. The Italian National Action Plan for Mental Health reports a highly fragmented situation in the Child and Adolescent Neuropsychiatry services, in terms of an integrated and comprehensive regional network of services for the diagnosis, treatment, and rehabilitation of neuropsychological disorders in young people. Wide-ranging interventions, systemic actions should be implemented, funded, and included in an overall structural strengthening of the healthcare system, including those dedicated to transition support services. In this context, the National Recovery and Resilience Plan (NRRP), may represent an opportunity to leverage specific funds for mental health in general, and for youth in particular. Finally, mental health service governance should be harmonized at both national and regional EU levels-with the adoption of best practices implemented by other Member States. 
This includes, among others, health information system and data collection, which is critical for analysing epidemiological trends and for monitoring and evaluating services, to offer a public and integrated system for the care and protection of young people, in line with the Convention on the Rights of the Child. abstract_id: PUBMED:32493357 Key factors for national spread and scale-up of an eConsult innovation. Background: Expanding healthcare innovations from the local to national level is a complex pursuit requiring careful assessment of all relevant factors. In this study (a component of a larger eConsult programme of research), we aimed to identify the key factors involved in the spread and scale-up of a successful regional eConsult model across Canada. Methods: We conducted a constant comparative thematic analysis of stakeholder discussions captured during a full-day National eConsult Forum meeting held in Ottawa, Canada, on 11 December 2017. Sixty-four participants attended, representing provincial and territorial governments, national organisations, healthcare providers, researchers and patients. Proceedings were recorded, transcribed and underwent qualitative analysis using the Framework for Applied Policy Research. Results: This study identified four main themes that were critical to support the intentional efforts to spread and scale-up eConsult across Canada, namely (1) identifying population care needs and access problems, (2) engaging stakeholders who were willing to roll up their sleeves and take action, (3) building on current strategies and policies, and (4) measuring and communicating outcomes. Conclusions: Efforts to promote innovation in healthcare are more likely to succeed if they are based on an understanding of the forces that drive the spread and scale-up of innovation. Further research is needed to develop and strengthen the conceptual and applied foundations of the spread and scale-up of healthcare innovations, especially in the context of emergent learning health systems across Canada and beyond. abstract_id: PUBMED:32668599 Individual Health Budgets in Mental Health: Results of Its Implementation in the Friuli Venezia Giulia Region, Italy. Background: Individual Health Budget (IHB) is an intervention for recovery in mental health services, providing personalized care for subjects with severe disorders and complex needs. Little is known on its effectiveness and on the criteria for its delivery. Methods: A total of 67 IHB beneficiaries and 61 comparators were recruited among service users of the Mental Health Department of the Trieste Healthcare Agency, Italy. Data included sociodemographic and clinical variables, type of IHB, and Health of the Nation Outcome Scale (HoNOS) scores. Results: A comparison between groups showed significant differences in several socioeconomic and clinical characteristics. Multivariate logistic regression showed that IHB was positively associated to the 20-49 age group, single status, unemployment, low family support, cohabitation with relatives or friends, diagnosis of personality disorder, and a higher number of hospitalizations. The IHB group was at a higher risk of severe problems related to aggressive or agitated behaviors (OR = 1.4), hallucinations and delusions (OR = 1.5), and impairment in everyday life activities (OR = 2.1). Conclusions: IHB was used in patients with severe clinical and social problems. More resources, however, may be aimed at the working and social axes. 
More research is needed to better assess clinical and social outcomes of IHB and to adjust their intensity in a longitudinal perspective in order to enhance cost-effectiveness. abstract_id: PUBMED:35087352 The Evolving Roles of Nurses Providing Care at Home: A Qualitative Case Study Research of a Transitional Care Team. Purpose: To examine the roles of transitional care nurses in an integrated healthcare system and how the integrated healthcare system influences their evolving roles. Background: Transitional care teams have been introduced to enable the seamless transfer of patients from acute-care to the home settings. A qualitative case study of the transitional care team was conducted to understand the changing roles of these nurses in an integrated Regional Health System (RHS) in Singapore. Methods: A hospital transitional team of an integrated RHS was studied. Purposive sampling was used. Non-participant observations and follow-up interviews were conducted with four nurses. Data were triangulated with the interviews of two managers and three healthcare professionals, and the analysis of documents. Within-case thematic analysis was carried out. Results: Three themes were identified: 'Coming together to meet the needs of all'; 'Standing strong amidst the stormy waves'; and 'Searching for the right formula in handling complexity'. These themes have explained on the atypical roles taken on by nurses in their attempts to close the gaps and meet the patients' needs. Various factors influencing the evolving roles were revealed. Conclusion: The roles of nurses have 'emerged differently' from their traditional counterparts. Various nursing roles have been undertaken to facilitate care integration. The findings emphasised the important balance between formal structural practices and informal processes in facilitating and supporting the nurses in their role development. abstract_id: PUBMED:32486888 Enabling patient-centred policy for electronic consultations: A qualitative analysis of discussions from a stakeholder meeting. Introduction: To support the expansion of a successful regional electronic consultation (eConsult) service, we hosted a full-day national eConsult Policy Think Tank, connecting health-services researchers, clinicians, patients and policymakers to discuss policy considerations related to eConsult. In this paper, we assess the discussion arising from the Think Tank to identify and understand the policy enablers and barriers to the national spread and scale of eConsult services across Canada. Methods: We conducted a constant comparative thematic analysis of stakeholder discussions captured during the Think Tank held in Ottawa, Canada, on 5 December 2016. Forty-seven participants attended and debated the following topic areas: (a) delivery of services and standards; (b) payment considerations; and (c) equitable access. The meeting was recorded, and verbatim transcripts were analysed using qualitative approaches. Results: We identified four themes affecting spread and scale of eConsult innovation from a policy perspective: (a) patient-centredness; (b) value; (c) regulation; and (d) considerations for spread and scale. Patient-centredness was viewed as a foundational principle upon which policy shifts should be guided. 
Active participation of patient partners transitioned the discussions and resulting recommendations from provider-centred to patient-centred thinking around the relevant policy issues, explicitly demonstrating the importance of patient involvement in healthcare policy decision making. Discussion: eConsult was viewed as a high-value, disruptive innovation with great potential to transform access to specialists in Canada. A patient-centred approach to policy change (and not just healthcare delivery) was identified as a novel yet critical enabler to the scale and spread of eConsult across Canada. Answer: The provided abstracts do not directly address the introduction of the Greek Central Health Fund or its impact on the sector of Primary Health Care in Greece. Therefore, based on the available abstracts, it is not possible to determine whether the reform has met its goal in the sector of Primary Health Care in Greece or if there is a need for a new model. The abstracts discuss various health care topics, including geographic accessibility to childhood tuberculosis care in Pakistan (PUBMED:35848796), the application value of whole-course health management for patients with nonvalvular atrial fibrillation (PUBMED:35702076), an 80/20 staffing model pilot in a long-term care facility in Canada (PUBMED:22398477), the roles of middle managers in innovation implementation in health care (PUBMED:30291739), the profile of health educators in India (PUBMED:2807299), adopting a youth-centered care approach in mental health services in Italy post-COVID-19 (PUBMED:36429653), key factors for the national spread and scale-up of an eConsult innovation in Canada (PUBMED:32493357), the implementation of Individual Health Budgets in mental health in the Friuli Venezia Giulia Region, Italy (PUBMED:32668599), the evolving roles of nurses providing care at home in an integrated healthcare system in Singapore (PUBMED:35087352), and enabling patient-centered policy for electronic consultations (PUBMED:32486888). To answer the question regarding the Greek Central Health Fund and its impact on Primary Health Care, one would need to consult literature and data specifically related to the Greek healthcare system, the implementation of the Greek Central Health Fund, and its outcomes in the context of Primary Health Care. This would involve analyzing policy documents, health service performance metrics, patient outcomes, and possibly stakeholder interviews or surveys within the Greek healthcare context.
Instruction: Is damage control orthopedics essential for the management of bilateral femoral fractures associated or complicated with shock? Abstracts: abstract_id: PUBMED:20009694 Is damage control orthopedics essential for the management of bilateral femoral fractures associated or complicated with shock? An animal study. Background: The maximum score of a single anatomic system, the Injury Severity Score, may not reflect the overall damage inflicted by bilateral femoral fractures and justify the strategy of damage control orthopedics (DCO). It is necessary to investigate the effects of various therapeutic procedures on such fractures with or without shock to facilitate correct decision making on DCO. Methods: A model of bilateral femoral fractures was made in 36 of 48 male New Zealand White rabbits. A model of bilateral femoral shaft fractures associated with shock was made. After resuscitation, a reamed intramedullary nailing fixation was performed in the first group (IM group), an external fixation device was applied in the second group (EF group), and the fractures in the third group (control group) were supported with splints only. The animals were divided into four groups: shock with IM nailing (shock-IM), shock with external fixation (shock-EF), shock with conservative method (shock-Cons), and intramedullary nailing without shock (nonshock-IM). Vital signs and inflammatory reactions were recorded. Thirty-six hours after the therapeutic procedures in the four groups, the animals were killed for histologic evaluation. Results: The changes in vital signs were most significant in the shock-IM group (p < 0.05). The exaggerated interleukin-6, interleukin-10, and tumor necrosis factor alpha concentrations demonstrated a significant difference between the shock-IM group and the other groups (p < 0.05). As to histologic appearances, the statistical difference varied from organ to organ. There was a highly significant difference when the IM group was compared with the other two groups as far as the lungs were concerned. As to the liver, there was only a significant difference between the IM group and the control group. In terms of kidney and heart, there was no significant difference across the groups. Among the shock groups, there was a highly significant difference in the lungs between the shock-IM group and the other three groups, and a significant difference in the liver between the shock-IM group and the shock-Cons group (p < 0.05). Kidneys and heart were less affected across the groups. Conclusions: In this study, an early reamed intramedullary nailing fixation procedure resulted in more adverse effects on systemic stress, inflammatory response, and multiple organs. The injuries also caused histologic damage to the lungs and liver. Therefore, early reamed intramedullary nailing fixation may pose a potential risk of developing complications, and adopting the DCO strategy may be preferable. Shock and IM combined caused the most severe damage, followed by IM without shock, shock plus EF, and shock plus conservative procedure, in that order. If IM must be used for some reason, it is desirable that it be delayed until shock has been fully controlled and vasculorespiratory stability restored. abstract_id: PUBMED:19756456 Damage Control Orthopedics. What is the current situation? Damage Control Orthopedics is a strategy for treatment of fractures in severely injured patients. The aim is to reduce secondary damage and thereby improve the patient's outcome.
The relevant fractures are primarily stabilized with external fixators instead of a primary definitive osteosynthesis. The less traumatic and shorter surgical procedure is thought to reduce the additional trauma load and should thereby minimize the "second hit" situation. After stabilization of the patient in the intensive care unit, secondary definitive osteosynthesis can then be performed after 4-14 days. The available animal studies, retrospective clinical studies and prospective cohort studies seem to support the concept of damage control. The only available randomized study shows an advantage of this strategy in a subgroup of borderline patients. A meta-analysis could not find convincing evidence that definitively proves the advantage of this concept. A new multi-center randomized study has been started to evaluate the concept of damage control in a defined group of critically injured patients with femoral shaft fractures. abstract_id: PUBMED:32351834 Effective Management of Femur Fracture Using Damage Control Orthopedics Following Fat Embolism Syndrome. Fat embolism syndrome (FES) is a rare event following a traumatic injury, and its pathophysiologic mechanism continues to be elusive. Fat embolism syndrome generally occurs when bone marrow fat enters the bloodstream, resulting in a cascade of inflammatory response, hyper-coagulation, and an array of symptoms that generally begin within 24-48 hours. Early symptoms of FES include petechial rash, shortness of breath, altered mental status, seizures and fever, and it may result in decreased urine output. The common etiologies of a fat embolism include long bone fractures, mainly femoral and pelvic fractures. There are multiple management methods described in the literature to help prevent FES and other long bone fracture complications from occurring. Although not universally adopted, damage control orthopedics (DCO) has been the major management option for patients with a long bone fracture. DCO involves provisional immobilization of patients with long bone fractures and of those considered severely traumatized patients (STP). Thus, immobilization can help minimize the traumatic effect and the subsequent second hit from non-life-saving surgical procedures. In this case, a patient with a transverse femur fracture suffered disconcerting symptoms of fat embolism prior to definitive femur repair. Hence, damage control orthopedics was employed, with postponement of his femur repair to facilitate stabilization. The use of damage control orthopedics was successful in this patient, with no long-term complications. abstract_id: PUBMED:22850423 The use of 'damage control orthopedics' techniques in children with segmental open femur fractures. Femur fractures are common long bone injuries in children. Open femur fractures, however, are uncommon. Traditionally, long-term external fixation has been recommended for treatment of these fractures. Damage control orthopedics is a well-recognized concept in adult orthopedic trauma management. Little or no literature exists demonstrating the use of damage control orthopedics principles in children. These cases illustrate the utility of these techniques as a temporary bridge in the management of open, segmental pediatric femur fractures complicated by severe soft tissue injury and bone loss, which were managed definitively by submuscular bridge plates.
abstract_id: PUBMED:24095066 The evolution of damage control orthopedics: current evidence and practical applications of early appropriate care. This article summarizes the evolution of literature and practice related to fracture care in polytrauma patients. Particular emphasis is given to the management of femoral shaft fractures and the concept of damage control in these complex patients. The application of these guidelines in common clinical practice is also discussed. abstract_id: PUBMED:29285613 Are large fracture trials really possible? What we have learned from the randomized controlled damage control study? Purpose: Although they are considered the 'gold standard' of evidence-based medicine, randomized controlled trials are still a rarity in orthopedic surgery. In the management of patients with multiple trauma, there is a current trend toward 'damage control orthopedics', but to date, there is no proof of the superiority of this concept in terms of evidence-based medicine. The purpose of this article is to present unexpected difficulties we encountered in successfully completing our randomized controlled trial and to discuss the problematic differences between theoretically planning a trial and real-life practical experience of implementing the plan, with attention to published strategies. Methods: The multicenter randomized controlled trial on risk adapted damage control orthopedic surgery of femur shaft fractures in multiple trauma patients (DCO study) was designed to determine whether 'risk adapted damage control orthopedics' of femoral shaft fractures is advantageous when treating multiple trauma patients. We compared our methods of study planning and realization point by point with published methods for conducting such trials. Results: The study was methodically planned. We met most of the prerequisites for successfully completing a large fracture trial, but experienced unexpected difficulties. After 2.5 years, the Deutsche Forschungsgemeinschaft suspended the financing because of low recruitment. The reasons were multifactorial. Conclusions: We believe it is much more difficult to perform a large fracture trial in reality than to plan it in theory. Even the theoretically best designed trial can prove unsuccessful in its implementation. The question remains: are large fracture trials even possible? Hopefully YES! Trial Registration: Current Controlled Trials ISRCTN10321620. Date assigned: 09/02/2007. Level Of Evidence: Level I. abstract_id: PUBMED:18507945 Application of damage control orthopedics in 41 patients with severe multiple injuries. Objective: To probe the feasibility and efficacy of damage control orthopedics (DCO) in treating severe multiple injuries. Methods: A retrospective analysis was made on the clinical data of 41 patients (31 males and 10 females, aged 18-71 years, mean: 36.4 years) with multiple injuries admitted to our department and treated by DCO from January 1995 to December 2005. Results: As a first-stage therapy, devascularization of the internal iliac arteries was performed in 29 patients with pelvic fractures combined with massive bleeding, including ligation of bilateral internal iliac arteries in 21 patients and embolization of bilateral internal iliac arteries in 8. Early external fixation of the pelvis was also performed in 10 patients.
Ten patients with severe multiple injuries combined with femoral fractures were managed with primary debridement and temporary external fixation, and 2 patients with spinal fractures combined with spinal cord compression received simple laminectomy. Thirty-one patients received definitive internal fixation after resuscitation in the intensive care unit. The overall mortality rate was 12.1% (5/41), with an average injury severity score of 41.4. The main causes of death were hemorrhagic shock and associated injuries. Complications occurred in 7 patients, including acute respiratory distress syndrome in 3 cases, thrombosis of the right common iliac artery in 1, subphrenic abscess in 2 and deep wound infection of the lower extremity in 1. After treatment, all these patients recovered. Conclusions: Prompt diagnosis and integrated treatment are key to a higher survival rate in patients with severe multiple injuries. In this condition, DCO is an effective and safe option. abstract_id: PUBMED:26738456 Bilateral femoral shaft fractures complicated by fat and pulmonary embolism: a case report. A 25-year-old man was admitted to our hospital because of pulmonary embolism and suspected fat embolism after sustaining bilateral femoral shaft fractures. Left arm weakness, tachycardia and a sudden hemoglobin drop delayed his definitive fixation with intramedullary nailing. His clinical course was further complicated by bleeding from the pin sites of the external fixators which had initially been used to temporarily stabilize his femoral fractures (clotting disturbances). A lower leg Doppler ultrasound and a new pelvic-chest CT angiography excluded any remaining thrombus; meanwhile, the embolus had broken into smaller, more distal pieces. His unfractionated heparin was switched to a low-molecular-weight heparin at a prophylactic dose. After a 10-day period, when his condition had improved, bilateral reamed nailing was performed. Although bilateral closed femoral shaft fractures should be stabilized early, fat embolism syndrome (FES) and thromboembolic events (TEV) should always be kept in mind in these patients. abstract_id: PUBMED:23777984 Eritoran attenuates tissue damage and inflammation in hemorrhagic shock/trauma. Background: Severe injury and associated hemorrhagic shock lead to an inflammatory response and subsequent increased tissue damage. Numerous reports have shown that injury-induced inflammation and the associated end-organ damage are driven by Toll-like receptor 4 (TLR4) activation via damage-associated molecular patterns. We examined the effectiveness of Eritoran tetrasodium (E5564), an inhibitor of TLR4 function, in reducing inflammation induced during hemorrhagic shock with resuscitation (HS/R) or after peripheral tissue injury (bilateral femur fracture, BFF). Material And Methods: Mice underwent HS/R or BFF with or without injection of Eritoran (5 mg/kg body weight) or vehicle control given before, both before and after, or only after HS/R or BFF. Mice were sacrificed after 6 h and plasma and tissue cytokines, liver damage (histology; aspartate aminotransferase/alanine aminotransferase), and inflammation (NF-κB) and gut permeability were assessed. Results: In HS/R, Eritoran significantly reduced liver damage (values ± SEM: alanine aminotransferase 9910 ± 3680 U/L versus 1239 ± 327 U/L and aspartate aminotransferase 5863 ± 2000 U/L versus 1246 ± 243 U/L, P < 0.01) at 6 h compared with control when given just before HS and again just prior to resuscitation.
Eritoran administration also led to lower IL-6 levels in plasma and liver and less NF-κB activation in liver. Increases in gut barrier permeability induced by HS/R were also prevented with Eritoran. Eritoran similarly diminished BFF-mediated systemic inflammatory responses. Conclusion: These data suggest Eritoran can inhibit tissue damage and inflammation induced via TLR4/myeloid differentiation factor 2 signaling from damage-associated molecular patterns released during HS/R or BFF. Eritoran may represent a promising therapeutic for trauma patients to prevent multiple organ failure. abstract_id: PUBMED:33621700 Bilateral femoral shaft fracture in polytrauma patients: Can intramedullary nailing be done on an emergency basis? Introduction: Whether damage control orthopedics (DCO) or early total care (ETC) is the best way to treat polytrauma patients who have suffered a bilateral femoral shaft fracture remains unanswered. The aim of this study was to evaluate the morbidity of bilateral femur fractures treated by simultaneous intramedullary (IM) nailing according to ETC principles. Materials And Methods: This retrospective single-centre study included all polytrauma patients who had suffered a femoral shaft fracture and were treated at our level I trauma centre. Demographic data, associated lesions, injury severity score (ISS) and occurrence of acute respiratory distress syndrome (ARDS) were collected prospectively in our trauma database. Unilateral fractures (UF) were compared to bilateral fractures (BF). The risk of ARDS was evaluated by multivariate logistic regression. Results: Between 2010 and 2019, 176 UF (88%) and 25 BF (12%) were included. Patients with BF had a higher ISS (36 vs. 25, p<0.001) and more brain injuries (44% vs. 15%, p=0.001) than patients with UF. More blood transfusions were given in BF than UF (4.0 vs. 1.6 units, p=0.002). The incidence of ARDS was higher in BF patients than UF (36% vs. 4%), with a longer stay in intensive care (18 vs. 12 days, p=0.02) and in the hospital (32 vs. 23 days, p=0.006). There were no deaths in either group. The risk of ARDS was correlated with ISS, but not with bilaterality. Discussion: Studies on DCO and ETC report similar mortality and ARDS rates for BF. ISS appears to determine the postoperative morbidity irrespective of how the patients are managed. In contrast with DCO, perioperative intensive care has a predominant role in ETC, allowing early definitive fixation of fractures, even in severely injured patients. Conclusion: Bilateral femoral shaft fractures are a sign of severe trauma leading to high postoperative morbidity. The patient is likely to have concomitant severe injuries. Simultaneous IM nailing can be done emergently, provided that appropriate perioperative intensive care management is available. Level Of Evidence: IV; retrospective study. Answer: Damage control orthopedics (DCO) is considered an essential strategy for the management of bilateral femoral fractures associated or complicated with shock. An animal study demonstrated that early reamed intramedullary nailing fixation in the presence of shock resulted in more adverse effects on systemic stress, inflammatory response, and multiple organs, particularly the lungs and liver, suggesting that DCO may be preferable in such cases (PUBMED:20009694).
The concept of DCO involves primary stabilization of fractures with external fixators to reduce the additional trauma load and minimize the "second hit," with definitive osteosynthesis performed after the patient's condition has stabilized (PUBMED:19756456). In cases of fat embolism syndrome (FES), which can occur following traumatic injuries such as femoral fractures, DCO has been successfully used to manage the condition by postponing definitive repair until the patient is stabilized (PUBMED:32351834). The use of DCO techniques in children with open femur fractures has also been illustrated as a temporary bridge in management, demonstrating the utility of these techniques (PUBMED:22850423). The evolution of DCO practices has been summarized in the literature, emphasizing the management of femoral shaft fractures in polytrauma patients and the practical applications of early appropriate care (PUBMED:24095066). However, conducting large randomized controlled trials to provide definitive evidence for the superiority of DCO has proven to be challenging due to various difficulties encountered in the recruitment and execution of such studies (PUBMED:29285613). Retrospective analyses have shown that DCO is an effective and safe option for treating severe multiple injuries, with prompt diagnosis and integrated treatment being key to a higher survival rate (PUBMED:18507945). Even in the presence of complications such as FES and thromboembolic events, early stabilization of bilateral closed femoral shaft fractures is important, but these conditions must be carefully managed (PUBMED:26738456). Lastly, a retrospective study on polytrauma patients with bilateral femoral shaft fractures treated by simultaneous intramedullary nailing according to early total care (ETC) principles found that while bilateral fractures were indicative of severe trauma and associated with higher morbidity, the risk of acute respiratory distress syndrome (ARDS) was correlated to the injury severity score (ISS) rather than the bilaterality of the fractures, suggesting that appropriate perioperative intensive care is crucial for successful outcomes, even when early definitive fixation of fractures is performed (PUBMED:33621700). This study indicates that while DCO is a critical consideration, the overall management strategy, including perioperative care, plays a significant role in the treatment of severe trauma cases such as bilateral femoral shaft fractures. In summary, DCO is considered essential in the management of bilateral femoral fractures complicated by shock, as it aims to minimize secondary damage and improve patient outcomes by initially stabilizing fractures in a less invasive manner and delaying definitive surgery until the patient is more stable. However, the decision between DCO and ETC must be tailored to the individual patient's condition, taking into account the severity of the injury and the patient's physiological status.
Instruction: Live births resulting from unintended pregnancies: is there variation among states? Abstracts: abstract_id: PUBMED:10379430 Live births resulting from unintended pregnancies: is there variation among states? The PRAMS Working Group. Context: States need data on live births resulting from unintended pregnancies in order to assess the need for family planning services; however, many states do not collect such data. Some states may use extrapolated rates from other states. Methods: Pregnancy Risk Assessment Monitoring System (PRAMS) data were assessed to explore the feasibility of extrapolating data on the percentage of live births resulting from unintended pregnancies from states that collect these data to states that do not. Data on women who had live births between 1993 and 1995 were examined for eight states: Alabama, Florida, Georgia, Michigan, New York (excluding New York City), Oklahoma, South Carolina and West Virginia. Logistic regression was used to determine state variation in the odds of delivering a live birth resulting from an unintended pregnancy after adjustment for maternal race, marital status, age, education, previous live birth and participation in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). Results: The percentage of live births resulting from unintended pregnancy ranged from 33% in New York to 49% in Alabama, Georgia and South Carolina. Compared with women in Alabama, women in Oklahoma were more likely to deliver a live birth resulting from an unintended pregnancy (odds ratio of 1.2, confidence interval of 1.1-1.3) and women in New York State were less likely (odds ratio of 0.7, confidence interval of 0.6-0.8) to have such a birth. However, unmarried white women in New York had lower odds of having a live birth resulting from an unintended pregnancy and married black women in Michigan had higher odds of having a live birth resulting from unintended pregnancy than their counterparts in Alabama. Although the percentages varied, in all eight states women who were black, were unmarried, were younger than 20 years of age, had less than 12 years of education or had more than one child had higher percentages of live births resulting from unintended pregnancy than women with other demographic characteristics. Conclusions: Data on which women have the greatest risk of delivering a live birth resulting from an unintended pregnancy may be extrapolated from one state to another, but the rate of such births may overestimate or underestimate the problem from one state to another. abstract_id: PUBMED:26044941 Medicaid family planning waivers in 3 States: did they reduce unwanted births? Effects of Medicaid family planning waivers on unintended births and contraceptive use postpartum were examined in Illinois, New York, and Oregon using the Pregnancy Risk Assessment Monitoring System. Estimates for women who would be Medicaid eligible "if" pregnant in the waiver states and states without expansions were derived using a difference-in-differences approach. Waivers in New York and Illinois were associated with almost a 5.0 percentage point reduction in unwanted births among adults and with a 7 to 8.0 percentage point reduction, among youth less than 21 years of age. Oregon's waiver was associated with an almost 13 percentage point reduction in unintended, mostly mistimed, births. No statistically significant effects were found on contraceptive use. 
abstract_id: PUBMED:35180356 Association of Affordable Care Act Medicaid Expansions with Births Among Low-Income Women of Reproductive Age. Background: This study examined the association between Medicaid expansions under the Affordable Care Act (ACA) and births among low-income women of reproductive age in the United States. Materials and Methods: We used data from the 2008 to 2019 American Community Survey to estimate the association between state adoption of Medicaid expansion under the ACA and the percent of low-income women of reproductive age with a birth in the past year using a difference-in-difference research design. Subgroup analysis was explored by race and ethnicity, age group, educational attainment, marital status, and number of children. Results: We found that Medicaid expansion was associated with a small reduction in births among low-income women of reproductive age by 0.45 percentage points (95% confidence interval: -0.84 to -0.05). In subgroup analyses, we found reductions in births among Hispanic women, American Indian or Alaska Native women, women 25-29 years of age, women 35-39 years of age, unmarried women, and women with more than three children. Conclusions: Reductions in births associated with Medicaid expansion could suggest that expanding Medicaid addressed previously unmet reproductive health care needs among low-income women of reproductive age. The reductions in births among low-income women that we observe were occurring among some groups with higher unintended pregnancy rates, including Hispanic women, American Indian or Alaska Native women, young women, and unmarried women. These findings underscore the importance of reproductive health care access through insurance coverage on empowering women to have control over their reproductive decision-making and timing. abstract_id: PUBMED:25339786 Are Hispanic Women Happier About Unintended Births? Reducing unintended pregnancies - particularly among Hispanic and Black women, who have relatively high rates - is a key public health goal in the United States. However, descriptive literature has suggested that Hispanic women are happier about these pregnancies compared with White and Black women, which could mean that there is variation across groups in the consequences of the resulting births. The purpose of this study was to examine variations in happiness about unintended births by race-ethnicity and to assess possible explanations for these differences. Using data from the National Survey of Family Growth (n=1,462 births) I find that Hispanic women report being happier about unintended births compared with White and Black women. Higher happiness among Hispanics was particularly pronounced among a subgroup of women: those who were foreign-born and very religious. Overall, results confirm previous findings that intention status alone is incomplete for capturing pregnancy experiences. Happiness offers complementary information that is important when making comparisons by race-ethnicity and nativity. abstract_id: PUBMED:10728275 Live births resulting from unintended pregnancies: an evaluation of synthetic state-based estimates. Objectives: Most states lack information on the proportion of live births resulting from unintended pregnancies. We evaluated a potential solution to the lack of data, a synthetic state-based estimate of the percentage of live births resulting from unintended pregnancies for the state of Georgia. 
Methods: We constructed the synthetic estimate by standardizing the 1995 National Survey of Family Growth data by the race, marital status, and age distribution of Georgia residents ages 15-44 years who delivered a live birth during 1990-1994. Two surveys conducted in Georgia during the same period that collected information on unintended pregnancies were used for comparison: the Georgia Women's Health Survey (GWHS) and the Georgia Pregnancy Risk Assessment Monitoring System (PRAMS). Results: The synthetic estimate (35.2%, 95% CI = 33.5%-36.7%) was not statistically different from the GWHS estimate (39.6%, 95% CI = 35.7%-43.5%), but was significantly lower than the Georgia PRAMS estimate (49.0%, 95% CI = 45.5%-52.5%). When we stratified by race, marital status, and age, the synthetic and GWHS estimates were statistically similar except for married females and females ages 25-34 years, for whom the synthetic estimates were lower. For all groups of females, the synthetic estimates were statistically lower than the Georgia PRAMS estimates. Conclusions: The synthetic estimate can be a useful method for states that need to know the overall magnitude of the percentage of live births resulting from unintended pregnancy for purposes such as program planning. abstract_id: PUBMED:23115878 Intended and unintended births in the United States: 1982-2010. Objectives: This report shows trends since 1982 in whether a woman wanted to get pregnant just before the pregnancy occurred. This is the most direct measure available of the extent to which women are able (or unable) to choose to have the number of births they want, when they want them. In this report, this is called the "standard measure of unintended pregnancy." Methods: The data used in this report are primarily from the 2006-2010 National Survey of Family Growth (NSFG), conducted by the Centers for Disease Control and Prevention's National Center for Health Statistics. The 2006-2010 NSFG included in-person interviews with 12,279 women aged 15-44. Some data in the trend analyses are taken from NSFG surveys conducted in 1982, 1988, 1995, and 2002. Results: About 37% of births in the United States were unintended at the time of conception. The overall proportion unintended has not declined significantly since 1982. The proportion unintended did decline significantly between 1982 and 2006-2010 among births to married, non-Hispanic white women. Large differences exist between groups in the percentage of births that are unintended. For example, unmarried women, black women, and women with less education or income are still much more likely to experience unintended births compared with married, white, college-educated, and high-income women. This report also describes some alternative measures of unintended births that give researchers an opportunity to study this topic in new ways. abstract_id: PUBMED:15519961 Differences between mistimed and unwanted pregnancies among women who have live births. Context: Mistimed and unwanted pregnancies that result in live births are commonly considered together as unintended pregnancies, but they may have different precursors and outcomes. Methods: Data from 15 states participating in the 1998 Pregnancy Risk Assessment Monitoring System were used to calculate the prevalence of intended, mistimed and unwanted conceptions, by selected variables. Associations between unintendedness and women's behaviors and experiences before, during and after the pregnancy were assessed through unadjusted relative risks. 
Results: The distribution of intended, mistimed and unwanted pregnancies differed on nearly every variable examined; risky behaviors and adverse experiences were more common among women with mistimed than intended pregnancies and were most common among those whose pregnancies were unwanted. The likelihood of having an unwanted rather than mistimed pregnancy was elevated for women 35 or older (relative risk, 2.3) and was reduced for those younger than 25 (0.8); the pattern was reversed for the likelihood of mistimed rather than intended pregnancy (0.5 vs. 1.7-2.7). Parous women had an increased risk of an unwanted pregnancy (2.1-4.0) but a decreased risk of a mistimed one (0.9). Women who smoked in the third trimester, received delayed or no prenatal care, did not breast-feed, were physically abused during pregnancy, said their partner had not wanted a pregnancy or had a low-birth-weight infant had an increased risk of unintended pregnancy; the size of the increase depended on whether the pregnancy was unwanted or mistimed. Conclusion: Clarifying the difference in risk between mistimed and unwanted pregnancies may help guide decisions regarding services to women and infants. abstract_id: PUBMED:32534471 Spontaneous pregnancies in female survivors of childhood allogeneic haemopoietic stem cell transplant for haematological malignancies. Objective: Spontaneous pregnancies and live births are rarely reported after haematopoietic stem cell transplant (HSCT). We report spontaneous pregnancy outcomes of sexually active female survivors of childhood allogeneic HSCT, to provide more data for future counselling. Design, Patients And Measurements: Retrospective review of all female survivors of childhood haematological malignancies who had allogeneic HSCT at the Royal Children's Hospital between 1985 and 2011. Data were retrieved from medical records, updated by the treating haematologist or endocrinologist, and were cross-referenced with self-reported questionnaires. Female survivors who were sexually inactive were excluded from analysis. Results: Six of 37 (16.2%) female survivors reported spontaneous pregnancies resulting in 8 live births. Amongst 22 women who received total body irradiation (n = 21) ± cranial irradiation or isolated cranial irradiation (n = 1), and high-dose cyclophosphamide, three reported pregnancies resulting in live births (14%), whilst three of 15 women who received chemotherapy alone had pregnancies with live births (20%). Conclusions: Our current finding, albeit from a small sample, reinforces the importance of counselling female survivors of HSCT about the possibility of spontaneous pregnancy occurring despite documented ovarian failure and the need for contraception to avoid unplanned pregnancy. abstract_id: PUBMED:23054454 Class III obesity and unwanted pregnancy among women with live births in New York City, 2004-2007. Obesity is associated with numerous adverse health effects for pregnant women and their newborns. Unintended pregnancy is associated with suboptimal prenatal health behaviors and adverse birth outcomes. While research has suggested a link between obesity and unintended pregnancy, the evidence has been contradictory. Research has not focused on women at the highest level of obesity, Class III (body mass index ≥40). Pregnancy Risk Assessment Monitoring System data for 4,161 women in New York City with a live birth from 2004 to 2007 and complete data on pregnancy intention, height, and weight were examined.
The primary outcome, having a live birth that resulted from an unwanted pregnancy (not wanted at that time or at any time in the future), was compared across 6 groups of pre-pregnancy body mass index (BMI). Logistic regression models adjusting for sociodemographic factors and stressors during pregnancy were conducted. The proportion of women reporting their pregnancy was unwanted increased with increasing BMI level to a high of 24 % among women with Class III obesity. After adjustment for confounding sociodemographic factors, women classified as Class III obese were significantly more likely than women with normal BMI to report an unwanted pregnancy [AOR = 2.81 (95 % CI: 1.41-5.60)]; this relationship held after adjusting for stressors during pregnancy. No significant association was found for women of other BMI groups. Previous analyses may have masked a relationship between BMI and unwanted pregnancy among women with Class III obesity. Further research exploring underlying mechanisms which are amenable to intervention is of critical public health importance. abstract_id: PUBMED:2387033 The incidence of unplanned and unwanted pregnancies among live births from health visitor records. A survey of health visitor records has indicated that approximately one-third of pregnancies resulting in live births are unplanned. The data further revealed an association between planning of pregnancies and socio-demographic variables and use of preventive health care for infants. Answer: Yes, there is variation among states in the percentage of live births resulting from unintended pregnancies. For instance, the percentage ranged from 33% in New York to 49% in Alabama, Georgia, and South Carolina according to the Pregnancy Risk Assessment Monitoring System (PRAMS) data assessed for eight states (Alabama, Florida, Georgia, Michigan, New York (excluding New York City), Oklahoma, South Carolina, and West Virginia) (PUBMED:10379430). This variation persisted even after adjusting for maternal race, marital status, age, education, previous live birth, and participation in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). The study concluded that while data on which women have the greatest risk of delivering a live birth resulting from an unintended pregnancy may be extrapolated from one state to another, the rate of such births may overestimate or underestimate the problem from one state to another (PUBMED:10379430). Additionally, the study evaluating synthetic state-based estimates found that the synthetic estimate for Georgia (35.2%) was not statistically different from the Georgia Women's Health Survey (GWHS) estimate (39.6%), but was significantly lower than the Georgia PRAMS estimate (49.0%) (PUBMED:10728275). This suggests that while synthetic estimates can be useful for program planning in states lacking data, there may still be discrepancies when compared to actual collected data. Moreover, the incidence of unplanned and unwanted pregnancies among live births from health visitor records indicated that approximately one-third of pregnancies resulting in live births are unplanned, with variations associated with socio-demographic variables and use of preventive health care for infants (PUBMED:2387033). These findings demonstrate that there is indeed variation among states in the United States regarding the percentage of live births resulting from unintended pregnancies.
Instruction: Total cholesterol content of erythrocyte membranes is increased in patients with acute coronary syndrome: a new marker of clinical instability? Abstracts: abstract_id: PUBMED:17531656 Total cholesterol content of erythrocyte membranes is increased in patients with acute coronary syndrome: a new marker of clinical instability? Objectives: We hypothesized that cholesterol content is increased in the circulating erythrocytes of patients with acute coronary syndrome (ACS) and may be a marker of clinical instability. We therefore sought to investigate whether cholesterol content differs in erythrocyte membranes of patients presenting with ACS compared to patients with chronic stable angina (CSA). Background: Plaque rupture in ACS depends at least partly on the volume of the necrotic lipid core. Histopathologic studies have suggested that cholesterol transported by erythrocytes and deposited into the necrotic core of atheromatous plaques contributes to lipid core growth. Methods: Consecutive angina patients were prospectively assessed; 120 had CSA (83 men, age 64 +/- 11 years) and 92 ACS (67 men, 66 +/- 11 years). Total cholesterol content in erythrocyte membranes (CEM) was measured using an enzymatic assay, and protein content was assessed by the Bradford method. Results: The CEM (median and interquartile range) was higher (p &lt; 0.001) in ACS patients (184 microg/mg; range 130.4 to 260.4 microg/mg) compared with CSA patients (81.1 microg/mg; range 53.9 to 109.1 microg/mg) (analysis of covariance). Total plasma cholesterol concentrations did not correlate with CEM levels (r = -0.046, p = 0.628). Conclusions: This study shows, for the first time, that CEM is significantly higher in patients with ACS compared with CSA patients. These findings suggest a potential role of CEM as a marker of atheromatous plaque growth and vulnerability. Large ad hoc studies are required to establish the clinical importance and pathogenic significance of CEM measurement. abstract_id: PUBMED:23652364 Sphingomyelin in erythrocyte membranes increases the total cholesterol content of erythrocyte membranes in patients with acute coronary syndrome. Objectives: To investigate whether the sphingomyelin content of erythrocyte membranes (SEM) is changed in patients with acute coronary syndrome (ACS) and determine the correlation between SEM and the total cholesterol content of erythrocyte membranes (CEM). Methods: The SEM and CEM levels were measured in 354 patients undergoing coronary artery angiography in three different groups: ACS patients (n=199), patients with stable angina pectoris (SAP) (n=82), and controls (n=73). Results: The SEM levels in the ACS group were significantly higher than those of the SAP group. The SEM levels were correlated positively with the CEM levels in patients with coronary artery disease. Multivariable logistic regression analysis showed that patients with higher levels of both SEM and CEM had an 8.569-fold greater risk of developing ACS than other patients, after adjusting for all potential confounding variables. Conclusion: Elevated SEM and CEM levels showed both independent and combined correlations with the occurrence of ACS and were positively correlated with each other in patients with coronary artery disease. These data suggest that the increased levels of SEM may play a role in the progression to plaque instability in ACS and may be the mechanisms underlying elevated levels of CEM in patients with ACS. 
abstract_id: PUBMED:19838647 Statin use is associated with a significant reduction in cholesterol content of erythrocyte membranes. A novel pleiotropic effect? Purpose: High cholesterol content of erythrocyte membranes (CEM) levels is present in patients with acute coronary syndromes (ACS). Intraplaque hemorrhage and erythrocyte lysis contribute to the deposition of cholesterol on the atherosclerotic plaque and to plaque rupture. With the present study we assessed the effect of statin therapy on CEM levels, a novel marker of coronary artery disease (CAD) instability during a 1-year follow-up in CAD patients. Methods: 212 consecutive eligible (158 men, 62 +/- 10 years) patients undergoing diagnostic coronary angiography for the assessment of angina pectoris were assessed. The study population comprised of 84 chronic stable angina (CSA) patients and 128 ACS patients. All study participants were commenced on statin treatment in equipotent doses and were followed for up to 1 year (at - 1, - 3, - 6 and - 12 months). Results: Repeated measurements analysis of variance after appropriate adjustment showed a significant decrease (p &lt; 0.001) in CEM content during follow up. CEM levels were decreasing at each time point (1 month : 100 microg/mg 95%CI 94.3-105.6, 3 months : 78.1 microg/mg 95%CI 73.2-83, 6 months : 67.2 microg/mg 95%CI 63.1-71.2, 1 year : 45.3 microg/mg 95%CI 42.2-48.3) compared to admission (112.1 microg/mg 95% CI 105.9-118.3) and to all previous measurements. Conclusions: The present study showed, that use of statins is associated with a reduction in CEM, an emerging marker of clinical instability and plaque vulnerability in CAD patients. The pleiotropic effects of statins at the cell membrane level represent a promising novel direction for research in CAD. abstract_id: PUBMED:22277951 Red blood cell distribution width: a strong prognostic marker in cardiovascular disease: is associated with cholesterol content of erythrocyte membrane. Objectives: Red blood cell distribution width (RDW), a measure of the variability in size of circulating erythrocytes, has recently been shown to be a strong predictor of adverse outcomes in patients with a great spectrum of cardiovascular disease. Recently, cholesterol content of erythrocytes membranes (CEM) has been associated with clinical instability in coronary artery disease whilst it has been linked with red blood cells (RBC) size and shape. Since the biological mechanisms underlying the association of higher RDW with cardiovascular mortality risk are currently unclear, we studied the association of CEM with RDW. Methods: 296 consecutive angina patients (236 men, mean age 69 ± 2 years) were prospectively assessed; 160 had chronic stable angina (CSA) and 136 had an acute coronary syndrome (ACS). Results: Patients presenting with ACS had increased CEM levels (121.6 μg/mg (40.1) vs 74.4 μg/mg (26.6), p &lt; 0.001) as well as exhibited greater anisocytosis (13.9% (0.9) vs 13.3% (0.7), p &lt; 0.001) compared to patients with CSA. Simple correlation analysis showed that CEM levels were positively associated with RDW values (r = 0.320, p &lt; 0.001). Multivariable linear regression showed that CEM levels were associated with RDW values independently from possible confounders (inflammatory, nutritional renal or hematological). Conclusions: Data from the present study showed an independent association between cholesterol content of erythrocyte membranes and anisocytosis. 
Increased CEM levels -a novel biomarker of clinical instability in CAD - may facilitate our understanding why RDW is associated with increased morbidity and mortality in cardiovascular disease. abstract_id: PUBMED:19411120 Total cholesterol content of erythrocyte membranes levels are associated with the presence of acute coronary syndrome and high sensitivity C-reactive protein. In this study we assessed whether total cholesterol content of erythrocyte membranes (CEM) was associated with the presence of acute coronary syndrome (ACS) and high sensitivity C-reactive protein (hs-CRP). Consecutive angina patients were assessed; 98 had ACS and 45 had stable angina pectoris (SAP). CEM in the ACS group was significantly higher compared with the SAP group (p&lt; 0.05). Multiple logistic regression analyses revealed a significant independent relation between CEM and the presence of ACS (OR 24.990, p&lt;0.001). CEM was positively correlated with serum hs-CRP levels (r=0.328, p&lt;0.001). These findings suggest a potential role of CEM as a marker of vulnerable plaque. abstract_id: PUBMED:31060190 Prognostic value of total cholesterol content of erythrocyte membranes in patients with acute coronary syndrome Objective: Previous cross-sectional studies suggested that elevated levels of total cholesterol content of erythrocyte membrane (CEM) could significantly increase the risk of acute coronary syndrome (ACS). The purpose of the present study was to assess the predictive value of baseline CEM levels for the risk of clinical endpoint events in patients with ACS through prospective follow-up studies. Methods: This study is a prospective follow-up study, which consisted of 859 patients with first ACS (698 patients with unstable angina pectoris and 161 patients with acute myocardial infarction), diagnosed and hospitalized in the First and Second Affiliated Hospital of Anhui Medical University. The routine blood lipid levels and CEM were measured. Patients were divided into two groups according to the median of baseline CEM: CEM≤131.56 μg/mg group (n=430) and CEM&gt;131.56 μg/mg group (n=429). Patients were followed up at 6 months interval. The clinical endpoints were nonfatal myocardial infarction, nonfatal stroke, all-cause mortality, all-cause mortality, heart failure requiring hospitalization, and coronary artery revascularization. Kaplan-Meier curve analysis and Cox proportional hazard model were used to analyze the impact of elevated CEM on the occurrence of clinical end-point events. HR values and 95%CI of each variable were obtained. Cox regression analysis of all-cause mortality was performed according to whether patients had risk factors for coronary heart disease (hypertension, diabetes, smoking and elevated LDL-C) and whether they were treated with PCI. Results: The follow-up time was 1 640 (1 380, 2 189) days. Cox analysis after adjustment showed that an elevated baseline of CEM (&gt;131.56 μg/mg) was associated with an increased risk of all-cause mortality (HR=1.690, 95%CI 1.041-2.742, P=0.034), but had no significant predictive effect on the other clinical endpoints. Subgroup analysis showed that elevated baseline CEM levels in ACS patients with LDL-C&gt;1.8 mmol/L (HR=1.687, 95%CI 1.026-2.774, P=0.039), receiving in-hospital PCI (HR=2.365, 95%CI 1.054-5.307, P=0.037), or male (HR=1.794, 95%CI 1.010-3.186, P=0.046) were associated with an increased risk of all-cause mortality. Conclusion: The results showed that elevated CEM levels can increase the risk of all-cause mortality in ACS patients. 
abstract_id: PUBMED:19005293 Cholesterol composition of erythrocyte membranes and its association with clinical presentation of coronary artery disease. Objectives: Presence of free cholesterol in atherosclerotic plaques is a major determinant of plaque instability. It is hypothesized that extravasated erythrocytes may contribute to free cholesterol accumulation in atherosclerotic plaques through their cholesterol-rich membrane. In this study we assessed whether cholesterol in erythrocyte membranes (CEMs), that is, free (FCEM) versus esterified (ECEM), differs in patients with chronic stable angina (CSA) compared with patients presenting with acute coronary syndromes (ACSs). Methods: Consecutive angina patients were prospectively assessed; 154 had CSA (118 men, 63 years, 56-69 years) and 164 ACS (124 men, 63 years, 55-71 years). FCEM and ECEM were measured using an enzymatic assay, and protein content was assessed by the Bradford method. Results: FCEM was significantly higher (P < 0.001) in the ACS patient group (94.1 microg/mg, IQ 71-116.5 microg/mg) compared with patients with CSA (61.9 microg/mg, IQ 49.3-73.1 microg/mg). ECEM levels were also significantly higher (P < 0.001) in ACS patients (23.3 microg/mg, IQ 14.9-47.7 microg/mg) compared with CSA patients (10.8 microg/mg, IQ 8-22.3 microg/mg). In contrast, the ratio of free-to-esterified cholesterol (P=0.110) as well as the ratio of free-to-total CEM (P=0.109) were not different between CSA and ACS patients. Conclusion: Findings of this study show that although free cholesterol is the prevailing form of CEMs, both FCEM and ECEM levels are increased in patients with ACS compared with CSA patients. These findings suggest that it is the quantity of CEM rather than the type of cholesterol present in the erythrocyte membrane that determines plaque progression. abstract_id: PUBMED:20223535 Independent and additive predictive value of total cholesterol content of erythrocyte membranes with regard to coronary artery disease clinical presentation. Background: A new mechanism for clinical instability in coronary artery disease (CAD) has been proposed where erythrocytes could play an active role in atherosclerotic plaque growth and rupture. Clinical studies showed increased total cholesterol levels in the membrane of circulating erythrocytes (CEM) in acute coronary syndrome (ACS) patients compared to patients with chronic stable angina (CSA). We investigated the independent and incremental discriminating value of CEM along with N-terminal propeptide of BNP (NT-proBNP), high sensitivity C-reactive protein (hs CRP), myeloperoxidase (MPO) and apolipoprotein B (apoB) with regard to CAD clinical presentation. Methods: 519 consecutive angina patients were assessed; 252 had CSA (195 men, 62 ± 9 years) and 267 had ACS (213 men, 62 ± 10 years). CEM levels and serum concentrations of NT-proBNP, hs CRP, MPO and apoB were measured upon study admission. Results: Simple logistic regression models showed that all biomarkers could distinguish ACS, but CEM did so with the greatest potency (OR 9.26, 95% CI 6.31-13.59, p < 0.001). Multiple logistic regression models after adjustment for all the variables that were different between the 2 groups as well as for other biomarkers showed that CEM continued to be a significant and independent predictor of ACS (OR 22.27, 95% CI 10.63-46.67, p < 0.001).
An increment of the C-statistic was also shown when CEM levels were incorporated in the predictive model (including traditional vascular risk factors and new well established biomarkers i.e. hs CRP, MPO, apoB and NT-proBNP). Conclusions: The present study showed that CEM levels are associated with clinical instability in CAD patients in an independent and incremental manner. abstract_id: PUBMED:23009223 Total cholesterol content of erythrocyte membranes is associated with the severity of coronary artery disease and the therapeutic effect of rosuvastatin. Introduction: Numerous studies suggest that total cholesterol content of erythrocyte membranes (CEM) might play a critical role in atherosclerotic plaque progression and instability. However, the exact role of CEM in atherosclerosis remains obscure. Our study was designed to investigate the association between CEM and the severity of coronary artery disease (CAD), and to assess the effect of rosuvastatin on CEM levels. Methods: CEM levels were assessed in 136 participants, including acute coronary syndrome (ACS) (non-ST-segment elevation ACS (NSTEACS) and ST-segment elevation myocardial infarction (STEMI)), stable angina pectoris (SAP), and controls. The Gensini score was used to estimate the severity of CAD. Additionally, 54 patients with CAD were medicated with rosuvastatin, 5 or 10 mg once daily, and then checked at 6 months. Results: The highest level of CEM was found in the STEMI group, followed by the NSTEACS, the SAP, and the control groups. Gensini score in group IV (CEM &gt; 141.6 μg/mg) was markedly higher compared with group I (CEM ≤77.6 μg/mg). Gensini scores in group II (77.6 &lt; CEM ≤111.1 μg/mg) and group III (111.1 &lt; CEM ≤141.6 μg/mg) were also higher than in group I (all P &lt; 0.001). Furthermore, a positive correlation was found between CEM levels and Gensini score (r = 0.714, P &lt; 0.001). CEM levels were dose-dependently reduced by rosuvastatin therapy. Conclusions: CEM levels are positively associated with the severity of CAD, meaning that CEM might contribute to the development of CAD. Importantly, rosuvastatin could decrease CEM levels in patients with CAD and might effectively help to attenuate the progression of CAD. abstract_id: PUBMED:21317639 Total cholesterol content of erythrocyte membranes in acute coronary syndrome: correlation with apolipoprotein A-I and lipoprotein (a). Objectives: Recent studies reported that total cholesterol erythrocyte membrane (CEM) levels were associated with the presence of acute coronary syndrome (ACS). However, little is known about the mechanisms of CEM elevation in these patients. The aim of this study was to investigate the association between CEM and the circulating lipid profile to delineate the possible mechanisms of CEM elevation in patients with ACS. Methods: CEM levels, serum concentrations of triglyceride, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, lipoprotein (a), apolipoprotein A-I (Apo A-I), apolipoprotein B (Apo B), and high-sensitive C-reactive protein levels were measured in 418 Chinese patients undergoing coronary artery angiography, including ACS (n=311) and stable angina pectoris (n=107). Results: CEM levels in the ACS group were significantly higher (median, 129.82; interquartile range, 110.99-156.54 μg/mg, P&lt;0.001) compared with the stable angina pectoris group (median, 80.88; interquartile range, 66.69-98.57 μg/mg). 
Multivariable logistic regression analyses showed a significantly independent relationship between CEM levels and the presence of ACS (odds ratio, 10.257; 95% confidence interval, 5.380-19.556, P&lt;0.001). CEM levels were positively correlated with plasma lipoprotein (a) levels (r=0.175; P&lt;0.001) and negatively correlated with serum Apo A-I levels (r=-0.149; P=0.002). Conclusion: CEM levels are closely associated with the occurrence of ACS as an independent determinant. The correlation of CEM with lipoprotein (a) and Apo A-I suggests that changes to these lipid proteins could be one possible mechanism for CEM increase in patients with ACS. Answer: The total cholesterol content of erythrocyte membranes (CEM) is indeed increased in patients with acute coronary syndrome (ACS) compared to those with chronic stable angina (CSA), suggesting that it could serve as a new marker of clinical instability. Studies have shown that CEM is significantly higher in patients with ACS, which indicates a potential role of CEM as a marker of atheromatous plaque growth and vulnerability (PUBMED:17531656). Additionally, sphingomyelin in erythrocyte membranes, which is positively correlated with CEM levels, is also elevated in ACS patients, further supporting the role of CEM in plaque instability (PUBMED:23652364). Statin therapy has been associated with a significant reduction in CEM levels, suggesting a novel pleiotropic effect of statins that could contribute to their beneficial impact in CAD patients (PUBMED:19838647). Moreover, CEM has been linked with red blood cell distribution width (RDW), a strong predictor of adverse outcomes in cardiovascular disease, providing a possible explanation for the association between higher RDW and cardiovascular mortality risk (PUBMED:22277951). CEM levels are also associated with the presence of ACS and high sensitivity C-reactive protein (hs-CRP), further suggesting its potential role as a marker of vulnerable plaque (PUBMED:19411120). Prognostic value has been attributed to CEM, as elevated baseline levels have been shown to increase the risk of all-cause mortality in ACS patients (PUBMED:31060190). Furthermore, both free and esterified cholesterol in erythrocyte membranes are increased in ACS patients, indicating that the quantity of CEM rather than the type of cholesterol is a determinant of plaque progression (PUBMED:19005293). CEM has been shown to have independent and additive predictive value with regard to CAD clinical presentation, even when considered alongside other biomarkers (PUBMED:20223535). Lastly, CEM is associated with the severity of CAD and the therapeutic effect of rosuvastatin, which can reduce CEM levels and potentially attenuate CAD progression (PUBMED:23009223). The correlation of CEM with lipoprotein (a) and apolipoprotein A-I (Apo A-I) suggests that changes in these lipid proteins could be one mechanism for the increase in CEM in ACS patients (PUBMED:21317639). In summary, the evidence suggests that CEM is elevated in ACS patients and may serve as a marker of clinical instability and a predictor of adverse outcomes in cardiovascular disease.
Instruction: Tolerability aspects in duloxetine-treated patients with depression: Should one use a lower starting dose in clinical practice? Abstracts: abstract_id: PUBMED:22712514 Tolerability aspects in duloxetine-treated patients with depression: Should one use a lower starting dose in clinical practice? Objective: This study questions whether a lower starting dose of duloxetine (DLX) could be beneficial for patients with depression, in terms of tolerability and safety in routine clinical care. Research Design And Methods: Post-hoc analyses of a multicenter, prospective, non-interventional, 6-month study in adult outpatients with a depressive episode were undertaken. Main Outcome Measures: Treatment-emergent adverse events (TEAEs), serious adverse events (SAEs), discontinuations due to TEAEs, and hospitalizations due to depression were all documented at 2 weeks, 4 weeks, 3 months and 6 months after treatment initiation/switch to DLX. Results: Of 4517 patients enrolled, 4313 were included for TEAE evaluation. TEAEs occurred in 17.2% of patients, and SAEs occurred in 0.79% of patients, including one case of suicidal ideation. 1404 patients discontinued within 6 months (TEAEs: n = 119). Starting treatment with 30 mg/day DLX (72.7%) was favored in females or after inadequate efficacy of previous antidepressant treatment; 60 mg/day DLX was favored in more severe depression and in patients receiving concomitant pain medication. Conclusion: Initiating treatment with 60 mg/day DLX was not associated with poorer tolerability in this study. Physicians may be guided by their clinical experience to carefully consider the individual benefit/risk ratio and TEAE susceptibility when deciding to start treatment with a higher or a lower dose of DLX. abstract_id: PUBMED:24678074 Clinical consequences of initial duloxetine dosing strategies: Comparison of 30 and 60 mg QD starting doses. Background: To reduce the risk for treatment-emergent adverse events and increase patient compliance, clinicians frequently prescribe a suboptimal starting dose of antidepressants, with the goal of increasing the dose once the patient has demonstrated tolerability. Objective: The aim of this study was to examine the tolerability and effectiveness associated with an initial week of duloxetine hydrochloride treatment at 30 mg QD and subsequent dose increase to 60 mg QD, compared with a starting dose of 60 mg QD. Methods: In this open-label study, all patients met the criteria for major depressive disorder (MDD) described in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision. Patients were required to wash out from previous antidepressant medications for 21 days, and were then randomized to receive duloxetine 30 or 60 mg QD for 1 week. After 1 week, patients receiving duloxetine 30 mg QD had their dose increased to 60 mg QD. Patients returned for assessments at weeks 2, 4, 6, 8, and 12. During the remainder of the 12-week study period, the duloxetine dose could be titrated based on the degree of response from 60 mg QD (minimum) to 120 mg QD (maximum), with 90 mg QD as an intermediate dose. Tolerability was assessed by means of discontinuation rates, spontaneously reported adverse events, changes in vital signs, and laboratory tests.
Effectiveness measures included the 17-item Hamilton Rating Scale for Depression (HAMD17) total score, HAMD17 core and Maier subscales, individual HAMD17 items, the Hamilton Rating Scale for Anxiety total score, and the Clinical Global Impression of Severity. Results: One hundred thirty-seven patients were enrolled (82 women, 55 men; mean age, 42 years; duloxetine 30 mg QD, 67 patients; duloxetine 60 mg QD, 70 patients). The rate of discontinuation due to adverse events did not differ significantly between patients starting duloxetine at 30 mg QD and 60 mg QD (13.4% vs 18.6%). The most frequently reported adverse events across both treatment groups were nausea, headache, dry mouth, insomnia, and diarrhea. In the first week of treatment, patients receiving duloxetine 30 mg QD had a significantly lower rate of nausea compared with patients receiving 60 mg QD (16.4% vs 32.9%; P = 0.03). Over the 12-week acute-treatment phase, patients starting duloxetine treatment at 30 mg QD had a significantly lower rate of nausea compared with patients initiating treatment at 60 mg QD (P = 0.047). Although between-group differences in the HAMD17 total score were not statistically significant at any visit, patients starting at 30 mg QD experienced significantly less improvement in HAMD17 core and Maier subscales at week 1 compared with patients starting at 60 mg QD (core, P= 0.044; Maier, P = 0.047). After 2 weeks of treatment, the magnitude of improvement among patients starting at 30 mg QD did not differ significantly from that observed in patients who started treatment at 60 mg QD, and there were no significant between-group differences in effectiveness at any subsequent visit. Conclusions: Results from this open-label study in patients with MDD suggest that starting duloxetine treatment at 30 mg QD for 1 week, followed by escalation to 60 mg QD, might reduce the risk for treatment-emergent nausea in these patients while producing only a transitory impact on effectiveness compared with a starting dose of 60 mg QD. abstract_id: PUBMED:26811681 Treatment discontinuation and tolerability as a function of dose and titration of duloxetine in the treatment of major depressive disorder. Purpose: We sought to better understand how dose and titration with duloxetine treatment may impact tolerability and treatment discontinuation in patients with major depressive disorder. Patients And Methods: We investigated Phase III duloxetine trials. Group 1 was a single placebo-controlled study with a 20 mg initial dose and a slow titration to 40 and 60 mg. Group 2 was a single study with a 40 mg initial dose and final "active" doses of 40 and 60 mg (5 mg control group), with 1-week titration. Group 3 consisted of eight placebo-controlled studies with starting doses of 40, 60, and 80 mg/day with minimal titration (final dose 40-120 mg/day). Tolerability was measured by rate of discontinuation due to adverse events (DCAE). Results: The DCAE in Group 1 were 3.6% in the 60 mg group, 3.3% in the 40 mg group, and 3.2% in the placebo group. In Group 2, the DCAE were 15.0% in the 60 mg group, 8.1% in the 40 mg group, and 4.9% in the 5 mg group. In Group 3, the DCAE were 9.7% and 4.2% in the duloxetine and placebo groups, respectively. Conclusion: This study suggests that starting dose and titration may have impacted tolerability and treatment discontinuation. A lower starting dose of duloxetine and slower titration may contribute to improving treatment tolerability for patients with major depressive disorder. 
abstract_id: PUBMED:22816871 Observational study on safety and tolerability of duloxetine in the treatment of female stress urinary incontinence in German routine practice. Aims: To evaluate the safety and tolerability of duloxetine during routine clinical care in women with stress urinary incontinence (SUI) in Germany, and in particular, to identify previously unrecognized safety issues such as uncommon adverse reactions, and the influence of confounding factors present in clinical practice on the safety profile of duloxetine. Methods: Office-based urologists, gynaecologists and primary care physicians were asked to document women newly started on treatment for moderate to severe symptoms of SUI. Six thousand eight hundred and fifty-four patients from urologist/gynaecologist practices and 5879 primary care patients were assessed. In a two-armed, observational study with parallel 12-week (urologists and gynaecologists) or 24-week (primary care physicians) design, patients were treated with duloxetine or other conservative treatment. The main outcome measure was the occurrence of adverse events (AEs). Results: Baseline characteristics differed slightly between patient groups and studies. Duloxetine doses in most patients were lower than recommended. Overall, AE frequency with duloxetine was lower than in controlled studies (15.9% (95% CI 14.9, 16.9) and 9.1% (95% CI 8.2, 10.0) in the 12- and 24-week treatment groups, respectively), but exhibited a similar qualitative spectrum. In the logistic regression models, the following factors were associated with greater AE risk: investigator specialization (gynaecologist vs. urologist and primary care physician), initial duloxetine dose (80 vs. 20 mg day(-1)) and use of any concomitant medication. Within the 24-week study, a positive screen for depressive disorder was surprisingly common, but no case of attempted suicide was reported in either study. Conclusions: Our results from German clinical practice show that women with SUI were often treated with duloxetine doses lower than recommended. This was associated with a low incidence of AEs. Suicide attempts were not reported. abstract_id: PUBMED:19930333 How are women with SUI-symptoms treated with duloxetine in real life practice? - preliminary results from a large observational study in Germany. Background: Duloxetine was found safe and effective in the treatment of moderate to severe female stress urinary incontinence (SUI) in controlled clinical trials; complementary data from routine clinical practice are still needed. Objectives: To explore the use of various initial duloxetine doses by physicians in the treatment of female SUI in routine clinical practice and its implications for drug safety and patients' subjective impression of effectiveness. Methods: Adult women treated with duloxetine for SUI symptoms were documented as part of an ongoing large-scale observational study in Germany. Data collected at baseline, after 4 and 12 weeks, were evaluated by initial doses. Statistics were descriptive; 95% confidence intervals were calculated for adverse event (AE) rates. Results: A total of 7888 adult women were treated with duloxetine; their mean age was 61.4 years, body mass index 27 kg/m(2), incontinence episode frequency (IEF) 14.0 per week. Previous SUI treatments were observed in 52.2%, comorbidities in 60.4% of the patients. A total of 90.7% reported reduced frequency of SUI-episodes, 12.1% any AE; nausea (5.7%) and vertigo (1.6%) were reported most frequently.
In all, 52.2% of patients were initiated on a duloxetine dose of 40 mg/day. Only minor differences in patient characteristics, effectiveness and tolerability were associated with varying initial duloxetine doses. Conclusions: Many women received lower duloxetine doses than expected based on evidence-based dosing recommendations. Although SUI patients in this study had a higher health risk because of old age and multiple comorbidities than in previous controlled clinical trials, AE rates were lower, possibly because of the observational character of the study and/or the use of rather low doses. Similar AE rates for varying initial doses possibly reflect sensible dose-adjustment to individual needs. abstract_id: PUBMED:16934840 Use of effect size to determine optimal dose of duloxetine in major depressive disorder. Objective: At effective doses, patients with major depressive disorder (MDD) treated with duloxetine have been found to experience significant symptom improvement as measured by HAMD(17) total score. In addition, duloxetine-treated patients have significantly higher remission and response rates compared with placebo. The objective of this analysis is to determine the optimal dose of duloxetine in MDD. Materials And Methods: Effect sizes for duloxetine 40 mg, 60 mg, 80 mg, and 120 mg per day were estimated using all 6 acute phase III clinical trials in patients with MDD. The tolerability of duloxetine 40 mg, 60 mg, 80 mg, and 120 mg was evaluated using pooled data from the 6 studies. The primary efficacy measure in all trials was the HAMD(17) total score, from which the effect sizes for HAMD(17) change scores, response rates (50% reduction from baseline to endpoint), and remission rates (HAMD(17) total score ≤7) were determined. Results: A total of 1619 randomized patients were included in these studies, of which 632 were treated with placebo; 177 with duloxetine 40 mg/day; 251 with 60 mg/day; 363 with 80 mg/day; and 196 with 120 mg/day. An evaluation of increments in effect size between doses consistently showed that the most notable gain in effect size for efficacy was in the 40-60 mg/day dosage range. All dosages from 60 to 120 mg were effective. The tolerability assessment indicated duloxetine at 40-120 mg/day is well tolerated. Furthermore, the initial doses of 40-80 mg/day were found to have comparable tolerability. Conclusions: The effect size analyses demonstrate that duloxetine 40 mg has minimum efficacy, and that duloxetine 60-120 mg/day is effective in the treatment of patients with MDD. An initial dose less than 60 mg/day might provide better tolerability for some patients diagnosed with MDD.
Thus, 32 studies of MDD, 11 studies of GAD, 19 studies of NP, 9 studies of FMS, and 14 studies of SUI demonstrated that the measured outcomes indicate the suitability of duloxetine in the treatment of these clinical conditions. This systematic review confirms that the dual mechanism of duloxetine benefits the treatment of comorbid clinical conditions, and supports the efficacy, safety, and tolerability of duloxetine in short- and long-term treatments. abstract_id: PUBMED:31793229 The Effect of Food on the Single-Dose Bioavailability and Tolerability of the Highest Marketed Strength of Duloxetine. Duloxetine is a combined serotonin and norepinephrine reuptake inhibitor indicated in adults for the treatment of major depressive disorder, diabetic peripheral neuropathic pain, and generalized anxiety disorder. The aim of these studies was to evaluate the effect of food on the pharmacokinetics and safety of duloxetine 60-mg gastroresistant hard capsules following single-dose administration. The data were obtained from 2 phase 1 bioequivalence studies, 1 in a fasting state and the other under fed conditions. Both studies have shown that, when administered as a single dose in the same prandial state, the test and reference duloxetine treatments were bioequivalent and exhibited similar safety profiles. The mean fed and fasting pharmacokinetic parameters and drug-related adverse events from the 2 studies were compared in order to assess the effect of food on the duloxetine bioavailability and respectively, tolerability. Administration of duloxetine in fed conditions increased peak plasma concentration by up to 30% and delayed mean time to peak concentration by an average of 1.15 hours while having an insignificant effect on extent of absorption (area under the plasma concentration-time curve in fed state within ±6% as compared with fasting conditions). Even though peak plasma levels were substantially higher in the fed state, there was no negative impact on the drug's safety profile. Actually, administration with food resulted in a lower average number of adverse events per single dose exposure. The negligible variation in overall systemic exposure suggests that efficacy remains unchanged irrespective of administration conditions; however, a better tolerability of the 60-mg dose is expected when the drug is taken with food. abstract_id: PUBMED:17355360 Tolerability and efficacy of duloxetine in a nontrial situation. Objective: To assess the tolerability and efficacy of duloxetine in a nontrial situation. Design: Prospective observational study. Setting: Urogynaecology Unit, District General Hospital, UK. Population: Two hundred and twenty-two women with a diagnosis of urodynamic stress incontinence (USI) or mixed USI and detrusor overactivity (DOA) took duloxetine for 4 weeks. Methods: The results of therapy were assessed with a Patient Global Impression of Improvement (PGI-I) questionnaire. One hundred and forty-eight (67%) women were initially treated with 40 mg twice a day, 67 (30%) women were treated with an escalating dose initially at 20 mg twice a day increasing to 40 mg twice a day after 2 weeks and seven (3%) women were started on a dose of 20 mg twice a day which they continued. Main Outcome Measures: Discontinuation rates and PGI-I scores. Results: Overall 146/222 (66%) women discontinued therapy due to adverse effects or lack of efficacy. Significantly more women starting on the 40 mg twice a day dose stopped due to adverse effects when compared with the escalating dose (P &lt; 0.025). 
Of the women who tolerated therapy, 80 out of 120 (67%) had a PGI-I score indicating an improvement. However, the overall rate of improvement was 37%. PGI-I scores and discontinuation rates were not significantly different between the group with USI and the group with mixed USI and DOA (P > 0.05). Conclusion: In a nontrial situation, duloxetine is poorly tolerated. Introducing an escalating dose may improve tolerability. A similar number of women with USI and mixed incontinence had a PGI-I score indicating improvement. abstract_id: PUBMED:16780958 Duloxetine in the treatment of major depressive disorder: comparisons of safety and tolerability in male and female patients. Background: While some studies have suggested sex differences in the efficacy of antidepressant medications, there have been few investigations into potential sex differences related to safety and/or tolerability. Pooled data from double-blind, placebo-controlled studies were utilized to assess the safety and tolerability of duloxetine in the treatment of major depressive disorder (MDD) in male and female patients. Methods: Safety data were pooled from seven double-blind, placebo-controlled clinical trials of duloxetine. Patients (aged ≥18 years) meeting DSM-IV criteria for MDD received duloxetine (40-120 mg/day, male: N=318, female: N=578) or placebo (male: N=242, female: N=484) for up to 9 weeks. Safety was assessed using discontinuation rates, spontaneously reported treatment-emergent adverse events, changes in vital signs and laboratory analyses. Results: Discontinuation rates due to adverse events among duloxetine-treated patients were 18.6% for males and 13.5% for females. The most common treatment-emergent adverse events in both male and female patients included nausea, headache, dry mouth, diarrhea and constipation. The only event occurring at significantly different rates in male and female patients was nausea (Breslow-Day p-value=0.008), and the significant difference was driven by a placebo nausea rate that was almost three times greater in females compared with males. No significant differential sex effects were found for pulse, blood pressure or weight. No laboratory analyte had an incidence of abnormally high or low values that differed significantly between male and female patients. Limitations: This was a post-hoc analysis of pooled data from acute phase clinical trials. Plasma concentrations of duloxetine were not obtained. Adverse event rates were based on spontaneous reports, and differential dose-response effects were not evaluated. Conclusions: No evidence of clinically meaningful sex differences in the safety and tolerability of duloxetine was uncovered. Answer: The question of whether to use a lower starting dose of duloxetine in clinical practice for patients with depression to improve tolerability has been explored in several studies. A post-hoc analysis of a multicenter, prospective, non-interventional study found that initiating treatment with 60 mg/day of duloxetine was not associated with poorer tolerability, suggesting that physicians may consider individual benefit/risk ratios and susceptibility to treatment-emergent adverse events (TEAEs) when deciding on the starting dose (PUBMED:22712514). Another study compared the tolerability and effectiveness of starting duloxetine at 30 mg QD for one week and then increasing to 60 mg QD, versus starting directly at 60 mg QD.
The study found that starting at 30 mg QD reduced the risk of treatment-emergent nausea while only having a transitory impact on effectiveness compared with starting at 60 mg QD (PUBMED:24678074). Further research indicated that a lower starting dose and slower titration of duloxetine may contribute to improving treatment tolerability for patients with major depressive disorder (PUBMED:26811681). Additionally, an observational study on the safety and tolerability of duloxetine in the treatment of female stress urinary incontinence in German routine practice found that women were often treated with doses lower than recommended, which was associated with a low incidence of adverse events (PUBMED:22816871). Moreover, preliminary results from a large observational study in Germany showed that many women received lower duloxetine doses than expected based on evidence-based dosing recommendations, and adverse event rates were lower, possibly due to the observational nature of the study and/or the use of lower doses (PUBMED:19930333). An analysis to determine the optimal dose of duloxetine in major depressive disorder suggested that an initial dose less than 60 mg/day might provide better tolerability for some patients diagnosed with MDD (PUBMED:16934840). In summary, the evidence suggests that starting duloxetine at a lower dose, such as 30 mg QD, and then titrating up to 60 mg QD may improve tolerability, particularly in reducing the risk of nausea, without significantly impacting the effectiveness of the treatment for depression. However, the decision to use a lower starting dose should be individualized based on the patient's risk of adverse events and the severity of their depression.
Instruction: Autonomic dysfunction: a link between depression and cardiovascular mortality? Abstracts: abstract_id: PUBMED:17311213 The role played by the autonomic nervous system in the relation between depression and cardiovascular disease Introduction: Findings from several epidemiological studies have revealed that major depression is associated with an increased risk of developing cardiovascular diseases (CVD) and presenting complications and new events in subjects with already-established CVD. The pathophysiological mechanisms responsible for this increased cardiovascular risk in major depression remain unclear. Development: The aim of this work is to review the literature on the possible pathophysiological mechanisms involved in the relation between major depression and CVD, with special emphasis on the studies dealing with cardiovascular autonomic dysfunction and heart rate variability. Likewise, recent hypotheses concerning the neural mechanisms underlying autonomic dysfunction in subjects with major depression are also discussed. Conclusions: The evidence that is currently available allows us to hypothesise that there are anomalies in the functioning of the central autonomic neural network in subjects with major depression, and more specifically in the hippocampus, prefrontal cortex and the brain stem nuclei. Such abnormalities, in association with lower central levels of serotonin give rise to a predominance of the sympathetic flow and a loss of cardiac vagal tone. The resulting cardiovascular autonomic dysfunction could be the main cause of the increased cardiovascular risk observed in major depression. In the future, studying the autonomic nervous system may be a useful tool in the development of new therapeutic strategies aimed at reducing cardiovascular morbidity and mortality in subjects with depression. abstract_id: PUBMED:18043302 Autonomic dysfunction: a link between depression and cardiovascular mortality? The FINE Study. Background: Depression is associated with an increased risk of cardiovascular diseases (CVD) in vascular patients as well as in the general population. We investigated whether autonomic dysfunction could explain this relationship. Design: The Finland, Italy and The Netherlands Elderly (FINE) Study is a prospective cohort study. Methods: Depressive symptoms were measured with the Zung Self-rating Depression Scale in 870 men, aged 70-90 years, free of CVD and diabetes in 1990. Resting heart rate was determined from a 15-30-s resting electrocardiogram in The Netherlands and Italy and as pulse rate in Finland. In addition, in The Netherlands, heart-rate variability (HRV) and QTc interval were determined. Results: At baseline, depressive symptoms were associated with an increase in resting heart rate, and nonsignificantly with low HRV and prolonged QTc interval. After 10 years of follow-up, 233 (27%) men died from CVD. Prospectively, an increase in resting heart rate with 1 SD was associated with an increased risk of cardiovascular mortality [hazard ratio (HR), 1.22; 95% confidence interval (CI), 1.08-1.38]. In addition, low HRV (HR, 0.78; 95% CI, 0.61-1.01) and prolonged QTc interval (HR, 1.28; 95% CI, 1.06-1.53) per SD were associated with cardiovascular mortality. The increased risk of depressive symptoms for cardiovascular mortality (HR, 1.38; 95% CI, 1.21-1.58) did not change after adjustments for several indicators of autonomic dysfunction. 
Conclusion: This study suggests that mild depressive symptoms are associated with autonomic dysfunction in elderly men. The increased risk of cardiovascular mortality with increasing magnitude of depressive symptoms could not, however, be explained by autonomic dysfunction. abstract_id: PUBMED:20639389 Autonomic nervous system dysfunction and inflammation contribute to the increased cardiovascular mortality risk associated with depression. Objective: To investigate prospectively whether autonomic nervous system (ANS) dysfunction and inflammation play a role in the increased cardiovascular disease (CVD)-related mortality risk associated with depression. Methods: Participants in the Cardiovascular Health Study (n = 907; mean age, 71.3 ± 4.6 years; 59.1% women) were evaluated for ANS indices derived from heart rate variability (HRV) analysis (frequency and time domain HRV, and nonlinear indices, including detrended fluctuation analysis (DFA(1)) and heart rate turbulence). Inflammation markers included C-reactive protein, interleukin-6, fibrinogen, and white blood cell count. Depressive symptoms were assessed using the 10-item Centers for Epidemiological Studies Depression scale. Cox proportional hazards models were used to investigate the mortality risk associated with depression, ANS, and inflammation markers, adjusting for demographic and clinical covariates. Results: Depression was associated with ANS dysfunction (DFA(1), p = .018) and increased inflammation markers (white blood cell count, p = .012; fibrinogen, p = .043), adjusting for covariates. CVD-related mortality occurred in 121 participants during a median follow-up of 13.3 years. Depression was associated with an increased CVD mortality risk (hazard ratio, 1.88; 95% confidence interval, 1.23-2.86). Multivariable analyses showed that depression was an independent predictor of CVD mortality (hazard ratio, 1.72; 95% confidence interval, 1.05-2.83) when adjusting for independent HRV and inflammation predictors (DFA(1), heart rate turbulence, interleukin-6), attenuating the depression-CVD mortality association by 12.7% (p < .001). Conclusion: Autonomic dysfunction and inflammation contribute to the increased cardiovascular mortality risk associated with depression, but a large portion of the predictive value of depression remains unexplained by these neuroimmunological measures. abstract_id: PUBMED:11959673 Cardiovascular alterations and autonomic imbalance in an experimental model of depression. Depressed patients with and without a history of cardiovascular pathology display signs, such as elevated heart rate, decreased heart rate variability, and increased physiological reactivity to environmental stressors, which may indicate a predisposition to cardiovascular disease. The specific physiological mechanisms associating depression with such altered cardiovascular parameters are presently unclear. The current study investigated cardiovascular regulation in the chronic mild stress rodent model of depression and examined the specific autonomic nervous system mechanisms underlying the responses. Sprague-Dawley rats exposed to a series of mild, unpredictable stressors over 4 wk displayed anhedonia (an essential feature of human depression), along with elevated resting heart rate, decreased heart rate variability, and exaggerated pressor and heart rate responses to air jet stress.
Results obtained from experiments studying autonomic blockade suggest that cardiovascular alterations in the chronic mild stress model are mediated by elevated sympathetic tone to the heart. The present findings have implications for the study of pathophysiological links between affective disorders and cardiovascular disease. abstract_id: PUBMED:32598983 Depression and cardiovascular autonomic control: a matter of vagus and sex paradox. Depression is a well-established stress-related risk factor for several diseases, mainly for those with cardiovascular outcomes. The mechanisms that link depression disorders with cardiovascular diseases (CVD) include dysfunctions of the autonomic nervous system. Heart rate variability analysis is a widely-used non-invasive method that can simultaneously quantify the activity of the two branches of cardiac autonomic neural control and provide insights about their pathophysiological alterations. Recent scientific literature suggests that sex influences the relationship between depressive symptoms and cardiac autonomic dysfunction. Moreover, a few studies highlight a possible sex paradox: depressed women, despite a greater vagal tone, experience a higher risk of adverse cardiovascular events than depressed men. Although there are striking sex differences in the incidence of depression, scanty data on this topic are available. Lastly, studies on the heart-brain axis bidirectionality and the role of sex are fundamental not only to clarify the biological bases of depression-CVD comorbidity, but also to develop alternative therapies, where vagus nerve appears to be a promising target of non-invasive neuromodulation techniques. abstract_id: PUBMED:12427117 Cardiovascular autonomic dysfunction in uremia. Cardiovascular morbidity and mortality is common in chronic renal failure patients, and may be explained in part by abnormalities in cardiovascular autonomic regulation. This review discusses the results of cardiovascular autonomic function studies in chronic renal failure patients. While covering most methods of assessing autonomic function, we focus particularly on power spectral analysis methods. These newer techniques are non-invasive, reproducible, and allow the rapid assessment of the integrity of cardiovascular autonomic reflexes at the bedside. The abnormalities of parasympathetic, sympathetic and cardiac baroreceptor function seen in dialysis-dependent patients are highlighted, and their significance in intra-dialytic hypotension and cardiovascular mortality as well as the effects of dialysis and transplantation on these parameters examined. Importantly, studies of cardiovascular autonomic dysfunction in pre-dialysis chronic renal failure patients, when abnormalities may be amenable to intervention to prevent progression and premature cardiovascular morbidity and mortality, are reviewed. abstract_id: PUBMED:26183482 Impact of combined exercise training on cardiovascular autonomic control and mortality in diabetic ovariectomized rats. The purpose of this study was to compare the effects of aerobic, resistance, or combined exercise training on cardiovascular autonomic control and mortality in diabetic ovariectomized rats. Female Wistar rats were divided into one of five groups: euglycemic sedentary (ES), diabetic ovariectomized sedentary (DOS), diabetic ovariectomized aerobic-trained (DOTA), diabetic ovariectomized resistance-trained (DOTR), or diabetic ovariectomized aerobic+resistance-trained (DOTC). 
Arterial pressure (AP) was directly recorded and baroreflex sensitivity was evaluated by heart rate responses to AP changes. Cardiovascular autonomic modulation was evaluated by spectral analyses. No differences were observed in body weight and glycemia between the diabetic groups. Animals in the DOTC and DOTA groups exhibited an increase in running time, whereas animals in the DOTC and DOTR groups showed greater strength. Trained groups exhibited improvement in total power and the high-frequency band of pulse interval and reduced mortality (vs. DOS). Animals in the DOTC (bradycardic and tachycardic responses) and DOTA (tachycardic responses) groups exhibited attenuation of the baroreflex dysfunction that was observed in DOS and DOTR animals, and an improvement in AP variance. In conclusion, all training protocols led to reduced mortality, which may be due to an increase in physical capacity and to cardiovascular and autonomic benefits following training, regardless of any improvement in glycemic control. In this model, the aerobic and combined training protocols seem to promote additional cardiovascular autonomic benefits when compared with resistance training alone. abstract_id: PUBMED:27544842 Cardiovascular autonomic neuropathy in patients with diabetes mellitus. Cardiovascular autonomic neuropathy associated with diabetes mellitus is caused by an impairment of the autonomic system. The prevalence of this condition ranges from 20% to 65%, depending on the duration of the diabetes mellitus. Clinically, the autonomic function disorder is associated with resting tachycardia, exercise intolerance, orthostatic hypotension, intraoperative cardiovascular instability, silent myocardial ischemia and increased mortality. For the diagnosis, the integrity of the parasympathetic and sympathetic nervous system is assessed. Parasympathetic activity is examined by measuring heart rate variability in response to deep breathing, standing and the Valsalva manoeuvre. Sympathetic integrity is examined by measuring blood pressure in response to standing and isometric exercise. The treatment includes the metabolic control of diabetes mellitus and of the cardiovascular risk factors. Treating symptoms such as orthostatic hypotension requires special attention. abstract_id: PUBMED:30968781 The association of late-life depression with all-cause and cardiovascular mortality among community-dwelling older adults: systematic review and meta-analysis. Background: Late-life depression has become an important public health problem. Available evidence suggests that late-life depression is associated with all-cause and cardiovascular mortality among older adults living in the community, although the associations have not been comprehensively reviewed and quantified. Aim: To estimate the pooled association of late-life depression with all-cause and cardiovascular mortality among community-dwelling older adults. Method: We conducted a systematic review and meta-analysis of prospective cohort studies that examine the associations of late-life depression with all-cause and cardiovascular mortality in community settings. Results: A total of 61 prospective cohort studies from 53 cohorts with 198,589 participants were included in the systematic review and meta-analysis. A total of 49 cohorts reported all-cause mortality and 15 cohorts reported cardiovascular mortality. Late-life depression was associated with increased risk of all-cause (risk ratio 1.34; 95% CI 1.27, 1.42) and cardiovascular mortality (risk ratio 1.31; 95% CI 1.20, 1.43).
There was heterogeneity in results across studies, and the magnitude of associations differed by age, gender, study location, follow-up duration and methods used to assess depression. The associations existed in different subgroups by age, gender, regions of studies, follow-up periods and assessment methods of late-life depression. Conclusion: Late-life depression is associated with higher risk of both all-cause and cardiovascular mortality among community-dwelling elderly people. Future studies need to test the effectiveness of preventing depression among older adults as a way of reducing mortality in this population. Optimal treatment of late-life depression and its impact on mortality require further investigation. Declaration of interest: None. abstract_id: PUBMED:23416033 Central autonomic network mediates cardiovascular responses to acute inflammation: relevance to increased cardiovascular risk in depression? Inflammation is a risk factor for both depression and cardiovascular disease. Depressed mood is also a cardiovascular risk factor. To date, research into mechanisms through which inflammation impacts cardiovascular health rarely takes into account central effects on autonomic cardiovascular control, instead emphasizing direct effects of peripheral inflammatory responses on endothelial reactivity and myocardial function. However, brain responses to inflammation engage neural systems for motivational and homeostatic control and are expressed through depressed mood state and changes in autonomic cardiovascular regulation. Here we combined an inflammatory challenge, known to evoke an acute reduction in mood, with neuroimaging to identify the functional brain substrates underlying potentially detrimental changes in autonomic cardiovascular control. We first demonstrated that alterations in the balance of low to high frequency (LF/HF) changes in heart rate variability (a measure of baroreflex sensitivity) could account for some of the inflammation-evoked changes in diastolic blood pressure, indicating a central (rather than solely local endothelial) origin. Accompanying alterations in regional brain metabolism (measured using (18)FDG-PET) were analysed to localise central mechanisms of inflammation-induced changes in cardiovascular state: three discrete regions previously implicated in stressor-evoked blood pressure reactivity, the dorsal anterior and posterior cingulate and pons, strongly mediated the relationship between inflammation and blood pressure. Moreover, activity changes within each region predicted the inflammation-induced shift in LF/HF balance. These data are consistent with a centrally-driven component originating within brain areas supporting stressor-evoked blood pressure reactivity. Together our findings highlight mechanisms binding psychological and physiological well-being and their perturbation by peripheral inflammation. Answer: The literature suggests that there is a connection between autonomic dysfunction, depression, and cardiovascular mortality. Major depression is associated with an increased risk of developing cardiovascular diseases (CVD) and complications in subjects with established CVD. The pathophysiological mechanisms responsible for this increased cardiovascular risk in major depression are not entirely clear, but cardiovascular autonomic dysfunction is hypothesized to be a significant contributor.
Anomalies in the functioning of the central autonomic neural network, particularly in the hippocampus, prefrontal cortex, and brain stem nuclei, along with lower central levels of serotonin, may lead to a predominance of sympathetic flow and a loss of cardiac vagal tone, which could be the main cause of the increased cardiovascular risk observed in major depression (PUBMED:17311213). Furthermore, studies have shown that depressive symptoms are associated with autonomic dysfunction, such as an increase in resting heart rate and low heart rate variability (HRV), and these symptoms could not be fully explained by autonomic dysfunction alone (PUBMED:18043302). Autonomic nervous system (ANS) dysfunction and inflammation have been found to contribute to the increased cardiovascular mortality risk associated with depression, but a large portion of the predictive value of depression remains unexplained by these neuroimmunological measures (PUBMED:20639389). In experimental models, cardiovascular alterations in depression have been mediated by elevated sympathetic tone to the heart, suggesting a link between affective disorders and cardiovascular disease (PUBMED:11959673). Additionally, sex differences have been observed in the relationship between depressive symptoms and cardiac autonomic dysfunction, with depressed women experiencing a higher risk of adverse cardiovascular events than depressed men, despite a greater vagal tone (PUBMED:32598983). Overall, the evidence indicates that autonomic dysfunction is a significant link between depression and cardiovascular mortality, although it does not fully account for the increased risk, suggesting that other factors may also play a role.
Instruction: Testicular microlithiasis: is there a need for surveillance in the absence of other risk factors? Abstracts: abstract_id: PUBMED:22710430 Testicular microlithiasis: is there a need for surveillance in the absence of other risk factors? Objective: Ultrasound surveillance of patients with testicular microlithiasis (TM) has been advocated following the reported association with testicular cancer. The aim of this study was to assess the evidence base supporting such surveillance. Methods: Formal literature review identified cohort studies comprising at least 15 patients followed up for at least 24 months. Combining an institutional audit with the identified studies in a pooled analysis, the incidence of new cancers during the surveillance period was evaluated. Results: Literature review identified eight studies. Our institutional audit comprised 2,656 men referred for scrotal ultrasound. Fifty-one men (1.92%) with TM were identified, none of whom developed testicular cancer (mean follow-up: 33.3 months). In a combined population of 389 men, testicular cancer developed in 4. Excluding 3 who had additional risk factors, only 1 of 386 developed testicular cancer during follow-up (95% CI 0.05-1.45%). Conclusions: Ultrasound surveillance is unlikely to benefit patients with TM in the absence of other risk factors. In the presence of additional risk factors (previous testicular cancer, a history of maldescent or testicular atrophy), patients are likely to be under surveillance; nonetheless, monthly self-examination should be encouraged, and open access to ultrasound and formal annual surveillance should be offered. Key Points: • The literature reports a high association between testicular microlithiasis and testicular cancer. • Our study and meta-analysis suggest no causal link between microlithiasis and cancer. • In the absence of additional risk factors, surveillance is not advocated. • In the presence of additional risk factors, surveillance is recommended. • Such surveillance is primarily aimed at engaging patients in regular follow-up. abstract_id: PUBMED:31588363 Association between risk factors and testicular microlithiasis. Background: Testicular microlithiasis and its clinical significance are not fully understood. Testicular microlithiasis and risk factors have been associated with testicular cancer. The role of testicular microlithiasis is investigated here. Purpose: To investigate the association between testicular microlithiasis and socioeconomic and other pre-diagnostic factors. Material And Methods: All men who had a scrotal ultrasound examination at the Department of Radiology, Vejle Hospital, during 2001-2013 were included. They were categorized as patients with and without testicular microlithiasis and compared with pre-diagnostic data from a nationwide registry. A total of 2404 men (283 [11.8%] with testicular microlithiasis and 2121 [88.2%] without testicular microlithiasis) were included. The association between testicular microlithiasis and pre-diagnostic conditions was investigated with logistic regression. Results: Overall, we found no statistically significant differences in demographics, socioeconomic characteristics, or testicular diseases in men with and without testicular microlithiasis. Men with testicular microlithiasis had more often been treated for infertility (odds ratio [OR] 2.09, 95% confidence interval [CI] 0.84-5.24) and testicular torsion (OR 1.58, 95% CI 0.34-7.36) compared to men without testicular microlithiasis.
We found no association between sexually transmitted diseases and testicular microlithiasis. Conclusion: Treatment for infertility and torsion was non-significantly associated with testicular microlithiasis, and no other association was found. These data do not suggest early exposure is related to testicular microlithiasis. abstract_id: PUBMED:29843681 Prevalence and risk factors of testicular microlithiasis in patients with hypospadias: a retrospective study. Background: It has been described that the incidence of testicular microlithiasis is high in several congenital disorders which may be associated with testicular impairment and infertility. Several reports have shown that a prepubertal or pubertal hormonal abnormality in the pituitary-gonadal axis was identified in some patients with hypospadias, which is one of the most common disorders of sex development. However, the exact prevalence and risk factors of testicular microlithiasis in patients with hypospadias have not been reported so far. In the present study, to clarify the prevalence and risk factors of testicular microlithiasis in patients with hypospadias, a retrospective chart review was performed. Methods: Children with hypospadias who underwent testicular ultrasonography between January 2010 and April 2016 were enrolled in the present study. Severity of hypospadias was divided into mild and severe. The prevalence and risk factors of testicular microlithiasis or classic testicular microlithiasis were examined. Results: Of 121 children, mild and severe hypospadias were identified in 66 and 55, respectively. Sixteen children had undescended testis. Median age at ultrasonography evaluation was 1.7 years. Testicular microlithiasis and classic testicular microlithiasis were documented in 17 children (14.0%) and 8 (6.6%), respectively. Logistic regression analysis revealed that the presence of undescended testis was the only significant factor for testicular microlithiasis and classic testicular microlithiasis. The prevalence of testicular microlithiasis or classic testicular microlithiasis was significantly higher in children with undescended testis compared to those without undescended testis (testicular microlithiasis: 43.8% versus 9.5%, p = 0.002; classic testicular microlithiasis: 37.5% versus 1.9%, p < 0.001). Conclusions: The current study demonstrated that the presence of undescended testis was the only significant risk factor for testicular microlithiasis or classic testicular microlithiasis in patients with hypospadias. As co-existing undescended testis has been reported as a risk factor for testicular dysfunction among patients with hypospadias, the current findings suggest that testicular microlithiasis in children with hypospadias may be associated with impaired testicular function. Conversely, patients with isolated hypospadias seem to have lower risks for testicular impairment. Further investigation with longer follow-up will be needed to clarify these findings. abstract_id: PUBMED:22258667 Urolithiasis in infants: evaluation of risk factors. Objective: Urolithiasis in infants is not a very rare situation in Turkey, and the incidence has been increasing in recent years. The purpose of this paper was to investigate the clinical characteristics, metabolic and anatomic risk factors for urolithiasis and microlithiasis in infants. Methods: The cases of 178 infants (63 girls, 115 boys) who were referred to our department between 1999 and 2009 with urolithiasis were evaluated.
Results: The mean age at diagnosis of stone disease was 11.5 months (range, 10 days-24 months). The mean follow-up duration was 33.6 months (1.2-110 months). The major clinical symptoms of our patients were restlessness in 24 children (13.5%) and vomiting in 23 (13%). Thirty-five infants (19.7%) had a urinary tract abnormality; vesico-ureteral reflux was the most common abnormality (12.9%). Hypercalciuria and hyperuricosuria were detected in 46 and 56%, respectively. Stone analysis was performed in 56 infants, and calcium oxalate was determined in 36 patients (64.3%). A family history of urolithiasis, presenting symptoms and underlying metabolic abnormalities were similar for patients with microlithiasis and those with larger stones. However, infants with microlithiasis had higher ratios for history of vitamin D administration and feeding with formula. Surgical treatment was performed in 42 infants and extracorporeal shock wave lithotripsy in 30 infants. Conclusion: Our results showed that urolithiasis in infants may present nonspecific symptoms and may even be asymptomatic and that a positive family history for urolithiasis, urologic abnormalities, metabolic disorders, urinary tract infections, vitamin D administration and feeding with formula may increase the occurrence of urolithiasis in infants. abstract_id: PUBMED:31853884 Incidence characteristics of testicular microlithiasis and its association with risk of primary testicular tumors in children: a systematic review and meta-analysis. Background: To systematically evaluate the incidence characteristics of testicular microlithiasis (TM) in children and its association with primary testicular tumors (PTT). Methods: A systematic review and meta-analysis were conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) statement. A priori protocol was registered in the PROSPERO database (CRD42018111119), and a literature search of all relevant studies published until February 2019 was performed. Prospective, retrospective cohort, or cross-sectional studies containing ultrasonography (US) data on the incidence of TM or the association between TM and PTT were eligible for inclusion. Results: Of the 102 identified articles, 18 studies involving 58,195 children were included in the final analysis. The overall incidence of TM in children with additional risk factors for PTT was 2.7%. In children, the proportion of left TM in unilateral cases was 55.7%, the frequency of bilateral TM was 69.0%, and proportion of classic TM was 71.8% [95% confidence interval (CI) 62.4-81.1%, P = 0.0, I2 = 0.0%]. About 93.5% of TM remained unchanged, and newly detected PTT rate was very low (4/296) during follow-up. The overall risk ratio of TM in children with a concurrent diagnosis of PTT was 15.46 (95% CI 6.93-34.47, P &lt; 0.00001). Conclusions: The incidence of TM in children is highly variable. Nonetheless, TM is usually bilateral, of the classic type, and remains stable or unchanged at follow-up. Pediatric patients with TM and contributing factors for PTT have an increased risk for PTT; however, there is no evidence to support mandatory US surveillance of children with TM. abstract_id: PUBMED:18362304 Risk factors for post-treatment hypogonadism in testicular cancer patients. Objectives: Testicular germ-cell cancer (TGCC) patients are at risk of developing hypogonadism but no risk factors have yet been defined. 
Methods: Blood was collected from 143 TGCC patients (after orchidectomy, prior to further therapy (T0) and 6, 12, 24, 36 and 60 months (T6, T12, T24, T36 and T60) after therapy). Biological hypogonadism (BH) was defined as: serum testosterone below 10 nmol/l and/or LH &gt;10 IU/l; odds ratios (ORs) for BH with BH at T0, age, stage of disease, testicular characteristics, and androgen receptor polymorphism as predictors were calculated as well as the OR for developing BH post-treatment (one to two cycles of adjuvant chemotherapy (ACT) versus three to four cycles of higher dose chemotherapy (HCT) versus adjuvant radiotherapy (RT)). Results: HCT increased the OR for BH at T6 (OR 22, 95% confidence interval (CI) 4.4-118) and T12 (OR 5.8, 95% CI 1.5-22). RT increased the OR at T6 (OR 10, 95% CI 2.1-47) and at T12 (OR 3.9, 95% CI 1.1-14). Microlithiasis predicted BH at T0 (OR 11, 95% CI 1.2-112), T12 (OR 3.9, 95% CI 1.1-13), T24 (OR 3.0, 95% CI 1.0-8.8), T36 (OR 5.4, 95% CI 1.7-17) and T60 (OR 4.4, 95% CI 1.2-16). BH at T0 was a risk for BH at T6 (OR 53, 95% CI 19-145), T12 (OR 125, 95% CI 37-430), T24 (OR 88, 95% CI 26-300) and T36 (OR 121, 95% CI 32-460). Conclusions: It is clinically relevant that BH at T0 and testicular microlithiasis were predictive factors for post-treatment BH. HCT and RT gave temporary BH. abstract_id: PUBMED:24348387 Incidental discovery of testicular microlithiasis: what is the importance of ultrasound surveillance? Two case reports. Many studies have demonstrated an association between diffuse bilateral testicular microlithiasis (TM) and gonadal and extragonadal germ cell tumors. Nevertheless, it is still uncertain whether ultrasound surveillance is really necessary in patients with TM in the absence of other risk factors such as previous testicular cancer, a history of cryptorchidism or testicular atrophy. We report the cases of a 33- and a 39-year-old man presenting with a retroperitoneal extragonadal tumor. The first patient underwent an MRI examination in order to rule out a lumbosacral hernia: MRI images showed no slipped disks but a voluminous retroperitoneal solid mass. The histological analysis revealed an immature teratoma. The second patient came to the emergency department complaining of abdominal pain, vomiting, weight loss and mild jaundice: ultrasound examination showed a large, ill-defined heterogeneous abdominal mass, confirmed by CT and MRI examination. The histology diagnosed a yolk sac tumor. In both patients, the testicular sonography was performed to rule out a focal lesion, but it displayed bilateral TM without a focal testicular mass. Based on our direct experience, we highlight the importance of annual ultrasonographic surveillance of the testis and the retroperitoneal space in patients with occasionally detected TM. abstract_id: PUBMED:16539727 Surveillance of testicular microlithiasis? Results of an UK based national questionnaire survey. Background: The association of testicular microlithiasis with testicular tumour and the need for follow-up remain largely unclear. Methods: We conducted a national questionnaire survey involving consultant BAUS members (BAUS is the official national organisation (like the AUA in USA) of the practising urologists in the UK and Ireland), to provide a snapshot of current attitudes towards investigation and surveillance of patients with testicular microlithiasis. Results: Of the 464 questionnaires sent to the BAUS membership, 263(57%) were returned. 
251 returns (12 were incomplete) were analysed, of whom 173(69%) do and 78(31%) do not follow up testicular microlithiasis. Of the 173 who do follow up, 119(69%) follow up all patients while 54(31%) follow up only a selected group of patients. 172 of 173 use ultrasound scan while 27(16%) check tumour markers. 10(6%) arrange ultrasound scan every six months, 151(88%) annually while 10(6%) at longer intervals. 66(38%) intend to follow up these patients for life, while 80(47%) until 55 years of age and 26(15%) for up to 5 years. 173(68.9%) believe testicular microlithiasis is associated with CIS in < 1%, 53(21%) think it is between 1 and 10% while 7(3%) believe it is > 10%. 109(43%) believe those patients who develop a tumour will have survival benefit with follow-up while 142(57%) do not. Interestingly, 66(38%) who follow up these patients do not think there is a survival benefit. Conclusion: There is significant variability in how patients with testicular microlithiasis are followed up. However, a majority of consultant urologists nationally believe surveillance of this patient group confers no survival benefit. There is a clear need to clarify this issue in order to recommend a coherent surveillance policy. abstract_id: PUBMED:25316054 Testicular microlithiasis imaging and follow-up: guidelines of the ESUR scrotal imaging subcommittee. Objectives: The subcommittee on scrotal imaging, appointed by the board of the European Society of Urogenital Radiology (ESUR), has produced guidelines on imaging and follow-up in testicular microlithiasis (TML). Methods: The authors and a superintendent university librarian independently performed a computer-assisted literature search of medical databases: MEDLINE and EMBASE. A further parallel literature search was made for the genetic conditions Klinefelter's syndrome and McCune-Albright syndrome. Results: Proposed guidelines are: follow-up is not advised in patients with isolated TML in the absence of risk factors (see Key Points below); annual ultrasound (US) is advised for patients with risk factors, up to the age of 55; if TML is found with a testicular mass, urgent referral to a specialist centre is advised. Conclusion: Consensus opinion of the scrotal subcommittee of the ESUR is that the presence of TML alone in the absence of other risk factors is not an indication for regular scrotal US, further US screening or biopsy. US is recommended in the follow-up of patients at risk, where risk factors other than microlithiasis are present. Risk factors are discussed and the literature and recommended guidelines are presented in this article. Key Points: • Follow up advised only in patients with TML and additional risk factors. • Annual US advised for patients with risk factors up to age 55. • If TML is found with testicular mass, urgent specialist referral advised. • Risk factors - personal/family history of GCT, maldescent, orchidopexy, testicular atrophy. abstract_id: PUBMED:11090407 Testicular microlithiasis: prevalence and tumor risk in a population referred for scrotal sonography. Objective: Considerable accrued evidence points to an association between testicular microlithiasis, intratubular germ cell neoplasia, and testicular tumor. This study assesses both the prevalence of testicular microlithiasis revealed on sonography in a referred population and the concurrent tumor risk.
Materials And Methods: Over a 32-month period (April 1996 through November 1998), 4892 scrotal sonographic examinations were performed in 4819 patients at four referral centers. All patients underwent high-resolution (7- to 10-MHz) imaging. Using a computerized word search (n = 4102; testicular microlithiasis, calcification, microliths, calcific foci, tumor, neoplasm, cancer, hyperecho, hypoecho, heterogen, and carcinoma) and manual retrieval (n = 790), cases of tumor, testicular microlithiasis (&gt;5 microliths per sonogram), and testicular microlithiasis plus tumor were pulled and retrospectively reviewed. The presence and type of tumor were confirmed at histology after orchidectomy. Results: Fifty-four tumors were found among 4892 scrotal sonograms (28 seminomas, 14 teratomas, 8 mixed germ cell tumors, 2 Leydig cell tumors, and 2 non-Hodgkin's lymphomas). Testicular microlithiasis was present in 33 patients, giving a prevalence of 0.68%. Concurrent tumor and testicular microlithiasis were detected in seven patients, a relative risk of tumor in testicular microlithiasis was 21.6-fold (95% confidence limits: 10. 6-fold, 44.2-fold). In one patient with testicular microlithiasis, a previous orchidectomy for mixed germ cell tumor had been performed (not included in the relative risk calculation). Conclusion: In a referred population of 4819 patients the prevalence of testicular microlithiasis was 0.68% and the relative risk of concurrent tumor was 21.6-fold. Sonographic surveillance of testicular microlithiasis cases for tumor is mandatory. Answer: The need for surveillance of testicular microlithiasis (TM) in the absence of other risk factors is a subject of debate. A study by Richenberg et al. suggests that ultrasound surveillance is unlikely to benefit patients with TM if they do not have additional risk factors such as previous testicular cancer, a history of maldescent, or testicular atrophy. In their study, only 1 of 386 men developed testicular cancer during follow-up when these additional risk factors were excluded (PUBMED:22710430). This suggests that in the absence of other risk factors, the likelihood of developing testicular cancer is low, and routine surveillance may not be necessary. Furthermore, a study by Pedersen et al. found no statistically significant differences in demographics, socioeconomic characteristics, or testicular diseases in men with and without TM. They did find a non-significant association between treatment for infertility and torsion with TM, but no other associations were found, suggesting that early exposure is not related to TM (PUBMED:31588363). Additionally, the European Society of Urogenital Radiology (ESUR) scrotal imaging subcommittee guidelines recommend that follow-up is not advised in patients with isolated TM in the absence of risk factors. They advise annual ultrasound for patients with risk factors up to the age of 55, and if TM is found with a testicular mass, urgent referral to a specialist is advised (PUBMED:25316054). In summary, the current evidence and guidelines suggest that surveillance for TM in the absence of other risk factors is not generally advocated. However, in the presence of additional risk factors, surveillance is recommended, and such surveillance is primarily aimed at engaging patients in regular follow-up (PUBMED:22710430; PUBMED:25316054).
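Note on the quoted risk estimate: the 21.6-fold relative risk in PUBMED:11090407 can be reproduced with a short, self-contained calculation. The sketch below is illustrative only; the 2x2 counts (7 tumours among 33 men with testicular microlithiasis versus 47 among the remaining 4,786 referred patients) are read from the abstract, and the log-based (Katz) confidence interval is an assumed method, since the abstract does not state how its limits were computed.

from math import exp, sqrt

# Counts read from PUBMED:11090407 (4,819 referred patients):
# 7 of 33 men with testicular microlithiasis (TM) had a concurrent tumour,
# versus 47 of the remaining 4,786 men without TM.
a, n_tm = 7, 33          # tumours / patients with TM
c, n_other = 47, 4786    # tumours / patients without TM

rr = (a / n_tm) / (c / n_other)

# Log-based (Katz) 95% confidence interval; the abstract does not state which method it used.
se_log_rr = sqrt(1 / a - 1 / n_tm + 1 / c - 1 / n_other)
lower = rr * exp(-1.96 * se_log_rr)
upper = rr * exp(1.96 * se_log_rr)

print(f"relative risk = {rr:.1f} (95% CI {lower:.1f}-{upper:.1f})")
# Prints "relative risk = 21.6 (95% CI 10.6-44.2)", matching the figures reported in the abstract.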
Instruction: Day surgery: are we transferring the burden of care? Abstracts: abstract_id: PUBMED:36442254 Burden of Care for Patients With In-Transit Melanoma. Introduction: Patient burden of cancer care can be significant, especially for cancers like melanoma where patients are living longer, even with advanced disease. The purpose of this study is to compare the burden of treatment of melanoma patients with in-transit metastases (ITM). There are multiple treatment options for ITM, but no standard due to lack of large cohort comparative studies; thus, the anticipated burden of care may influence therapy choice. Methods: Included patients had in-transit melanoma without distant metastasis and were managed at our institution from July 1, 2015 through December 31, 2020 using a combination of surgery, systemic, intralesional, and radiation therapy. We compared treatment burden, (number of treatments, clinic visits, inpatient hospital days, and distance traveled) and response rates using Kruskal-Wallis and chi-squared tests. Recurrence-free survival and estimated charges were exploratory endpoints. Results: There were 42 patients who met the inclusion criteria. As initial treatment, patients had surgery (n = 20), surgery with adjuvant (n = 6), systemic (n = 9), and intralesional therapy (n = 2). Surgery had the lowest treatment burden (median of 1 treatment, 3 clinic visits, and 0 inpatient days) while surgery with adjuvant systemic therapy had the highest burden (median of 13 treatments, 12 clinic visits, and 0 inpatient days). Systemic, intralesional, and radiation therapy were used more often for recurrent ITM. Travel distance (P = 0.88) and response rates did not statistically differ between the four options for first line therapy (P = 0.99). At a median follow-up time of 8.8 mo, 22 (52%) of the cohort required more than 1 therapy to manage recurrent or progressive disease and 14 (33%) progressed to distant disease. Conclusions: Treatment of in-transit melanoma is associated with high burden of care and often requires multiple therapies, even with maximally effective first treatment choice. Factors evaluated in this study may be used to set expectations of treatment course for newly diagnosed patients and may aid in patients' decisions on therapy selection. abstract_id: PUBMED:38477303 The effect of caregivers' care burden and psychological resilience on the psychosocial adjustment of patients with open heart surgery in Turkey. This cross-sectional study investigated the effect of caregivers' care (n = 100) burden and psychological resilience on the psychosocial adjustment of patients (n = 100) with open heart surgery. Patients had poor psychosocial adjustment. Caregivers who felt incompetent in providing care had a higher care burden and a lower psychological resilience than those who did not. In addition, patients whose caregivers had higher resilience and lower burden of care had better psychosocial adjustment. The results of this study compellingly demonstrate the importance and necessity of supportive and preventive clinical social work interventions to enhance patients' adaptation to a new lifestyle and compliance with treatment during the cardiac rehabilitation process, and reduce the burden on caregivers. abstract_id: PUBMED:27713900 The Burden of Care: Mothers' Experiences of Children with Congenital Heart Disease. Background: Mothers play a key role in caring for their sick children. 
Their experiences of care were influenced by culture, rules, and the system of health and care services. There are few studies on maternal care of children with congenital heart disease. Also, each of them has studied a particular aspect of care. The present research aimed to understand care experiences of mothers of children with congenital heart disease. Methods: A conventional content analysis was used to obtain rich data. The goal of content analysis is "to provide knowledge and deeper understanding of the phenomenon under the study". The study was conducted in Kerman, Iran in 2014, on mothers of children with CHD. The purposive sampling technique was used to select the participants. Participants were 14 mothers of children with CHD and one father and one nurse of open heart surgery unit, from two hospitals affiliated with Kerman University of Medical Sciences. Eighteen semi-structured interviews were constructed. Data were analyzed using conventional content analysis. MAXQDA 2007 software (VERBI GmbH, Berlin, Germany) was used to classify and manage the coding. Constant comparative method was done for data analysis. The reliability and validity of the findings, including the credibility, confirm ability, dependability, and transferability, were assessed. Results: According to the content analysis, the main theme was the catastrophic burden of child care on mothers that included three categories: 1) the tension resulting from the disease, 2) involvement with internal thoughts, and 3) difficulties of care process. Conclusion: The results of this study may help health care professionals to provide supportive and educational packages to the patients, mothers and Family members until improving the management of patient's care. abstract_id: PUBMED:28427793 Contribution of health care factors to the burden of skin disease in the United States. The American Academy of Dermatology has developed an up-to-date national Burden of Skin Disease Report on the impact of skin disease on patients and on the US population. In this second of 3 manuscripts, data are presented on specific health care dimensions that contribute to the overall burden of skin disease. Through the use of data derived from medical claims in 2013 for 24 skin disease categories, these results indicate that skin disease health care is delivered most frequently to the aging US population, who are afflicted with more skin diseases than other age groups. Furthermore, the overall cost of skin disease is highest within the commercially insured population, and skin disease treatment primarily occurs in the outpatient setting. Dermatologists provided approximately 30% of office visit care and performed nearly 50% of cutaneous surgeries. These findings serve as a critical foundation for future discussions on the clinical importance of skin disease and the value of dermatologic care across the population. abstract_id: PUBMED:35283258 Effect of personalized musical intervention on burden of care in dental implant surgery: A pilot randomized controlled trial. Objective: To explore a personalized musical intervention's effect on burden of care during dental implants placement. Methods: Randomized Controlled Trial in 24 dental implant surgery patients. A personalized music intervention (Music Care© application) or an audiobook control condition was administered. 
Burden of care (a composite outcome including self-reported anxiety, pain, and dissatisfaction felt during surgery), expected pain prior to surgery, pre- and post-surgery affect, memory of pain felt during surgery, and participants' emotional judgments of the music and audiobook listening were assessed. Results: The personalized music intervention significantly reduced the burden of care for dental implant surgery (p = 0.02; d = 1.07). Both groups reported positive affect after surgery, but the music group felt better. The pain remembered after seven postoperative days was significantly lower in the music group (p = 0.02). Participants judged the music listened to during surgery as more relaxing and pleasant than the audiobook (p = 0.002 and p = 0.001, respectively). Conclusions: Personalized music intervention could be effective in decreasing patients' burden of care during dental implant surgery. These results need to be confirmed by a rigorous randomized control trial. Clinical Significance: The burden of care associated with the pain and anxiety experienced during dental implant surgery can be reduced using a personalized and standardized music intervention. This approach may provide a simple complementary approach to improve surgical care in various settings. abstract_id: PUBMED:24267547 The global burden of cancer. The global burden of cancer is increasing. By 2020, the global cancer burden is expected to rise by 50% owing to the increasingly elderly population. The delivery of cancer care is likely to increase the need for perioperative physicians for both operative procedures and pain management, offering new professional challenges. Specifically, these challenges will include volume and financial management, as well coordination of cancer treatment and pain management. Coordinated, team-based cancer care will be essential to ensure value-based care. Short and long-term outcome measurement is an integral part of the process. abstract_id: PUBMED:35726884 Health Care Burden Associated With Adolescent Prolonged Opioid Use After Surgery. Background: Prolonged opioid use after surgery (POUS), defined as the filling of at least 1 opioid prescription filled between 90 and 180 days after surgery, has been shown to increase health care costs and utilization in adult populations. However, its economic burden has not been studied in adolescent patients. We hypothesized that adolescents with POUS would have higher health care costs and utilization than non-POUS patients. Methods: Opioid-naive patients 12 to 21 years of age in the United States who received outpatient prescription opioids after surgery were identified from insurance claim data from the Optum Clinformatics Data Mart Database from January 1, 2003, to June 30, 2019. The primary outcomes were total health care costs and visits in the 730-day period after the surgical encounter in patients with POUS versus those without POUS. Multivariable regression analyses were used to determine adjusted health care cost and visit differences. Results: A total of 126,338 unique patients undergoing 132,107 procedures were included in the analysis, with 4867 patients meeting criteria for POUS for an incidence of 3.9%. Adjusted mean total health care costs in the 730 days after surgery were $4604 (95% confidence interval [CI], $4027-$5181) higher in patients with POUS than that in non-POUS patients. 
Patients with POUS had increases in mean adjusted inpatient length of stay (0.26 greater [95% CI, 0.22-0.30]), inpatient visits (0.07 greater [95% CI, 0.07-0.08]), emergency visits (0.96 greater [95% CI, 0.89-1.03]), and outpatient/other visits (5.78 greater [95% CI, 5.37-6.19]) in the 730 days after surgery ( P &lt; .001 for all comparisons). Conclusions: In adolescents, POUS was associated with increased total health care costs and utilization in the 730 days after their surgical encounter. Given the increased health care burden associated with POUS in adolescents, further investigation of preventative measures for high-risk individuals and additional study of the relationship between opioid prescription and outcomes may be warranted. abstract_id: PUBMED:28967510 Evaluating the direct economic burden of health care-associated infections among patients with colorectal cancer surgery in China. Background: Little is known about the direct economic burden associated with health care-associated infection (HAI) in patients undergoing colorectal cancer surgery in China. This study aims to fill this knowledge gap. Methods: This study was a prospective monitoring case-control study. The direct economic burden was presented as the median of the 1:1 pair differences of various hospitalization fees and hospital length of stay. Wilcoxon signed-rank tests were used to explore the differences in the direct economic burden. Results: Out of 448 patients, 38 had acquired HAIs, with the infection incidence being 8.93%. The total direct economic burden of HAIs was $1,589.30 (P &lt;.05). Among various infection sites, deep surgical site infection had the highest direct economic burden of $8,654.44, followed by multisite infections ($5,946.52). When it comes to various hospitalization costs, the cost for Western medicine ($846.13) constituted the highest proportion of economic burden followed by treatment cost ($145.73) and bed charge ($126.75). On average, the length of hospital stay in the infection group was 6 days longer than that in the control group (P &lt;.05). Conclusions: HAI was associated with considerable economic burden for patients who underwent colorectal cancer surgery in China. The study highlights the necessity of taking effective measures to decrease incidence of HAIs to reduce economic burden. abstract_id: PUBMED:32838652 The Unseen Global Burden of Disease. Due to geographic-specific patient and institutional-related barriers to care, data extrapolation and expert opinion on global burden of disease in otolaryngology-head and neck surgery may under- or overestimate the presence and effect of common head and neck conditions. The group of conditions that fail to present to local physicians and/or missed in data extrapolation methods is the unseen burden of disease. This article presents opinions from otolaryngology-head and neck surgery physicians in high- and low/middle-income countries to help explain the contributing factors and ultimately how to use this unseen burden of disease. abstract_id: PUBMED:34517776 Caregiver burden of families of children with corrosive esophageal injuries. This cross-sectional and descriptive study aimed to determine care burden of families of children with corrosive esophageal injuries. The study involved 112 parents who had children with corrosive esophageal injury. Caregiver's Stress Scale and Reactions to Helping Family Members Scale were used to assess care burden of families. 
Caregiver's Stress Scale median score was 7.0 [interquartile range 5.0], and Reactions to Helping Family Members Scale median score was 45.0 [interquartile range 14.0]. The burden of family caregivers was thus found to be high. Factors affecting the care burden were found to be child's age, parents' age, child's weight, substance type and pH, surgery status, and number of surgeries. Nurses should consider these factors when advising and educating these families. Additionally, initiatives including support groups should be planned with regard to the needs of these families with a high care burden. Answer: The abstracts provided do not directly address the question of whether day surgery transfers the burden of care. However, they do offer insights into various aspects of the burden of care in different medical contexts, which can be used to infer potential implications for day surgery. For instance, the study on the burden of care for patients with in-transit melanoma (PUBMED:36442254) highlights that treatment often requires multiple therapies and is associated with a high burden of care, which may include numerous treatments and clinic visits. This suggests that even if a surgery is performed on an outpatient basis, the overall care process can still be demanding for patients. The research on caregivers' care burden and psychological resilience in Turkey (PUBMED:38477303) shows that caregivers with a higher burden and lower resilience can negatively impact the psychosocial adjustment of patients. This could imply that day surgeries, which may rely more on caregivers for postoperative care, could increase the burden on caregivers, potentially affecting patient outcomes. The study on mothers caring for children with congenital heart disease (PUBMED:27713900) reveals the extensive burden of care on mothers, including emotional and care process difficulties. This could suggest that day surgeries, which may require significant postoperative care at home, could add to the existing burden on primary caregivers. The report on the burden of skin disease (PUBMED:28427793) and the study on the economic burden of health care-associated infections in colorectal cancer surgery in China (PUBMED:28967510) do not directly relate to day surgery but highlight the broader context of health care burdens, including costs and resource utilization. The study on the effect of personalized musical intervention on the burden of care in dental implant surgery (PUBMED:35283258) suggests that interventions can reduce the burden of care during specific procedures, which might be applicable to day surgeries to improve patient experiences. Lastly, the study on the health care burden associated with adolescent prolonged opioid use after surgery (PUBMED:35726884) indicates that certain postoperative outcomes, such as prolonged opioid use, can lead to increased health care costs and utilization. This could imply that day surgeries, if not managed properly, might contribute to such burdens. In summary, while the abstracts do not directly answer the question, they collectively suggest that day surgery could potentially transfer the burden of care to patients and their caregivers, particularly in terms of postoperative care and support.
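As an aside on methods: PUBMED:36442254 compares burden-of-care measures (number of treatments, clinic visits, inpatient days, travel distance) across four first-line options using Kruskal-Wallis and chi-squared tests. The sketch below shows what such a comparison looks like in code; the visit counts are invented placeholder values, not data from the study, and only the choice of test mirrors the abstract.

from scipy.stats import kruskal

# Hypothetical clinic-visit counts per patient for four first-line options
# (placeholder numbers only; the study's patient-level data are not public).
visits = {
    "surgery": [3, 2, 4, 3, 5, 2],
    "surgery + adjuvant systemic therapy": [12, 10, 14, 11, 13],
    "systemic therapy": [9, 7, 11, 8, 10],
    "intralesional therapy": [6, 5, 7, 6],
}

h_stat, p_value = kruskal(*visits.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")
# A small p-value would indicate that at least one treatment group differs in its
# distribution of clinic visits, analogous to the treatment-burden comparison above.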
Instruction: Travel Burden to Breast MRI and Utilization: Are Risk and Sociodemographics Related? Abstracts: abstract_id: PUBMED:27026577 Travel Burden to Breast MRI and Utilization: Are Risk and Sociodemographics Related? Purpose: Mammography, unlike MRI, is relatively geographically accessible. Additional travel time is often required to access breast MRI. However, the amount of additional travel time and whether it varies on the basis of sociodemographic or breast cancer risk factors is unknown. Methods: The investigators examined screening mammography and MRI between 2005 and 2012 in the Breast Cancer Surveillance Consortium by (1) travel time to the closest and actual mammography facility used and the difference between the two, (2) women's breast cancer risk factors, and (3) sociodemographic characteristics. Logistic regression was used to examine the odds of traveling farther than the closest facility in relation to women's characteristics. Results: Among 821,683 screening mammographic examinations, 76.6% occurred at the closest facility, compared with 51.9% of screening MRI studies (n = 3,687). The median differential travel time among women not using the closest facility for mammography was 14 min (interquartile range, 8-25 min) versus 20 min (interquartile range, 11-40 min) for breast MRI. Differential travel time for both imaging modalities did not vary notably by breast cancer risk factors but was significantly longer for nonurban residents. For non-Hispanic black compared with non-Hispanic white women, the adjusted odds of traveling farther than the closest facility were 9% lower for mammography (odds ratio, 0.91; 95% confidence interval, 0.87-0.95) but more than two times higher for MRI (odds ratio, 2.64; 95% confidence interval, 1.36-5.13). Conclusions: Breast cancer risk factors were not related to excess travel time for screening MRI, but sociodemographic factors were, suggesting the possibility that geographic distribution of advanced imaging may exacerbate disparities for some vulnerable populations. abstract_id: PUBMED:37340257 Travel Burden as a Measure of Healthcare Access and the Impact of Telehealth within the Veterans Health Administration. Background: Travel is a major barrier to healthcare access for Veteran Affairs (VA) patients, and disproportionately affects rural Veterans (approximately one quarter of Veterans). The CHOICE/MISSION acts' intent is to increase timeliness of care and decrease travel, although this has not been clearly demonstrated. The impact on outcomes remains unclear. Increased community care increases VA costs and increases care fragmentation. Retaining Veterans within the VA is a high priority, and reduction of travel burdens will help achieve this goal. Sleep medicine is presented as a use case to quantify travel-related barriers. Objective: The Observed and Excess Travel Distances are proposed as two measures of healthcare access, allowing for quantification of healthcare delivery related to travel burden. A telehealth initiative that reduced travel burden is presented. Design: Retrospective, observational, utilizing administrative data. Subjects: VA patients with sleep-related care between 2017 and 2021. In-person encounters: Office visits and polysomnograms; telehealth encounters: virtual visits and home sleep apnea tests (HSAT). Main Measures: Observed distance: distance between Veteran's home and treating VA facility. Excess distance: difference between where Veteran received care and nearest VA facility offering the service of interest.
Avoided distance: distance between Veteran's home and nearest VA facility offering in-person equivalent of telehealth service. Key Results: In-person encounters peaked between 2018 and 2019, and have trended downward since, while telehealth encounters have increased. During the 5-year period, Veterans traveled an excess of 14.1 million miles, while 10.9 million miles of travel were avoided due to telehealth encounters, and 48.4 million miles were avoided due to HSAT devices. Conclusions: Veterans often experience a substantial travel burden when seeking medical care. Observed and excess travel distances are valuable measures to quantify this major healthcare access barrier. These measures allow for assessment of novel healthcare approaches to improve Veteran healthcare access and identify specific regions that may benefit from additional resources. abstract_id: PUBMED:35589860 Impact of educational level and travel burden on breast cancer stage at diagnosis in the state of Sao Paulo, Brazil. We describe the characteristics of cases of breast cancer among women assisted at hospitals affiliated to the public health system in the state of São Paulo (Brazil), analysing the effects of level of education and travel burden to point of treatment. We conducted a retrospective analysis of invasive breast cancer among women diagnosed between 2000 and 2015. Data were extracted from the hospital-based cancer registries of Fundação Oncocentro de São Paulo-FOSP. The outcome was clinical stage at diagnosis (stage III-IV versus I-II). The explanatory variables were educational level and travel burden. Odds ratios (OR) and 95% confidence intervals (95% CI) were estimated. Multiple imputations were used for missing educational level (31%). The study included 81,669 women with invasive breast cancer diagnosed between 2000 and 2015. The mean age of patients at diagnosis was 56.8 years (standard deviation 13.6 years). 38% of patients were at an advanced stage at diagnosis (stage III-IV). Women with lower levels of education and those who received cancer care in municipalities other than where they lived were more likely to be diagnosed at an advanced stage. In conclusion, promotion of breast cancer awareness and improving pathways to expedite breast cancer diagnosis and treatment could help identify breast tumors at earlier stages. abstract_id: PUBMED:35288785 Impact of travel burden on clinical outcomes in lung cancer. Purpose: Our study explores the influence of travel burden (measured as travel distance and travel time) on clinical outcomes in lung cancer patients. Methods: A retrospective analysis of a single Bulgarian center was performed. A total of 9240 lung cancer patients were included in the study. Travel distance and travel time between patients' city of residence and the treating facility were calculated with an online tool to determine the shortest route for travel using the existing road network. The probability of survival was estimated using the Kaplan-Meier method, and differences in survival in each subgroup were evaluated with a log-rank test. Results: About one third of all included patients were living in the same city as the treating facility (n = 2746, 29.7%). Overall survival in our patient population was significantly lower with increasing travel distance (p < 0.001, Mantel-Cox log rank) and travel time (p < 0.001, Mantel-Cox log rank). The 1-year OS rate according to travel distance was 27.1% in the same city group, 22.4% in the < 50-km group, and 20.5% in the ≥ 50-km group (p < 0.001).
The corresponding values for the 5-year OS rate were 2.9%, 2.6%, and 1.4% (p &lt; 0.001). Conclusion: In this retrospective study, we discovered significant differences in the overall survival of patients with lung cancer depending on travel distance and travel time to the treating oncological facility. Despite having similar clinical and pathological characteristics (age, sex, stage at initial diagnosis, histologic subtype), the median overall survival was significantly lower in those subgroups of patients with a higher travel burden. abstract_id: PUBMED:37692832 Travel burden for patients with multimorbidity - Proof of concept study in a Dutch tertiary care center. Objectives: To explore travel burden in patients with multimorbidity and analyze patients with high travel burden, to stimulate actions towards adequate access and (remote) care coordination for these patients. Design: A retrospective, cross-sectional, explorative proof of concept study. Setting And Participants: Electronic health record data of all patients who visited our academic hospital in 2017 were used. Patients with a valid 4-digit postal code, aged ≥18 years, had &gt;1 chronic or oncological condition and had &gt;1 outpatient visits with &gt;1 specialties were included. Methods: Travel burden (hours/year) was calculated as: travel time in hours × number of outpatient visit days per patient in one year × 2. Baseline variables were analyzed using univariate statistics. Patients were stratified into two groups by the median travel burden. The contribution of travel time (dichotomized) and the number of outpatient clinic visits days (dichotomized) to the travel burden was examined with binary logistic regression by adding these variables consecutively to a crude model with age, sex and number of diagnosis. National maps exploring the geographic variation of multimorbidity and travel burden were built. Furthermore, maps showing the distribution of socioeconomic status (SES) and proportion of older age (≥65 years) of the general population were built. Results: A total of 14 476 patients were included (54.4% female, mean age 57.3 years ([± standard deviation] = ± 16.6 years). Patients travelled an average of 0.42 (± 0.33) hours to the hospital per (one-way) visit with a median travel burden of 3.19 hours/year (interquartile range (IQR) 1.68 - 6.20). Care consumption variables, such as higher number of diagnosis and treating specialties in the outpatient clinic were more frequent in patients with higher travel burden. High travel time showed a higher Odds Ratio (OR = 578 (95% Confidence Interval (CI) = 353 - 947), p &lt; 0.01) than having high number of outpatient clinic visit days (OR = 237, 95% CI = 144 - 338), p &lt; 0.01) to having a high travel burden in the final regression model. Conclusions And Implications: The geographic representation of patients with multimorbidity and their travel burden varied but coincided locally with lower SES and older age in the general population. Future studies should aim on identifying patients with high travel burden and low SES, creating opportunity for adequate (remote) care coordination. abstract_id: PUBMED:30060899 Centralisation of cancer surgery and the impact on patients' travel burden. Recent years have seen increasing trends towards centralisation of complex medical procedures, including cancer surgery. The impact of these trends on patients' travel burden is often ignored. 
This study charts the effects of different scenarios of centralising surgery on the travel burden for patients with cancer of the digestive tract, particularly among vulnerable patient groups. Our analyses include all surgically treated Dutch patients with colorectal, stomach or oesophageal cancer diagnosed in 2012-2013. After determining each patient's actual travel burden, simulations explored the impact of continued centralisation of cancer surgery under four hypothetical scenarios. Compared to patients' actual travelling, simulated travel distances under relatively 'conservative' scenarios did not necessarily increase, most likely due to current hospital bypassing. Using multivariable regression analyses, as a first exercise, it is examined whether the potential effects on travel burden differ across patient groups. For some cancer types, under more extreme scenarios, increases in travel distances are significantly higher for older patients and those with a low SES. Given the potential impact on vulnerable patients' travel burden, our analysis suggests a thorough consideration of non-clinical effects of centralisation in health policy. abstract_id: PUBMED:33309922 Improved discrimination of molecular subtypes in invasive breast cancer: Comparison of multiple quantitative parameters from breast MRI. Purpose: To compare multiple quantitative parameters from breast magnetic resonance imaging (MRI) with the synthetic MRI sequence included for discrimination of molecular subtypes of invasive breast cancer. Materials And Methods: Between March 2019 and September 2020, two hundred breast cancer patients underwent preoperative breast multiparametric MRI examinations including synthetic MRI, diffusion weighted imaging (DWI) and dynamic contrast enhancement (DCE)-MRI sequences. MRI morphological features, T1 and T2 relaxation times (T1, T2) and proton density (PD) values from synthetic MRI, Ktrans, Kep, and Ve from DCE-MRI, mean apparent diffusion coefficient (ADC) from DWI and tumor volume were measured. Quantitative parameters were compared according to molecular markers and subtypes. Logistic regression was performed to find the related MRI parameters and establish combined parameters. Comparisons between single and combined quantitative parameters were made using DeLong tests. Results: T1, T2 values were significantly higher in hormone receptor (HR)-negative and Ki67 > 14% tumors (p < 0.05). Human epidermal growth factor receptor 2 (HER2)-positive tumors demonstrated significantly higher Ktrans and Kep (p < 0.01). Mean ADC values were significantly decreased in HR-positive and Ki67 > 14% tumors (p < 0.01). Tumor volumes were significantly higher in HER2-positive and Ki67 > 14% tumors (p < 0.05). Independent influencing factors were lower T2 values (p < 0.001), smaller tumor volume (p = 0.031) and higher mean ADC (p = 0.002) associated with luminal A subtype, while the T1 value (p = 0.007) was the only quantitative parameter associated with the triple-negative subtype. The diagnostic efficiency of combined parameters (T2 + mean ADC + volume) (AUC = 0.765) was significantly higher than that of mean ADC (AUC = 0.666, p = 0.031 by DeLong test) and volume (AUC = 0.650, p = 0.008 by DeLong test) for separating luminal A subtype. Conclusions: MRI quantitative parameters could help distinguish molecular markers and subtypes.
The emerging synthetic MRI parameters - T1 values were associated with the TN subtype, and combined parameters with added T2 values might improve the discrimination of the luminal A subtype. Application of synthetic MRI can enrich quantitative descriptors from breast MRI. abstract_id: PUBMED:37562985 Impact of travel burden on the treatment of stage I and II breast cancer: A National Cancer Database analysis. Background: Although historic studies of state registries have demonstrated decreased radiation therapy use for patients with breast cancer living further away from radiation facilities, the association between travel distance and breast cancer treatment in a modern national cohort remains unknown. Methods: Female patients with estrogen receptor/progesterone receptor positive and human epidermal growth factor receptor 2 negative pathologic stages I to II breast cancer were identified from the National Cancer Database (2018-2020) and dichotomized by distance ≤20 miles or >20 miles (75th percentile) from the treatment facility. The association between travel distance and type of surgery and treatment administered was analyzed by univariate and multivariate logistic regression and after 1:1 propensity matching. Results: Of the 293,318 patients identified for inclusion, the median age was 63 years, and most patients (n = 190,567, 65%) lived ≤20 miles from the treatment facility. Patients with a travel burden >20 miles were more likely to receive a mastectomy (≤20 miles 30.4% vs >20 miles 34.0%, P < .001; odds ratio 1.14, P = .016), and less likely to receive radiation (≤20 miles 63.3% vs >20 miles 60.1%, P < .001; odds ratio 0.81, P < .001). These findings persisted after propensity score matching (n = 33,544 per cohort), with patients living further being more likely to undergo a mastectomy (≤20 miles 30.3% vs >20 miles 35.3%, P < .001) and less likely to receive radiation (≤20 miles 65.4% vs >20 miles 58.5%, P < .001). Conclusion: Patients with hormone receptor-positive stage I to II breast cancer with a larger travel burden are more likely to receive a mastectomy and less likely to undergo radiation therapy to treat their disease. abstract_id: PUBMED:33786524 Travel, Treatment Choice, and Survival Among Breast Cancer Patients: A Population-Based Analysis. Background: Travel distance to care facilities may shape urban-rural cancer survival disparities by creating barriers to specific treatments. Guideline-supported treatment options for women with early stage breast cancer involve considerations of breast conservation and travel burden: Mastectomy requires travel for surgery, whereas breast-conserving surgery (BCS) with adjuvant radiation therapy (RT) requires travel for both surgery and RT. This provides a unique opportunity to evaluate the impact of travel distance on surgical decisions and receipt of guideline-concordant treatment. Materials and Methods: We included 61,169 women diagnosed with early stage breast cancer between 2004 and 2013 from the Surveillance Epidemiology and End Results (SEER)-Medicare database. Driving distances to the nearest radiation facility were calculated by using Google Maps. We used multivariable regression to model treatment choice as a function of distance to radiation and Cox regression to model survival. Results: Women living farthest from radiation facilities (>50 miles vs. <10 miles) were more likely to undergo mastectomy versus BCS (odds ratio [OR]: 1.48, 95% confidence interval [CI]: 1.22-1.79).
Among only those who underwent BCS, women living farther from radiation facilities were less likely to receive guideline-concordant RT (OR: 1.72, 95% CI: 1.32-2.23). These guideline-discordant women had worse overall (hazards ratio [HR]: 1.50, 95% CI: 1.42-1.57) and breast-cancer specific survival (HR: 1.44, 95% CI: 1.29-1.60). Conclusions: We report two breast cancer treatments with different clinical and travel implications to show the association between travel distance, treatment decisions, and receipt of guideline-concordant treatment. Differential access to guideline-concordant treatment resulting from excess travel burden among rural patients may contribute to rural-urban survival disparities among cancer patients. abstract_id: PUBMED:28691102 DCE-MRI Texture Features for Early Prediction of Breast Cancer Therapy Response. This study investigates the effectiveness of hundreds of texture features extracted from voxel-based dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) parametric maps for early prediction of breast cancer response to neoadjuvant chemotherapy (NAC). In total, 38 patients with breast cancer underwent DCE-MRI before (baseline) and after the first of the 6-8 NAC cycles. Quantitative pharmacokinetic (PK) parameters and semiquantitative metrics were estimated from DCE-MRI time-course data. The residual cancer burden (RCB) index value was computed based on pathological analysis of surgical specimens after NAC completion. In total, 1043 texture features were extracted from each of the 13 parametric maps of quantitative PK or semiquantitative metric, and their capabilities for early prediction of RCB were examined by correlating feature changes between the 2 MRI studies with RCB. There were 1069 pairs of feature-map combinations that showed effectiveness for response prediction with 4 correlation coefficients &gt;0.7. The 3-dimensional gray-level cooccurrence matrix was the most effective feature extraction method for therapy response prediction, and, in general, the statistical features describing texture heterogeneity were the most effective features. Quantitative PK parameters, particularly those estimated with the shutter-speed model, were more likely to generate effective features for prediction response compared with the semiquantitative metrics. The best feature-map pair could predict pathologic complete response with 100% sensitivity and 100% specificity using our cohort. In conclusion, breast tumor heterogeneity in microvasculature as measured by texture features of voxel-based DCE-MRI parametric maps could be a useful biomarker for early prediction of NAC response. Answer: The study identified in abstract PUBMED:27026577 investigated the relationship between travel burden to breast MRI and its utilization, focusing on whether this burden varies based on sociodemographic or breast cancer risk factors. The study found that while breast cancer risk factors were not significantly related to excess travel time for screening MRI, sociodemographic factors were indeed related. Specifically, nonurban residents experienced significantly longer differential travel times for both mammography and breast MRI. Moreover, non-Hispanic black women had a lower likelihood of traveling farther than the closest facility for mammography but were more than twice as likely to do so for MRI compared to non-Hispanic white women. 
These findings suggest that the geographic distribution of advanced imaging facilities may exacerbate disparities for some vulnerable populations, indicating that sociodemographics are related to the travel burden for breast MRI utilization.
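Two of the abstracts above define travel-based access measures explicitly: PUBMED:37340257 defines observed, excess and avoided travel distance, and PUBMED:37692832 computes travel burden as one-way travel time in hours x outpatient visit days per year x 2. The sketch below restates those definitions as code; the function and variable names are invented for this sketch, the excess-distance formula is one reasonable reading of the abstract's wording, and the example values are purely illustrative.

def excess_distance(home_to_treating_facility: float,
                    home_to_nearest_offering_facility: float) -> float:
    # Extra distance travelled beyond the nearest facility offering the
    # needed service (interpretation of the definition in PUBMED:37340257).
    return home_to_treating_facility - home_to_nearest_offering_facility

def travel_burden_hours_per_year(one_way_travel_time_hours: float,
                                 outpatient_visit_days_per_year: int) -> float:
    # Travel burden as defined in PUBMED:37692832: travel time x visit days x 2 (round trips).
    return one_way_travel_time_hours * outpatient_visit_days_per_year * 2

# Illustrative example: a 0.42-hour one-way trip (the study's mean) and 4 visit days
# per year give 0.42 * 4 * 2 = 3.36 hours/year, close to the reported median of 3.19.
print(excess_distance(38.0, 12.0))              # 26.0 miles of excess travel
print(travel_burden_hours_per_year(0.42, 4))    # 3.36 hours/year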
Instruction: Efficacy of secondary isoniazid preventive therapy among HIV-infected Southern Africans: time to change policy? Abstracts: abstract_id: PUBMED:14502009 Efficacy of secondary isoniazid preventive therapy among HIV-infected Southern Africans: time to change policy? Objective: To determine the efficacy of secondary preventive therapy against tuberculosis (TB) among gold miners working in South Africa. Design: An observational study. Setting: Health service providing comprehensive care for gold miners. Methods: The incidence of recurrent TB was compared between two cohorts of HIV-infected miners: one cohort (n = 338) had received secondary preventive therapy with isoniazid (IPT) and the other had not (n = 221). Results: The overall incidence of recurrent TB was reduced by 55% among men who received IPT compared with those who did not (incidence rates 8.6 and 19.1 per 100 person-years, respectively; incidence rate ratio, 0.45; 95% confidence interval 0.26-0.78). The efficacy of isoniazid preventive therapy was unchanged after controlling for CD4 cell count and age. The number of person-years of IPT required to prevent one case of recurrent TB among individuals with a CD4 cell count &lt; 200 x 106 cells/l, and &gt; or = 200 x 106 cells/l was 5 and 19, respectively. Conclusion: Secondary preventive therapy reduces TB recurrence: the absolute impact appears to be greatest among individuals with low CD4 cell counts. International TB preventive therapy guidelines for HIV-infected individuals need to be expanded to include recommendations for secondary preventive therapy in settings where TB prevalence is high. abstract_id: PUBMED:36718019 Initiation and adherence to isoniazid preventive therapy in children under 5 years of age in Manhiça, Southern Mozambique. The WHO recommends preventive treatment for all pediatric contacts of a confirmed TB case, but coverage remains low in many high TB burden countries. We aimed to assess the coverage and adherence of the isoniazid preventive therapy (IPT) program among children under 5 years of age with household exposure to an adult pulmonary TB case in a rural district of Southern Mozambique. The estimated IPT coverage was 11.7%. A longer distance to the health center and lower age of the children hindered IPT initiation. Among patients who started IPT, 12/18 (69.9%) were adherent to the 6-month treatment. abstract_id: PUBMED:33107219 Cost-effectiveness of a 12 country-intervention to scale up short course TB preventive therapy among people living with HIV. Introduction: In 2017, the Aurum Institute, with support from Unitaid, launched an initiative to expand short-course therapy for the prevention of tuberculosis (TB) in 12 high-burden countries. This study aimed to investigate the importance of "catalytic" effects beyond the original project timeframe when estimating cost-effectiveness of such large investments. Methods: We estimated the cost-effectiveness of the IMPAACT4TB (I4TB) initiative from a health system perspective, using a 10-year time horizon. We first conservatively estimated costs using a "top-down" approach considering only the direct health benefits of providing TB preventive therapy to people initiating antiretroviral therapy (ART) through I4TB activities. We then re-estimated the incremental cost-effectiveness of I4TB incorporating the costs and health benefits of potential catalytic effects beyond the program itself. 
Results: We estimated that TB preventive therapy through the I4TB initiative alone would prevent 14 201 cases of active TB and 1562 TB deaths over 10 years with an up-front investment of $52.5 million; the estimated incremental cost-effectiveness was $1580 per disability-adjusted life year (DALY) averted. If this initiative could achieve its desired catalytic effects, an additional 375 648 cases and 41 321 deaths could be averted, at an incremental cost of $546 million and cost-effectiveness of $713 per DALY averted. Conclusions: Our findings provide donors with reasonable evidence of value for money to support investment in short-course TB preventive therapy for people initiating ART in high-burden settings. Our study also illustrates the importance of considering long-term secondary ("catalytic") effects when evaluating the cost-effectiveness of large-scale initiatives designed to change a global policy landscape. abstract_id: PUBMED:34527139 Low prevalence of isoniazid preventive therapy uptake among HIV-infected patients attending tertiary health facility in Lagos, Southwest Nigeria. Introduction: the burden of HIV and tuberculosis co-infection is a global public health challenge. Despite the benefit of isoniazid preventive therapy (IPT) in reducing the rate of co-infection, the uptake is generally limited in developing countries. This study aimed to determine the prevalence of IPT use and the factors affecting the uptake among HIV-infected patients attending our Teaching Hospital. Methods: this cross-sectional survey involved 300 HIV-infected individuals attending the AIDS prevention initiatives in Nigeria clinic of the Lagos University Teaching Hospital. A self-designed and well-structured questionnaire was used to document the demographic data, patients' exposure to tuberculosis, and IPT uptake. Clinical data of eligible patients were also extracted from their case notes. The main outcome measure was the prevalence of IPT use and non-use. Results: out of the respondents evaluated, (72.7%, n = 218) were females. Tuberculosis was the predominant comorbidity (15.7%, n = 47) and majority (53.0%, n = 159) had a CD4 count of &lt; 500 cells/ml. Overall prevalence of IPT uptake was very low (7.1%, n = 18) among HIV-infected patients. Major factors affecting uptake were lack of awareness of benefit (44.4%, n = 8) and lack of fear of contracting tuberculosis (22.2%, n = 4). However, lack of awareness of IPT benefit was the only independent factor associated with poor IPT uptake (adjusted odds 1168.75, 95% confidence interval: 85.05-16060.33; p = 0.001). Conclusion: isoniazid preventive therapy uptake was found to be very low in this study. Increased awareness and policy implementation of IPT by the healthcare provider is necessary. abstract_id: PUBMED:27911140 Effect of secondary preventive therapy on recurrence of tuberculosis in HIV-infected individuals: a systematic review. Human immunodeficiency virus (HIV)-infected individuals successfully treated for tuberculosis (TB) remain at risk of recurrence of the disease, especially in high TB incidence settings. We performed a systematic review, investigating whether secondary preventive therapy (sPT) with anti-TB drugs (preventive therapy in former TB patients with treatment success) is an effective strategy to prevent recurrence of TB in this patient group. 
We searched the databases PubMed, Cochrane Library, EMBASE, Web of Science and Google Scholar using the keywords HIV-infections, HIV, human immunodeficiency virus, AIDS, isoniazid, isoniazid preventive therapy (IPT), tuberculosis, TB, recurrence and recurrent disease, resulting in 253 potential publications. We identified eight publications for full text assessment, after which four articles qualified for inclusion in this systematic review. The quality of the included articles was rated using the GRADE system. All but one study were rated as having a high quality. In all included studies, sPT significantly decreased the incidence of recurrent TB in HIV-infected individuals to a substantial degree in comparison to non-treatment or placebo. Relative reductions varied from 55.0% to 82.1%. These data showed that the use of sPT to prevent recurrent TB in HIV-infected individuals was highly beneficial. These findings need to be confirmed in prospective studies with an adequate assessment of the effect of antiretroviral therapy (ART) and the occurrence of drug resistance. abstract_id: PUBMED:36434517 Assessment of contextual factors shaping delivery and uptake of isoniazid preventive therapy among people living with HIV in Dar es salaam, Tanzania. Background: Tuberculosis has remained a leading cause of death among people living with HIV (PLHIV) globally. Isoniazid preventive therapy (IPT) is the recommended strategy by the World Health Organization to prevent TB disease and related deaths among PLHIV. However, delivery and uptake of IPT has remained suboptimal particularly in countries where HIV and TB are endemic such as Tanzania. This study sought to assess contextual factors that shape delivery and uptake of IPT in Dar es Salaam region, Tanzania. Methodology: We employed a qualitative case study design comprising of in-depth interviews with people living with HIV (n = 17), as well as key informant interviews with clinicians (n = 7) and health administrators (n = 7). We used thematic data analysis approach and reporting of the results was guided by the Consolidated Framework for Implementation Research (CFIR). Results: Characteristics of IPT such as aligning the therapy to individual patient schedules and its relatively low cost facilitated its delivery and uptake. On the contrary, perceived adverse side effects negatively affected the delivery and uptake of IPT. Characteristics of individuals delivering the therapy including their knowledge, good attitudes, and commitment to meeting set targets facilitated the delivery and uptake of IPT. The process of IPT delivery comprised collective planning and collaboration among various facilities which facilitated its delivery and uptake. Organisational characteristics including communication among units and supportive leadership facilitated the delivery and uptake of IPT. External system factors including HIV stigma, negative cultural and religious values, limited funding as well as shortage of skilled healthcare workers presented as barriers to the delivery and uptake of IPT. Conclusion: The factors influencing the delivery and uptake of IPT among people living with HIV are multifaceted and exist at different levels of the health system. Therefore, it is imperative that IPT program implementers and policy makers adopt multilevel approaches that address the identified barriers and leverage the facilitators in delivery and uptake of IPT at both community and health system levels. 
abstract_id: PUBMED:33978529 Isoniazid preventive therapy use among adult people living with HIV in Zimbabwe. We assessed the prevalence of isoniazid preventive therapy (IPT) uptake and explored factors associated with IPT non-uptake among people living with HIV (PLHIV) using nationally representative data from the Zimbabwe Population-based HIV Impact Assessment (ZIMPHIA) 2015-2016. This was a cross-sectional study of 3418 PLHIV ZIMPHIA participants eligible for IPT, aged ≥15 years and in HIV care. Logistic regression modeling was performed to assess factors associated with self-reported IPT uptake. All analyses accounted for the multistage survey design. IPT uptake among PLHIV was 12.7% (95% confidence interval (CI): 11.4-14.1). After adjusting for sex, age, rural/urban residence, TB screening at the last clinic visit, and hazardous alcohol use, rural residence was the strongest factor associated with IPT non-uptake (adjusted OR (aOR): 2.39, 95% CI: 1.82-3.12). The significant associations of IPT non-uptake with no TB screening at the last HIV care visit (aOR: 2.07, 95% CI: 1.54-2.78) and with hazardous alcohol use only in urban areas (aOR: 10.74, 95% CI: 3.60-32.0) might suggest suboptimal IPT eligibility screening regardless of residence, but more so in rural areas. Self-reported IPT use among PLHIV in Zimbabwe was low, 2 years after beginning national scale-up. This shows the importance of good TB screening procedures for successful IPT implementation. abstract_id: PUBMED:33332462 Predictors of suboptimal adherence to isoniazid preventive therapy among adolescents and children living with HIV. This study identified factors associated with adherence to a 6-month isoniazid preventive therapy (IPT) course among adolescents and children living with HIV. Forty adolescents living with HIV and 48 primary caregivers of children living with HIV completed a Likert-based survey to measure respondent opinions regarding access to care, quality of care, preferred regimens, perceived stigma, and confidence in self-efficacy. Sociodemographic data were collected and adherence measured as the average of pill counts obtained while on IPT. The rates of suboptimal adherence (< 95% adherent) were 22.5% among adolescents and 37.5% among the children of primary caregivers. Univariate logistic regression was used to model the change in the odds of suboptimal adherence. Independent factors associated with suboptimal adherence among adolescents included age, education level, the cost of coming to the clinic, stigma from community members, and two variables relating to self-efficacy. Among primary caregivers, child age, concerns about stigma, and location preference for meeting a community-health worker were associated with suboptimal adherence. To determine whether these combined factors contributed different information to the prediction of suboptimal adherence, a risk score containing these predictors was constructed for each group. The risk score had an AUC of 0.87 (95% CI: 0.76, 0.99) among adolescents and an AUC of 0.76 (95% CI: 0.62, 0.90) among primary caregivers, suggesting that these variables may have complementary predictive utility. The heterogeneous scope and associations of these variables in different populations suggest that interventions aiming to increase optimal adherence will need to be tailored to specific populations and multifaceted in nature.
Ideally, interventions should address both long-established barriers to adherence such as the cost of transportation to attend clinic and more nuanced psychosocial barriers such as perceived community stigma and confidence in self-efficacy. abstract_id: PUBMED:33014257 Knowledge and adherence to isoniazid preventive therapy among people living with HIV in multilevel health facilities in South-East, Nigeria: baseline findings from a quasi-experimental study. Introduction: isoniazid preventive therapy is a crucial component of the TB/HIV collaborative program, and patients' good knowledge of and adherence to this preventive treatment are essential in improving implementation. The aim of this study was to determine the knowledge and adherence to isoniazid preventive therapy among patients receiving HIV care. Methods: this is a baseline result of a quasi-experimental study which was carried out among 200 patients receiving HIV care in six high patient load health facilities providing comprehensive HIV care in Ebonyi State. This included a tertiary health facility and five secondary level health facilities. We used a structured interviewer-administered questionnaire to collect information from the participants. Adherence was assessed by self-reports. Descriptive, bivariate and multivariate logistic regression analyses were conducted using SPSS version 20 at the 5% level of significance. Results: the majority (65%) of the respondents were between 30 and 49 years and most (73.5%) were females. The majority (85%) had been on antiretroviral therapy (ART) for more than one year. More than half of the respondents had ever received and had been counselled on IPT (55% and 62%, respectively) while only 17.5% were on IPT during the study. More than half (60.5%) of the respondents had a low level of knowledge. Marital status was the only predictor of knowledge. Unmarried respondents were 2 times more likely to have knowledge of IPT compared with the married (AOR = 2.11, CI = 1.10-4.06). Among the 35 patients who were on IPT, 32 (91%) reported good adherence in the 30 days preceding the survey. Conclusion: there was poor knowledge of IPT among the respondents; however, self-reported adherence was high. We recommend intensification of general and personalized education of PLHIV on IPT by health workers. abstract_id: PUBMED:31077133 The protective effect of isoniazid preventive therapy on tuberculosis incidence among HIV positive patients receiving ART in Ethiopian settings: a meta-analysis. Background: Tuberculosis (TB) and HIV make up a deadly synergy of infectious disease, and the combined effect is apparent in resource-limited countries like Ethiopia. Previous studies have demonstrated inconsistent results about the protective effect of isoniazid preventive therapy (IPT) on active TB incidence among HIV positive patients receiving ART. Therefore, the aim of this meta-analysis was, first, to determine the protective effect of IPT on active tuberculosis incidence, and second, to assess the pooled incidence of active TB among HIV positive patients taking ART with and without IPT intervention in Ethiopia. Methods: PubMed, Google Scholar and Cochrane Library databases were searched from April 1 to 30, 2018. Two independent authors explored and assessed studies for eligibility, and extracted data based on predefined criteria.
Studies that reported TB incidence among HIV positive patients taking ART in Ethiopia with and without concomitant IPT intervention, and with clearly stratified data on the incidence of TB based on the duration of IPT intervention, were selected. A random effects model was used to estimate risk ratios and the pooled incident TB with the respective 95% confidence intervals. Results: We identified 7 suitable studies in this analysis. Accordingly, IPT reduced the risk of TB incidence by 74%, risk ratio (RR) 0.26 (95% CI: 0.16-0.43), compared to the no-IPT group. Moreover, IPT for 12 months reduced incident TB by 91% (RR: 0.09, 95% CI: 0.04 to 0.21), whereas 6 months of IPT averted TB incidence by 63% (RR: 0.37, 95% CI: 0.26 to 0.52). The overall pooled incident TB among HIV-infected patients receiving ART was 10.30% (95% CI: 7.57-13.02%). Specifically, incident TB among study cohorts with and without IPT was 3.79% (95% CI: 2.03-5.55%) and 16.32% (95% CI: 11.57-21.06%), respectively. Conclusion: IPT reduced the risk of incident TB among HIV positive patients receiving ART in Ethiopian settings. Moreover, the duration of IPT intervention has an effect on its protective role. Thus, scaling up the isoniazid preventive therapy program and its strict compliance is necessary to avert HIV-fueled tuberculosis. Study Protocol Registration: CRD42018090804. Answer: The efficacy of secondary isoniazid preventive therapy (IPT) among HIV-infected Southern Africans has been demonstrated to be significant, suggesting that it may be time to change policy. An observational study among gold miners working in South Africa showed that the overall incidence of recurrent tuberculosis (TB) was reduced by 55% among men who received IPT compared with those who did not. The efficacy of IPT was unchanged after controlling for CD4 cell count and age, indicating that secondary preventive therapy reduces TB recurrence, with the absolute impact appearing to be greatest among individuals with low CD4 cell counts (PUBMED:14502009). This finding is supported by a systematic review which concluded that secondary preventive therapy with anti-TB drugs significantly decreased the incidence of recurrent TB in HIV-infected individuals, with relative reductions varying from 55.0% to 82.1% (PUBMED:27911140). Given these results, there is a strong case for international TB preventive therapy guidelines for HIV-infected individuals to be expanded to include recommendations for secondary preventive therapy in settings where TB prevalence is high. The evidence suggests that such a policy change could have a substantial impact on reducing the burden of TB among HIV-infected populations in high-prevalence areas like Southern Africa. Moreover, the cost-effectiveness of scaling up short-course TB preventive therapy among people living with HIV in high-burden countries has been demonstrated, providing further support for policy change. The IMPAACT4TB (I4TB) initiative showed that TB preventive therapy could prevent thousands of cases of active TB and TB deaths over 10 years, with an estimated incremental cost-effectiveness of $1580 per disability-adjusted life year (DALY) averted. If the initiative achieves its desired catalytic effects, the cost-effectiveness could be even more favorable (PUBMED:33107219). Therefore, considering the evidence of the efficacy and cost-effectiveness of secondary IPT, it may indeed be time to change policy to better prevent TB among HIV-infected individuals in high TB burden settings like Southern Africa.
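A brief illustrative calculation may help readers check two of the headline figures quoted in this record. The sketch below is not part of any abstract: the incidence rates and costs are taken from the abstracts above, while the DALY totals are back-calculated assumptions, since the I4TB abstract reports only the investment and the cost per DALY.

```python
# Illustrative arithmetic only; rates and costs come from the abstracts above,
# and the DALY totals are inferred, not reported values.

# Recurrent TB incidence with and without IPT (PUBMED:14502009), per 100 person-years
rate_with_ipt, rate_without_ipt = 8.6, 19.1
irr = rate_with_ipt / rate_without_ipt
print(f"Incidence rate ratio: {irr:.2f}")  # ~0.45, matching the reported IRR

# Cost-effectiveness (PUBMED:33107219): cost per DALY = total cost / DALYs averted
direct_cost, direct_cost_per_daly = 52.5e6, 1580        # USD
catalytic_cost, catalytic_cost_per_daly = 546e6, 713    # USD
print(f"Implied DALYs averted (direct): {direct_cost / direct_cost_per_daly:,.0f}")
print(f"Implied DALYs averted (catalytic): {catalytic_cost / catalytic_cost_per_daly:,.0f}")
```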
Instruction: Can vibroacoustic stimulation improve the efficiency of a tertiary care antenatal testing unit? Abstracts: abstract_id: PUBMED:22873632 Can vibroacoustic stimulation improve the efficiency of a tertiary care antenatal testing unit? Objective: Our primary objective was to determine whether vibroacoustic stimulation (VAS) decreases time to fetal reactivity in the antenatal testing unit (ATU) of a tertiary care center. Methods: We performed a prospective, quality assurance initiative to determine whether VAS could increase the efficiency of our ATU. On pre-specified "VAS days," VAS was applied for 3 s, if the non-stress test was non-reactive in the first 10 min. Generalized estimating equations models were used to account for within subject correlation due to multiple appointments per patient. Results: VAS use was associated with a 3.76-min reduction in time to reactivity (21.79 vs 25.55, p = 0.011) and a 56% reduction in the need for a biophysical profile (OR: 0.44, 95% CI: 0.21-0.90). Overall, however, we found no significant decrease in time spent on the monitor or in the ATU. Conclusion: Compliance with a strict VAS protocol may improve the efficiency of increasingly busy ATUs. abstract_id: PUBMED:24318543 Fetal vibroacoustic stimulation for facilitation of tests of fetal wellbeing. Background: Acoustic stimulation of the fetus has been suggested to improve the efficiency of antepartum fetal heart rate testing. Objectives: To assess the advantages and disadvantages of the use of fetal vibroacoustic stimulation in conjunction with tests of fetal wellbeing. Search Methods: We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (30 September 2013). Selection Criteria: All published and unpublished randomised controlled trials assessing the merits of the use of fetal vibroacoustic stimulation in conjunction with tests of fetal wellbeing. Data Collection And Analysis: All review authors independently extracted data and assessed trial quality. Authors of published and unpublished trials were contacted for further information. Main Results: Altogether 12 trials with a total of 6822 participants were included. Fetal vibroacoustic stimulation reduced the incidence of non-reactive antenatal cardiotocography test (nine trials; average risk ratio (RR) 0.62, 95% confidence interval (CI) 0.48 to 0.81). Vibroacoustic stimulation compared with mock stimulation evoked significantly more fetal movements when used in conjunction with fetal heart rate testing (one trial, RR 0.23, 95% CI 0.18 to 0.29). Authors' Conclusions: Vibroacoustic stimulation offers benefits by decreasing the incidence of non-reactive cardiotocography and reducing the testing time. Further randomised trials should be encouraged to determine not only the optimum intensity, frequency, duration and position of the vibroacoustic stimulation, but also to evaluate the efficacy, predictive reliability, safety and perinatal outcome of these stimuli with cardiotocography and other tests of fetal wellbeing. abstract_id: PUBMED:11279788 Fetal vibroacoustic stimulation for facilitation of tests of fetal wellbeing. Background: Acoustic stimulation of the fetus has been suggested to improve the efficiency of antepartum fetal heart rate testing. Objectives: The objective of this review was to assess the merits or adverse effects of the use of fetal vibroacoustic stimulation in conjunction with tests of fetal wellbeing. Search Strategy: We searched the Cochrane Pregnancy and Childbirth Group trials register. 
Date of last search: October 2000. Selection Criteria: All published and unpublished randomized controlled trials assessing the merits of the use of fetal vibroacoustic stimulation in conjunction with tests of fetal wellbeing. Data Collection And Analysis: Both reviewers independently extracted data and assessed trial quality. Authors of published and unpublished trials were contacted for further information. Main Results: A total of seven trials with a total of 4325 participants were included. Fetal vibroacoustic stimulation reduced the incidence of non-reactive antenatal cardiotocography tests (odds ratio (OR) 0.61, 95% confidence interval (CI) 0.49-0.75) and reduced the overall mean cardiotocography testing time (weighted mean difference (WMD) -4.55 minutes, 95% CI -5.96 minutes to -3.14 minutes). Vibroacoustic stimulation evoked more fetal movements than mock stimulation when used in conjunction with fetal heart rate testing (OR 0.08, 95% CI 0.06-0.12). Reviewer's Conclusions: Vibroacoustic stimulation offers benefits by decreasing the incidence of non-reactive cardiotocography and reducing the testing time. Further randomized trials should be encouraged to determine not only the optimum intensity, frequency, duration and position of the vibroacoustic stimulation, but also to evaluate the efficacy, predictive reliability, safety and perinatal outcome of these stimuli with cardiotocography and other tests of fetal wellbeing. abstract_id: PUBMED:7796554 Vibroacoustic stimulation. Vibroacoustic stimulation of the human fetus profoundly alters fetal behavior and heart rate. Many authors have reported success using this technique to improve the efficiency of antepartum fetal heart rate testing without changing the predictive reliability of the tests. Vibroacoustic stimulation has other potential advantages in the antepartum assessment of fetal well-being and in provoking fetal activity to improve ultrasonic visualization. From an experimental standpoint, vibroacoustic stimulation offers a unique opportunity to assess how the fetus responds to the external environment. The available information suggests that exposure of the fetus to vibroacoustic stimulation is clinically safe. Additional research is needed to characterize the optimal frequency, duration, intensity, and choice of stimulus to provide consistent responses. The literature presents a confusing array of studies using different methods, which makes comparison of results among institutions and investigators difficult. Vibroacoustic stimulation appears to be a reasonable and safe clinical technique. Additional prospective investigation is necessary to characterize further how this technique can be more useful clinically. abstract_id: PUBMED:8119607 The role of vibroacoustic stimulation in antenatal fetal assessment. The value of antepartum fetal heart rate testing has been debated in the last few years. According to several studies, fetal sleeping periods lead to falsely nonreactive tests. These increase the risk and costs of obstetric care. A randomized prospective clinical trial was undertaken in high-risk pregnancies to compare the standard nonstress test with fetal vibroacoustic stimulation. Acoustic stimulation for 5 seconds with a 75 Hz frequency and 74 dB intensity device was applied to the patients in the study group. Nonreactive results were obtained in 11% of the control group and 3.4% in the study group (z = 2.07, p = 0.00116). A reduction of 5 minutes in the length of the test was observed in the study group.
Fetal acoustic stimulation should be considered an alternative to improve the efficacy of nonstress testing, by reducing falsely nonreactive test results and the time it takes to perform them. abstract_id: PUBMED:23983739 Comparison of halogen light and vibroacoustic stimulation on nonreactive fetal heart rate pattern. Background: One of the first-line assessment tools for fetal surveillance is the nonstress test (NST), although it is limited by a high rate of false-nonreactive results. This study was performed to investigate if external stimulation from vibroacoustic and halogen light could help in provoking fetal responsiveness and altering NST results. Materials And Methods: This is a clinical trial. Sampling was done from April to July 2010. One hundred pregnant women with nonreactive NST for 20 min were allocated into two groups: Vibroacoustic stimulated NST (VNST, n = 50) who received vibration from a standard fetal vibratory stimulator and halogen light stimulated NST (LNST, n = 50) who received a halogen light source for 3 and 10 sec, respectively. Results were compared with each other and then compared to biophysical profile (BPP) scores as a backup test. We used the Mann-Whitney U test, Chi-square test, and Fisher's exact test to compare the variables in the two groups through SPSS version 14. P < 0.05 was considered statistically significant. Results: Following stimulations, 68% of nonreactive subjects in the halogen light stimulation group and 62% in the vibroacoustic stimulation group changed to reactive patterns. Time to onset of the first acceleration (VNST: 2.17 min; LNST: 2.27 min) and the test duration (VNST: 4.91 min; LNST: 5.26 min) were the same in the two groups. In VNST, 89.5% and, in LNST, 87.5% of nonreactive results were followed by a BPP score of 8. There was no significant relation between stimulus NSTs and BPPs. Conclusion: Vibroacoustic and light stimulation offer benefits by decreasing the incidence of nonreactive results and reducing the test time. Both halogen light stimulation and vibroacoustic stimulation are safe and efficient in fetal well-being assessment services. abstract_id: PUBMED:27011399 Antenatal Depression in a Tertiary Care Hospital. Context: Antenatal depression is not easily visible, though the prevalence is high. The idea of conducting this study was conceived from this fact. Aims And Objectives: The aim of this study was to estimate the prevalence of antenatal depression and identify the risk factors, for early diagnosis and intervention. Settings And Design: The study conducted in a Tertiary Care Hospital was prospective and cross-sectional. Materials And Methods: Pregnant women between 18 and 40 years of age were studied. The sample size comprised 318 women. They were assessed using the Edinburgh Postnatal Depression Scale (EPDS) score, Structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition Axis I Disorders, Life Event Stress Scale (LESS), and Life Distress Inventory (LDI). Statistical Analysis Used: The Statistical Package for Social Sciences (SPSS) Version 15 software was used to calculate percentages, means, and correlations, and P < 0.05 was considered significant. Results: Prevalence of antenatal depression in the study was 12.3%.
Correlation of the sociodemographic factors, obstetric factors, LDI, and LESS with EPDS scores showed statistical significance for unplanned pregnancy, distress associated with relationships, physical health, financial situation, social life, presence of personality disorder, being a homemaker, and higher educational status. Conclusion: The study showed a high prevalence rate of depression and identified risk factors. abstract_id: PUBMED:36705138 Irregular Antenatal Care Attendance among Pregnant Women during COVID-19 Pandemic in a Tertiary Care Centre: A Descriptive Cross-sectional Study. Introduction: The coronavirus disease 2019 pandemic has made access to antenatal care services difficult, which could lead to serious implications for the health of mothers and fetuses. There are limited studies regarding its impact on pregnant women. This study aimed to find out the prevalence of irregular antenatal care attendance among pregnant women during the COVID-19 pandemic in a tertiary care centre. Methods: A descriptive cross-sectional study was carried out among pregnant women attending antenatal care visits at the Department of Gynaecology and Obstetrics in a tertiary care centre from 23 July 2021 to 5 September 2021. Ethical approval was granted by the Institutional Review Committee (Reference number: 077/078/67). Convenience sampling was done. Point estimate and 95% Confidence Interval were calculated. Results: Among 196 pregnant women, 49 (25%; 95% Confidence Interval: 18.96-31.06) had irregular antenatal care attendance during the COVID-19 pandemic. Conclusions: The prevalence of irregular antenatal care attendance during the COVID-19 pandemic was lower than in other studies done in similar settings. Antenatal care is crucial to prevent maternal and fetal morbidity and mortality; hence, uninterrupted antenatal care services should be provided even during crisis situations like the COVID-19 pandemic. abstract_id: PUBMED:10577612 The effect of antenatal steroid administration on the fetal response to vibroacoustic stimulation. Background: Betamethasone transiently suppresses multiple fetal biophysical activities, including breathing movements, limb and trunk movements, heart rate variability, and heart rate accelerations. Unnecessary iatrogenic delivery of preterm fetuses due to the false diagnosis of fetal compromise has been described in this setting. The sonographically observed startle response of the fetus to vibroacoustic stimulation has been described as another modality to provide reassurance about fetal well-being. It is unknown, however, whether the startle response is also suppressed by betamethasone. The purpose of this study was to examine the effect of betamethasone on this biophysical parameter. Methods: A prospective cohort study. Vibroacoustic stimulation was applied to the maternal abdomen and fetal movement responses were sonographically observed prior to (0 hours), 48 hours after, and 96 hours after betamethasone administration. We recorded the presence or absence of the fetal startle response, and, if a response was present, graded semi-quantitatively the intensity of the movements (vigorous versus sluggish). Results: Twenty-two of 26 fetuses (84.6%) displayed a vigorous vibroacoustic startle response prior to betamethasone administration, in comparison to three of 26 fetuses (11.5%) at 48 hours after exposure (p < 0.0001). Eleven fetuses and eight fetuses displayed no startle response at all (p < 0.0005), or a sluggish response only (p < 0.0005) at 48 hours, respectively.
At 96 hours after betamethasone exposure, no differences in the number of fetuses with a vigorous, sluggish, or absent response were observed in comparison to 0 hours. Stratification of cases by gestational age groups of 28-30 weeks versus 31-34 weeks showed similar response patterns. Conclusion: Antenatal betamethasone exposure transiently suppresses the sonographically observed fetal startle response to vibroacoustic stimulation. Accordingly, this modality cannot be used for the ascertainment of fetal well-being of steroid exposed fetuses. Betamethasone seems to suppress central nervous system dependent biophysical activities. including the brain-stem dependent vibroacoustic startle reflex. abstract_id: PUBMED:8437804 Fetal vibroacoustic stimulation. Objective: To review the literature on fetal vibroacoustic stimulation as a method of changing the fetal sleep state during nonstress testing. Data Sources: The MEDLINE data base and reference lists were used to select articles pertaining to human and animal fetal vibroacoustic stimulation. Methods Of Study Selection: We reviewed the English-language literature with respect to sound intensity, stimulus duration, and repetition rate as possible means of altering fetal sleep-wake cycles. Data Extraction And Synthesis: Several criteria have been used to measure fetal response: gestational age, fetal heart rate, fetal movements, fetal breathing movements, fetal habituation, and fetal behavioral states. The use of vibroacoustic stimulation for antepartum and intrapartum fetal assessment and safety issues are also reviewed. Conclusion: Although further research is needed, the ability of vibroacoustic stimulation to elicit FHR accelerations has been established, thus decreasing the false-positive rate associated with nonreactive nonstress testing. Answer: Yes, vibroacoustic stimulation (VAS) can improve the efficiency of a tertiary care antenatal testing unit. Studies have shown that VAS is associated with a reduction in time to fetal reactivity during non-stress tests (NSTs), which are a component of antenatal testing. Specifically, the use of VAS was linked to a 3.76-minute reduction in time to reactivity and a 56% reduction in the need for a biophysical profile (BPP), although there was no significant decrease in overall time spent on the monitor or in the antenatal testing unit (ATU) (PUBMED:22873632). Furthermore, a review of multiple trials indicated that VAS reduced the incidence of non-reactive antenatal cardiotocography tests and reduced the overall mean cardiotocography testing time, suggesting that VAS can make antenatal heart rate testing more efficient (PUBMED:11279788). Another study found that fetal vibroacoustic stimulation during NSTs reduced falsely nonreactive test results and shortened the duration of the tests (PUBMED:8119607). Additionally, a comparison of different stimulation methods showed that both vibroacoustic and halogen light stimulation decreased the incidence of nonreactive results and reduced the test time, indicating that VAS is a safe and efficient method for fetal well-being assessment (PUBMED:23983739). In conclusion, compliance with a strict VAS protocol may improve the efficiency of increasingly busy ATUs by decreasing the time to fetal reactivity and reducing the need for additional testing such as BPPs (PUBMED:22873632).
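Several of the abstracts in this record summarize trial results as risk ratios or odds ratios with 95% confidence intervals. As a hedged illustration of how such figures are typically derived from a single 2x2 table, the sketch below uses hypothetical counts (they are not taken from any of the trials above) and the standard large-sample formula for the confidence interval of a log risk ratio.

```python
import math

# Hypothetical 2x2 counts: non-reactive cardiotocography with and without
# vibroacoustic stimulation. These numbers are invented for illustration.
events_vas, total_vas = 30, 400     # stimulated arm
events_ctrl, total_ctrl = 50, 400   # mock/no-stimulation arm

rr = (events_vas / total_vas) / (events_ctrl / total_ctrl)

# Standard error of log(RR) for a single 2x2 table
se_log_rr = math.sqrt(1 / events_vas - 1 / total_vas + 1 / events_ctrl - 1 / total_ctrl)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
```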
Instruction: Is poor mental health a risk factor for retirement? Abstracts: abstract_id: PUBMED:21461932 Is poor mental health a risk factor for retirement? Findings from a longitudinal population survey. Purpose: Poor mental health may influence people's decisions about, and ability to, keep working into later adulthood. The identification of factors that drive retirement provides valuable information for policymakers attempting to mitigate the effects of population ageing. This study examined whether mental health predicts subsequent retirement in a general population sample, and whether this association varied with the timing of retirement. Methods: Longitudinal data from 2,803 people aged 45-75 years were drawn from five waves of the Household Income and Labour Dynamics in Australia (HILDA) survey. Discrete-time survival analyses were used to estimate the association between mental health and retirement. Mental health was measured using the Mental Health Index (MHI-5). The relative influences of other health, social, financial, and work-related predictors of retirement were considered to determine the unique contribution of mental health to retirement behaviour. Results: Poor mental health was associated with higher rates of retirement in men (hazard rate ratio, HRR 1.19, 95% CI 1.01-1.29), and workforce exit more generally in women (HRR 1.14, 95% CI 1.07-1.22). These associations varied with the timing of retirement and were driven by early retirees specifically. Physical functioning, income, social activity, job conditions (including job stress for women and job control for men), and aspects of job satisfaction also predicted subsequent retirement. Conclusions: Poor mental and physical health predict workforce departure in mid-to-late adulthood, particularly early retirement. Strategies to accommodate health conditions in the workplace may reduce rates of early retirement and encourage people to remain at work into later adulthood. abstract_id: PUBMED:34062013 The impact of retirement on mental health. After 2020, with Chinese baby boomers growing old, more and more working people will step into retirement. What influence will retirement have on the mental health of older adults, and are existing findings on retirement and mental health applicable to China's current conditions? The answers bear on the well-being of older adults and on future policy orientation. Based on the China Family Tracking Survey data in 2016 and 2018, the paper employed Ordinary Least Squares, Two-Stage Least Squares, and Propensity Score Matching methods to investigate the effect of retirement on the mental health of older adults in China. Results show that retirement can significantly reduce depression and has a positive impact on their mental health, and that having no chronic diseases, poor economic status and fewer years of education are conducive to improved mental health among the elderly. Further, the mechanisms differ between the sexes: while exercise is a positive mediator for both sexes, reading and family dinners are positive mediators for men but not for women. abstract_id: PUBMED:30822608 Saved by retirement: Beyond the mean effect on mental health. We analyze the causal effect of retirement on mental health, exploiting differences in retirement eligibility ages across countries and over time using data from the Survey of Health, Ageing and Retirement in Europe.
We estimate not only average effects, but also use distributional regression to examine whether these effects are unequally distributed across the mental health distribution. We find unequally distributed protective effects of retirement on mental health. These gains are larger among those just below and above the clinically defined threshold of being at risk of depression. The preserving effects are larger for women and blue-collar workers. Our results suggest that the magnitude of the protective effect is independent of the availability of family support. abstract_id: PUBMED:28484145 Comparison of the effects of poor health and low income on early retirement: a systematic review and meta-analysis. The main aim of this study was to estimate the effects of poor health and low income on early retirement. For this purpose, a systematic review and meta-analysis were conducted. Web of Science, PUBMED and Scopus databases were searched systematically. In total, 17 surveys were included in the meta-analysis. These studies were conducted in 13 countries. Finally, a meta-regression was performed to examine the effect of welfare system type on the effect sizes of poor health and low income. The results of this study showed that poor health had an effect on the risk of early retirement (poor health pooled effect size: 1.279, CI: 1.15-1.41; low income pooled effect size: 1.042, CI: 0.92-1.17; poor health pooled marginal effect: 0.046, CI: -0.03 to 0.12; low income pooled marginal effect: -0.002, CI: -0.003 to 0.000). The results of this study showed that the association between poor health and early retirement was stronger than that between low income and early retirement. abstract_id: PUBMED:16171915 Retirement and mental health: analysis of the Australian national survey of mental health and well-being. Nation-wide research on mental health problems amongst men and women during the transition from employment to retirement is limited. This study sought to explore the relationship between retirement and mental health across older adulthood, whilst considering age and known risk factors for mental disorders. Data were from the 1997 National Survey of Mental Health and Well-being, a cross-sectional survey of 10,641 Australian adults. The prevalence of depression and anxiety disorders was analysed in the sub-sample of men (n = 1928) and women (n = 2261) aged 45-74 years. Mental health was assessed using the Composite International Diagnostic Instrument. Additional measures were used to assess respondents' physical health, demographic and personal characteristics. The prevalence of common mental disorders diminished across increasing age groups of men and women. Women aged 55-59, 65-69, and 70-74 had significantly lower rates of mental disorders than those aged 45-49. In contrast, only men aged 65-69 and 70-74 demonstrated significantly lower prevalence compared with men aged 45-49. Amongst younger men, retirees were significantly more likely to have a common mental disorder relative to men still in the labour force; however, this was not the case for retired men of, or nearing, the traditional retirement age of 65. Men and women with poor physical health were also more likely to have a diagnosable mental disorder. The findings of this study indicate that, for men, the relationship between retirement and mental health varies with age. The poorer mental health of men who retire early is not explained by usual risk factors.
Given current policy changes in many countries to curtail early retirement, these findings highlight the need to consider mental health, and its influencing factors, when encouraging continued employment amongst older adults. abstract_id: PUBMED:16614785 Mental health and the timing of men's retirement. Background: Analysis of the Psychiatric Morbidity Survey of Great Britain showed that the prevalence of common mental disorders was lower amongst men at or above Britain's state pension age of 65, relative to younger men. Retirees below this age had consistently higher rates of mental disorders than working men. In contrast, the low prevalence of mental disorders amongst retirees aged 65 and older was similar to that of their working peers. The aim of this analysis was to investigate this pattern of results in a national sample of Australian men, and the mediating role of socio-demographic factors. Method: Data were from the Household, Income and Labour Dynamics (HILDA) in Australia survey (2003). The analyses included men aged 45-74 years who were active in the labour force (n = 1309), or retired (n = 635). Mental health was assessed using the mental health scale from the Short-Form 36 Health Questionnaire. Results: Retirees were more likely to have mental health problems than their working peers, however this difference was progressively smaller across age groups. For retirees above, though not below, the age of 55 this difference was explained by poorer physical functioning. When age at retirement was considered it was found that early retirees who were now at or approaching the conventional retirement age did not display the substantially elevated rates of mental health problems seen in their younger counterparts. Further, men who had retired at age 60 or older did not display an initially elevated rate of mental health problems. Conclusions: The association between retirement and mental health varies across older adulthood. Retired British and Australian men below the conventional retirement age of 65 are more likely to have mental health problems relative to their working peers, and retirees above this age. However, poor mental health appears to be linked to being retired below this age rather than an enduring characteristic of those who retire early. abstract_id: PUBMED:29975397 Women's Mental Health After Retirement. The aim of the current study was to examine mental health outcomes in retired women and determine whether relationships existed among mental health outcomes, sociodemographic characteristics, and type of retirement (i.e., voluntary or forced). A cross-sectional study was conducted with 80 women ages 55 and older residing in five southeastern states. Women had retired at least part-time from working outside of the home. Sociodemographic variables, diagnosis of depression, diagnosis of cognitive impairment, and health-related quality of life were assessed. Women with forced retirement had worse mental health compared to those who retired voluntarily. Minority women had higher rates of forced retirement compared with White women. Poorer mental health outcomes for women with forced retirement suggest the need for careful consideration of this transition as a socially determined health factor for retired women, especially minority women. Clinicians need to assess women for mental health indicators during the transition to retirement and provide educational and therapeutic resources to promote mental health during the transition from working life to retirement. 
[Journal of Psychosocial Nursing and Mental Health Services, 56(7), 37-45.]. abstract_id: PUBMED:25271125 Retirement, age, gender and mental health: findings from the 45 and Up Study. Objectives: To examine the relationships of retirement and reasons for retirement with psychological distress in men and women at the age of 45-79 years. Method: Data from 202,584 Australians participating in the large-scale 45 and Up Study were used. Psychological distress was measured by the Kessler psychological distress scale. Associations of different work statuses and reasons for retirement with psychological distress were assessed for men and women at different ages using logistic regression. Results: Being fully retired or unemployed was associated with high levels of psychological distress compared to being in paid work for men and women aged 45-64 (p < 0.0001), and for men aged 65-74 years (p ≤ 0.0014). At the age of 75-79 years, there was no difference in psychological distress between different work statuses. Among retirees, retirement due to ill health, being made redundant or caring duties was associated with high levels of psychological distress. Conclusion: The association between work and mental health underscores the importance of policies and strategies to encourage and enable people to continue in the workforce after age 55, particularly for men. Important reasons for retirement with worse mental health outcomes include redundancy, ill health and needing to care for family or a friend. These circumstances will affect whether a person can continue working and their risk of poor mental health, and both considerations should be addressed in developing approaches for maintaining older workers or assisting them with their retirement transition. abstract_id: PUBMED:11995739 Retirement and mental health. Objectives: This research examines whether retirement is associated with mental health and how one's daily pursuits mediate this association. It tests two perspectives from the sociology of work and the sociology of mental health. Methods: Using data from two surveys, the 1995 Aging, Status, and Sense of Control and the 1987-1988 National Survey of Families and Households, regression analysis was used to examine retirement, activities, and well-being. Results: In support of the view that work is alienating and retirement liberating, retirees experienced less anxiety and distress and higher positive affect. Retirees' lower anxiety and distress were explained by activity characteristics. In support of the view that work is empowering and retirement demoralizing, retirement is associated with a lower sense of control in both data sets, in part because of the daily pursuits. Retirement was not associated with depressive symptomatology. Discussion: Suggestions for creating opportunities that enhance well-being are discussed. abstract_id: PUBMED:37403767 Involuntary delayed retirement and mental health of older adults. Objective: This study examined the association of multiple mental health measures with involuntary delayed retirement (IDR) in working older adults (≥ 65 years) in the USA. Methods: Data were derived from the Health and Retirement Study, focusing on two combined waves of working older adults in 2010 and 2012. IDR was measured as the desire to stop working but the inability to do so due to financial constraints. In addition, mental health outcomes included depression, anxiety, anger-in, and anger-out.
Using Stata 16.0, primary analyses were conducted for descriptive statistics and multivariable logistic regression. The odds ratios were reported with 95% confidence intervals (CIs). Results: Older adults who reported IDR were more likely to have depression (OR = 3.20, CI = 1.03-9.88), anxiety (OR = 2.12, CI = 1.00-5.18), and anger-in (OR = 1.71, CI = 1.12-2.60) compared to those who did not report IDR. However, IDR was not significantly associated with anger-out in older adults who worked past the traditional retirement age. Conclusion: The findings suggest that IDR is likely to act as a stressor and affects the mental health of older adults aged 65 and more. Policymakers should pay more attention to helping older adults maintain positive mental health even if they are required to work past retirement age. Answer: Yes, poor mental health is a risk factor for retirement. Studies have found that poor mental health can predict workforce departure in mid-to-late adulthood, particularly early retirement. For instance, one study using longitudinal data from the Household Income and Labour Dynamics in Australia (HILDA) survey found that poor mental health was associated with higher rates of retirement in men and workforce exit more generally in women, especially among early retirees (PUBMED:21461932). Another study that analyzed the Psychiatric Morbidity Survey of Great Britain and the HILDA survey found that retirees below the conventional retirement age of 65 were more likely to have mental health problems compared to their working peers, suggesting that poor mental health is linked to being retired below this age (PUBMED:16614785). Additionally, a systematic review and meta-analysis showed that poor health, which includes mental health, had a stronger effect on the risk of early retirement compared to low income (PUBMED:28484145). However, the relationship between retirement and mental health can be complex. For example, some research indicates that retirement can have a positive impact on mental health, reducing depression among older adults in China (PUBMED:34062013). Another study found that retirement had protective effects on mental health, particularly among those at risk of depression, with larger gains for women and blue-collar workers (PUBMED:30822608). Moreover, the association between retirement and mental health can vary with age, gender, and the circumstances of retirement, such as whether it was voluntary or forced (PUBMED:29975397, PUBMED:25271125). In summary, while poor mental health is a risk factor for retirement, the effects of retirement on mental health can vary depending on individual circumstances and the context of the retirement.
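The retirement abstracts above report logistic-regression odds ratios with 95% confidence intervals (for example, OR = 3.20, CI = 1.03-9.88 for depression under involuntary delayed retirement in PUBMED:37403767). As a rough sketch of how such intervals are obtained, the snippet below exponentiates a regression coefficient and its Wald interval; the coefficient and standard error are hypothetical values chosen only so the output lands near that published example.

```python
import math

# Hypothetical coefficient and standard error from a fitted logistic regression;
# OR = exp(beta), 95% CI = exp(beta +/- 1.96 * SE).
beta, se = 1.163, 0.575

or_point = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)

print(f"OR = {or_point:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
# With these illustrative inputs the output is close to the depression result
# quoted above (OR = 3.20, CI = 1.03-9.88).
```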
Instruction: Prevalence of malocclusion among mouth breathing children: do expectations meet reality? Abstracts: abstract_id: PUBMED:19282036 Prevalence of malocclusion among mouth breathing children: do expectations meet reality? Objective: The aim of this study was to report epidemiological data on the prevalence of malocclusion among a group of children, consecutively admitted to a referral mouth breathing otorhinolaryngological (ENT) center. We assessed the association between the severity of the obstruction by adenoids/tonsils hyperplasia or the presence of allergic rhinitis and the prevalence of class II malocclusion, anterior open bite and posterior crossbite. Methods: Cross-sectional, descriptive study, carried out at an Outpatient Clinic for Mouth-Breathers. Dental inter-arch relationship and nasal obstructive variables were diagnosed and the appropriate cross-tabulations were done. Results: Four hundred and one patients were included. Mean age was 6 years and 6 months (S.D.: 2 years and 7 months), ranging from 2 to 12 years. All subjects were evaluated by otorhinolaryngologists to confirm mouth breathing. Adenoid/tonsil obstruction was detected in 71.8% of this sample, regardless of the presence of rhinitis. Allergic rhinitis alone was found in 18.7% of the children. Non-obstructive mouth breathing was diagnosed in 9.5% of this sample. Posterior crossbite was detected in almost 30% of the children during primary and mixed dentitions and 48% in permanent dentition. During mixed and permanent dentitions, anterior open bite and class II malocclusion were highly prevalent. More than 50% of the mouth breathing children carried a normal inter-arch relationship in the sagittal, transversal and vertical planes. Univariate analysis showed no significant association between the type of the obstruction (adenoids/tonsils obstructive hyperplasia or the presence of allergic rhinitis) and malocclusions (class II, anterior open bite and posterior crossbite). Conclusions: The prevalence of posterior crossbite is higher in mouth breathing children than in the general population. During mixed and permanent dentitions, anterior open bite and class II malocclusion were more likely to be present in mouth breathers. Although more children showed these malocclusions, most mouth breathing children evaluated in this study did not match the expected "mouth breathing dental stereotype". In this population of mouth breathing children, the obstructive size of adenoids or tonsils and the presence of rhinitis were not risk factors for the development of class II malocclusion, anterior open bite or posterior crossbite. abstract_id: PUBMED:24653601 An epidemiological study to know the prevalence of deleterious oral habits among 6 to 12 year old children. Background: This study was undertaken to assess the prevalence of deleterious oral habits among 6-12 year old school going children. Materials & Methods: A sample size of 832 children was finalized with a simple random sampling technique including 444 males and 388 females. To obtain demographic information and the presence of harmful oral habits, a closed-ended questionnaire was developed. Clinical evaluation was also done using mirror and water tests. Chi-square test was done to compare the prevalence of oral habits among different age groups and gender at p < 0.05. Results: Bruxism (17.3%) was most commonly seen followed by bottle feeding (10.1%), thumb sucking (8.7%), nail biting (5.8%), tongue thrusting (4.9%) and mouth breathing (4.3%).
The prevalence of all deleterious habits was higher among female children and also showed significant differences according to age. Conclusion: The data showed a high prevalence of these oral habits. This highlighted the need for preventive orthodontic treatment at an early age so that future occurrence of malocclusion can be avoided. How to cite the article: Garde JB, Suryavanshi RK, Jawale BA, Deshmukh V, Dadhe DP, Suryavanshi MK. An epidemiological study to know the prevalence of deleterious oral habits among 6 to 12 year old children. J Int Oral Health 2014;6(1):39-43. abstract_id: PUBMED:30131643 Prevalence of Deleterious Oral Habits among 3- to 5-year-old Preschool Children in Bhubaneswar, Odisha, India. Aim: Oral habits during and beyond preschool age are one of the important etiological factors in developing malocclusion and other ill effects on orofacial structures. The objective of the present study was to know the prevalence of deleterious oral habits among 3- to 5-year-old preschool children in Bhubaneswar, Odisha, India. Materials And Methods: This cross-sectional study was conducted among preschool children in the age group of 3 to 5 years in the city of Bhubaneswar, Odisha, India. To carry out this study, six private schools, two from each of the three electoral constituencies, were selected using a cluster sampling technique. A total of 500 students studying in LKG and UKG, and their respective mothers/caregivers, were selected for the study as per the inclusion/exclusion criteria. Prevalence of different oral habits in children was calculated from the data obtained. Using Statistical Package for the Social Sciences (SPSS), version 17.0 software, the Chi-square test was applied to compare the differences present between boys and girls and their significant values (p < 0.05). Results: The result of this study showed a high prevalence of oral habits (36%) among preschool children in Bhubaneswar, Odisha, India. Lip biting was found to be the most prevalent habit (13.4%), followed closely by thumb sucking (12.8%), bruxism (12.8%), and mouth breathing (11%). Conclusion: The study revealed a great dearth of a well-established dental education program for preschool children as well as their parents, caretakers, teachers, and pediatricians in order to provide effective and timely care to the children. How to cite this article: Dhull KS, Verma T, Dutta B. Prevalence of Deleterious Oral Habits among 3- to 5-year-old Preschool Children in Bhubaneswar, Odisha, India. Int J Clin Pediatr Dent 2018;11(3):210-213. abstract_id: PUBMED:25344319 The prevalence of malocclusion and oral habits among 5-7-year-old children. Background: Digit sucking, tongue thrust swallowing, and mouth breathing are potential risk factors for development of malocclusion. The purpose of this study was to verify the prevalence of different occlusal traits among 5-7-year-old children and assess their relationship with oral habits. Material And Methods: The study included 503 pre-school children (260 boys and 243 girls) with a mean age of 5.95 years. Different occlusal traits were verified by intraoral examination. Oral habits were diagnosed using data gathered from clinical examination of occlusion and extra-oral assessment of the face, combined with a questionnaire for parents. Results: The study demonstrated that 71.4% of the children presented with 1 or more attributes of malocclusion and 16.9% had oral habits. The vertical and sagittal malrelation of incisors, as well as spacing, were the predominant features.
This study showed that digit suckers have a higher incidence of anterior open bite (P=0.013) and posterior crossbite (P=0.005). The infantile type of swallowing demonstrated a strong association (P=0.001) with anterior open bite. Conclusions: Non-nutritive sucking habits and tongue thrust swallowing are significant risk factors for the development of anterior open bite and posterior crossbite in pre-school children. abstract_id: PUBMED:33658172 The prevalence of malocclusion is higher in schoolchildren with signs of hyperactivity. Introduction: Attention deficit-hyperactivity disorder is a behavioral disorder characterized by a lack of focus, impulsive behavior, and/or excessive activity. This research aimed to evaluate the association between signs of attention deficit-hyperactivity disorder and malocclusion in schoolchildren. Methods: A cross-sectional study was conducted with a representative sample of 633 children aged 7-12 years. The children were clinically examined for malocclusion using the Dental Aesthetic Index. The predominant breathing pattern was also determined. Parents answered a questionnaire addressing socioeconomic characteristics and the presence of nonnutritive sucking habits. The Swanson, Nolan, and Pelham Scale-IV was filled out by both parents and teachers to compare behavioral patterns. The children were submitted to a neuropsychological evaluation using the Raven's Colored Progressive Matrix Test. Data analysis involved the chi-square test and Poisson regression analysis. Results: The prevalence of malocclusion was 42% higher among children with signs of hyperactivity reported by both parents and teachers (prevalence ratio [PR], 1.42; 95% confidence interval [CI], 1.11-1.81; P = 0.004). In the final Poisson regression model, the prevalence of malocclusion was lower among schoolchildren aged 11 and 12 years (PR, 0.62; 95% CI, 0.52-0.73; P < 0.001) and higher among those who used a pacifier for at least 4 years (PR, 1.25; 95% CI, 1.02-1.54; P = 0.029) as well as those classified as mouth breathers (PR, 1.28; 95% CI, 1.09-1.51; P = 0.003). Conclusions: The prevalence of malocclusion was higher among children with signs of hyperactivity independently of age, pacifier use, and mouth breathing. abstract_id: PUBMED:22276479 News on prevalence of vicious habits in children. Aim: The purpose of this study was to determine the prevalence of oral habits among children according to sex and location. Material And Methods: This study was conducted on 416 children aged 6-11 years old. The children were selected from urban and rural schools of Bacau and Iasi, Romania. Sex and location differences were calculated by using Fisher's exact test. Results: The results showed that the prevalence of oral habits was 38.70%; girls (61.5%) were more affected than boys, as were children from urban schools (65.8%). Mouth breathing was the commonest habit (59.5%), followed by thumb thrust (21.5%) and finger sucking (14.1%). The presence or absence of a habit was recorded by interview and clinical examination. Conclusions: The prevalence of vicious habits in children in our study was 38.70% - representative of the population prevalence of children aged 6-11 years in Romania. abstract_id: PUBMED:38313577 Malocclusion among children in Vietnam: Prevalence and associations with different habits. Background: This study aimed to measure the prevalence of malocclusion and identify associated factors among elementary school students in Vietnam.
Method: A cross-sectional study was conducted from March to December 2022 at six primary schools located in the province of Thai Binh, Vietnam. A total of 873 students were recruited for research purposes. Students were classified into normal occlusion or malocclusion classes I, II and III. Bad habits were examined. Multivariate logistic regression was used to detect associations. Results: The prevalence of malocclusion was 60.7%; 19.0% had Class I, 31.0% had Class II and 10.7% had Class III. A finger sucking habit was associated with Class I malocclusion (OR: 3.28) and Class II malocclusion (OR: 3.22). A lip biting habit was related to higher odds of Class II malocclusion (OR = 4.37) and Class III malocclusion (OR = 6.83). A tongue thrusting habit was associated with higher odds of Class I (OR: 5.25) and Class II malocclusion (OR: 6.42). Mouth breathing was related to a higher likelihood of Class II malocclusion (OR = 2.71). Early loss of deciduous teeth was associated with higher odds of Class III malocclusion (OR = 3.83). Conclusion: Findings showed a high prevalence of malocclusion, mostly Class II, in elementary school students in Vietnam. Bad habits such as finger sucking, biting the lower lip, tongue thrusting, mouth breathing, and early loss of deciduous teeth play important roles in developing malocclusion, which should be considered in the development of interventions. abstract_id: PUBMED:17653409 Prevalence of malocclusion and its association with functional alterations of the stomatognathic system in schoolchildren. The aim of this research was to estimate the prevalence of malocclusion among 12-year-old schoolchildren in Camaragibe, Pernambuco State, Brazil. Malocclusions were stratified by the degree of severity, and their association with alterations of the following functions was also analyzed: speech articulation, respiration, and deglutition. Occlusion was assessed by means of the Treatment Priority Index (TPI) and the functions referred to by means of the criteria used in routine clinical speech therapy by a single calibrated examiner (kappa values ranging from 0.64 to 1.00). Schoolchildren were selected randomly from 11 public schools. Of the 173 selected children, 82.1% presented malocclusion (95%CI: 76.4-87.8), with 38.2% classified as minor manifestations of malocclusion; 20.8% definite malocclusions; 13.3% severe malocclusions; and 9.8% very severe malocclusions. The conclusion was that there is a high repressed demand for orthodontic treatment, and that the greater the severity of the malocclusion, the stronger the possibility of association with functional alterations, which must be taken into consideration when planning appropriate public services for these conditions. abstract_id: PUBMED:26604539 Prevalence of Oral Habits among Eleven to Thirteen Years Old Children in Jaipur. Aim: Oral habits that are prevalent well beyond the normal age frequently result in facial deformity and malocclusions. The aim of the present study was to determine the prevalence of oral habits in 11- to 13-year-old children of Jaipur city. Methodology: The study included 1,000 children aged 11 to 13 years from different government and private schools of Jaipur city, who were screened for any deleterious habits at their school site. The statistical analysis was done using the Chi-square test. Results: The results showed that 18% of children had a habit of tongue thrusting, 17% mouth breathing and 3% nail biting.
Sex-wise prevalence showed that 18% of females and 20% of males had oral habits. Conclusion: The distribution of oral habits among children aged 11 to 13 years was evaluated, with tongue thrusting being the most prevalent and exhibiting minimal sexual predilection. How to cite this article: Sharma S, Bansal A, Asopa K. Prevalence of Oral Habits among Eleven to Thirteen Years Old Children in Jaipur. Int J Clin Pediatr Dent 2015;8(3):208-210. abstract_id: PUBMED:36119303 Prevalence of Dental Caries and Gingivitis among children with Intellectual Disability in India. Introduction: Down syndrome (DS) is one of the most common genetic abnormalities, with a highly variable prognosis. Oral diseases such as periodontal disease, malocclusion, mouth breathing, macroglossia, delayed teeth eruption, missing and malformed teeth, microdontia, diastema, and bruxism are common among individuals with DS. Hence, a study was planned to assess the caries experience and periodontal status of subjects with DS. Materials And Methods: A cross-sectional study was conducted among 92 DS patients in two different centers in Chennai and Pondicherry to assess the prevalence of dental caries and gingivitis, using dft/DMFT and the Modified Loe and Silness Gingival Index, respectively. The subjects were examined by three trained and calibrated examiners using a pre-tested and pre-validated proforma. Data were entered in a Microsoft Excel spreadsheet and analyzed using SPSS software (version 20). Descriptive statistics were used. A value of P < 0.05 was considered significant. Results: The mean DMFT was lower among males (1.47 ± 2.31) than among females (2.57 ± 4.57), a difference that was statistically significant. The gingival score was lower among males (0.85 ± 0.36) than among females (1.00 ± 1.00), a difference that was also statistically significant. The mean dft among males and females (0.49 ± 1.37 and 0.43 ± 1.09, respectively) did not differ significantly. Conclusion: Dental caries and gingival diseases are higher among children with DS. Dental caries was found both in primary dentition and permanent dentition. Answer: The study reported in abstract PUBMED:19282036 aimed to evaluate the prevalence of malocclusion among mouth breathing children and to assess the association between the severity of obstruction by adenoids/tonsils hyperplasia or the presence of allergic rhinitis and the prevalence of class II malocclusion, anterior open bite, and posterior crossbite. The results showed that posterior crossbite was detected in almost 30% of the children during primary and mixed dentitions and 48% in permanent dentition. During mixed and permanent dentitions, anterior open bite and class II malocclusion were highly prevalent. However, more than 50% of the mouth breathing children carried a normal inter-arch relationship in the sagittal, transversal, and vertical planes. The study concluded that the prevalence of posterior crossbite is higher in mouth breathing children than in the general population, and during mixed and permanent dentitions, anterior open bite and class II malocclusion were more likely to be present in mouth breathers. Despite these findings, most mouth breathing children evaluated did not match the expected "mouth breathing dental stereotype," and the obstructive size of adenoids or tonsils and the presence of rhinitis were not risk factors for the development of the malocclusions studied.
In summary, while certain types of malocclusion, such as posterior crossbite, anterior open bite, and class II malocclusion, were more prevalent among mouth breathing children, the expectation that all mouth breathing children would have malocclusions was not met by reality. Many mouth breathing children had a normal dental arch relationship, and the presence of adenoid/tonsil obstruction or allergic rhinitis was not significantly associated with these malocclusions.
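Several of the prevalence studies compiled above compare habit frequencies between groups with a chi-square test (PUBMED:30131643, PUBMED:26604539) or report odds ratios linking individual habits to malocclusion classes (PUBMED:38313577). As a purely illustrative sketch of the arithmetic behind those comparisons, the Python snippet below computes a 2x2 chi-square statistic and an odds ratio with a Wald confidence interval; the counts are invented for the example and are not taken from any of the cited studies.

```python
import math

# Hypothetical 2x2 table of habit prevalence by sex (illustrative counts only).
#            habit   no habit
# boys         90       410
# girls       110       390
a, b = 90, 410   # boys: habit present, habit absent
c, d = 110, 390  # girls: habit present, habit absent
n = a + b + c + d

# Pearson chi-square statistic for a 2x2 table (1 degree of freedom).
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
# Survival function of a chi-square with 1 df: P(X > x) = erfc(sqrt(x / 2)).
p_value = math.erfc(math.sqrt(chi2 / 2))

# Odds ratio (boys vs girls) with a 95% Wald confidence interval.
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"prevalence boys = {a / (a + b):.1%}, girls = {c / (c + d):.1%}")
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

With these made-up counts the test is not significant (p is roughly 0.11), which illustrates how similar-looking prevalences can fail to reach the p < 0.05 threshold used in the studies above.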
Instruction: Arrested pneumatization: witness of paranasal sinuses development? Abstracts: abstract_id: PUBMED:24709406 Arrested pneumatization: witness of paranasal sinuses development? Objectives: Recent radiological studies have demonstrated that formation of the sphenoid sinus is preceded by a phase of fatty transformation of the bone marrow, and then by a phase of fat involution prior to the appearance of an aerated cavity and that this process can sometimes be interrupted, resulting in the persistence of images of arrested pneumatisation. The objective of the study was to confirm the existence of arrested pneumatisation in the sphenoid bone, and to investigate the presence of similar images in the maxilla, frontal and ethmoid bones. Material And Methods: In this single-centre, retrospective study, 207 CT scans with no signs of mucosal opacity or sinus retention performed for assessment of septorhinoplasty or chronic nasal dysfunction were reviewed according to Welker's criteria to detect images of arrested pneumatisation. Results: Twenty-two patients presented 30 images suggestive of arrested pneumatisation of the maxilla (13/30), sphenoid (10/30) and frontal (7/30) bones. No images of arrested pneumatisation were observed in the ethmoid bone. Conclusions: The results of this study question the classical mechanisms of formation of the paranasal sinuses. According to the hypothesis of postnatal bone cavitation resulting from bone marrow involution and centripetal gas production, paranasal sinuses would constitute distinct organs that develop independently of the ethmoidal olfactory organ, which is formed from the embryonic cartilaginous olfactory capsule. abstract_id: PUBMED:7996621 The cartilaginous nasal capsule and embryonic development of human paranasal sinuses. Embryology is of importance to the surgeon both for the study of human developmental anatomy and for the analysis of congenital conditions resulting from malformed or arrested development. The embryonic development of the nose, and especially of the paranasal sinuses, is not yet fully understood. This histologic study of 23 fetal heads aged from 8 to 40 weeks of gestation demonstrates that all four pairs of paranasal sinuses are developed from the cartilaginous nasal capsule. The outpouching of the nasal mucous membranes is only a secondary phenomenon, rather than the primary force. This observation helps to elucidate the following clinical observations: (1) the association of maxillary sinus hypoplasia with hypoplasia of the uncinate process, (2) the origin of chondrosarcoma of the maxillary bone, and (3) pneumatization of the paranasal sinuses. abstract_id: PUBMED:37882847 The development of paranasal sinuses in patients with cystic fibrosis: sinuses volume analysis. Background: Cystic fibrosis (CF) is a severe systemic disease that affects many aspects of patients' lives. It is known that the progression of the disease adversely affects lower and upper airways including the paranasal sinuses. However, its impact on sinus development in the pediatric population is not fully examined. The purpose of this study was to evaluate the development of the paranasal sinuses in a pediatric population with CF and compare it to a control group consisting of healthy children. Methods: The results of computed tomography (CT) scans of children with the disease and the control group were evaluated. The study included 114 CT images of children in the study group and 126 images of healthy children aged 0-18 years. 
The volumes of maxillary, frontal, and sphenoid sinuses were analyzed. The obtained results were compared with those of the control group and analyzed statistically. Results: The volume and the development of the paranasal sinuses in both groups increased with age, but statistically significant differences were found between the study and the control group. Conclusions: The obtained results provide valuable knowledge regarding the impact of the CF on sinuses development. Also, they may be important in understanding the progression of the disease and its influence on the quality and length of life of patients. The results may contribute to enhanced diagnostics and have implications for improving therapy for patients with chronic sinusitis associated with CF. abstract_id: PUBMED:24124636 Total aplasia of the paranasal sinuses. Although a variety of theories have been proposed about functions of the paranasal sinuses, not one is clear today. Nonetheless, paranasal sinus-related diseases are associated with a high rate of morbidities. Therefore, it is essential to identify the structure and pathophysiology of the paranasal sinuses. Computed tomography (CT) is a valuable tool displaying anatomic variations and diseases. Because paranasal sinus development is a complex and long-lasting process, there are great structural variations between individuals. Several degrees and combinations of aplasias and hypoplasias have been reported; however, there is only one case of total paranasal sinus aplasia in the literature. Here, we present the second case of total paranasal sinus aplasia. Paranasal sinus development, functions of the paranasal sinuses, and the role of CT were evaluated. abstract_id: PUBMED:8373095 Development of the paranasal sinuses in children: implications for paranasal sinus surgery. The pediatric nasal cavity and paranasal sinuses, when compared to those in adults, differ not only in size but also in proportion. Knowledge of the unique anatomy and pneumatization of children's sinuses is an important prerequisite to understanding the pathogenesis of sinusitis and its complications. It is also important in evaluation of radiographs and in planning surgical interventions. In order to study the development of the paranasal sinuses in children and relate clinical anatomy to sinus surgery, the sinuses in 102 pediatric skulls and cadaver heads were measured. The results were classified by stage of development into 4 different age groups: newborn and 1 to 4, 4 to 8, and 8 to 12 years. The characteristics of each group and their clinical importance for paranasal sinus surgery are described. abstract_id: PUBMED:9209592 Development of the paranasal sinuses in children. The development of computed tomography and functional endoscopic sinus surgery has improved diagnosis and management of sinusitis. It has also renewed interest in the developmental anatomy of the paranasal sinuses. There are significant differences between adult and pediatric sinus anatomy, and to safely perform functional endoscopic sinus surgery in children, the surgeon must be aware of these differences. To define the developmental anatomy of the paranasal sinuses, we analyzed 145 computed tomograms from patients under 18 years of age. The study emphasized landmarks at the level of the maxillary sinus ostium. In addition, distances and angles from the nasal spine to various points in the sinuses were determined. The structures were identified and traced on a digitizing tablet. 
Means and standard deviations were calculated for each measure as a function of age. This study can aid a better understanding of sinus development in children and provide guidance to the endoscopic sinus surgeon. abstract_id: PUBMED:8890126 Laser use in the paranasal sinuses. With the advancement of the sinus endoscope and coronal computerized tomography, the ability to diagnose and relate paranasal sinus pathology to abnormal anatomy and function has increased rapidly. The continued development of surgical lasers and research into a better understanding of their advantages, limitations, and possible applications have paralleled this expansion in the diagnosis of paranasal sinus disease. The result has been multiple studies now looking at laser use in the paranasal sinuses. These studies are reviewed and the utility of lasers in the sinuses is discussed. abstract_id: PUBMED:32757847 Segmentation procedures for the assessment of paranasal sinuses volumes. Background: The paranasal sinuses are complex anatomical structures, characterised by highly variable shape, morphology and size. With the introduction of multidetector scanners and the development of many post-processing possibilities, computed tomography became the gold standard technique to image the paranasal sinuses. Segmentation allows the extraction of metrical and shape data of these anatomical components that can be applied to diagnosis, education, surgical planning and simulation, and to the planning of minimally invasive interventions in otorhinolaryngology and neurosurgery. Discussion: Our aim was to provide a review of the existing literature on segmentation, its types and application, and the data obtained from this procedure. The literature search was conducted on PubMed (including Medline), ScienceDirect and Google Scholar databases, using the following keywords: 'paranasal sinuses', 'frontal sinus', 'maxillary sinus', 'sphenoid sinus', 'ethmoid sinus', in all possible combinations with the keywords 'segmentation' and 'volumetric analysis'. Inclusion criteria were: articles written in English, on living human subjects, on the adult population and focused on paranasal sinuses analysis. Conclusion: This article provides an overview of the types and main application of segmentation procedures on paranasal sinuses, and the results provided by the studies on this topic. abstract_id: PUBMED:21378740 On the drainage and irrigation of the paranasal sinuses through the natural pathways: the problem revisited. This paper was designed to report the results of the treatment of 86 patients aged 18 to 40 years presenting with pyoinflammatory processes in the paranasal sinuses. All of them underwent probing and irrigation of these sinuses through natural pathways under local anesthesia. The pathological process in the paranasal sinuses needed from 5 to 7 lavages to be arrested and eliminated. The comparative analysis of pain severity, sense of discomfort, and unpleasant feeling, carried out with the use of a visual analogue scale, demonstrated that the manifestations of these symptoms were weaker by a factor of two after probing than after the puncture treatment. The duration of therapy by means of drainage and irrigation of the paranasal sinuses was similar to that of the traditional treatment of sinusitis. Its advantages include atraumaticity and good tolerability by the patients. abstract_id: PUBMED:33978790 Paranasal sinus development and implications for imaging.
Because the paranasal sinuses continue to develop throughout childhood, radiologists encounter significant anatomical variation when confronted with pediatric imaging studies. Knowledge of the stages of sinus growth is vital to differentiate normal development from abnormally large, hypoplastic or malformed sinuses. Understanding how the paranasal sinuses develop and their expected appearance at different patient ages is important for accurate interpretation of diagnostic imaging. To that end, we provide a brief review of paranasal sinus anatomy and discuss the mechanisms by which each paranasal sinus develops, as well as the implications for imaging. We also provide a practical guide for assessing normal paranasal sinuses in children based on the relationship of the sinuses to anatomical landmarks at various stages of development in order to help pediatric radiologists provide accurate interpretation of diagnostic imaging. Answer: Arrested pneumatization refers to the interruption of the normal development process of the paranasal sinuses, where the transformation from bone marrow to an aerated cavity is halted, resulting in the persistence of non-aerated images on radiological scans. This phenomenon has been observed in the sphenoid, maxillary, and frontal bones but not in the ethmoid bone, suggesting that the paranasal sinuses may develop independently of the ethmoidal olfactory organ, which is formed from the embryonic cartilaginous olfactory capsule (PUBMED:24709406). The embryonic development of the paranasal sinuses originates from the cartilaginous nasal capsule, with the outpouching of the nasal mucous membranes being a secondary event rather than the primary force in sinus development (PUBMED:7996621). The development of the paranasal sinuses can be affected by systemic diseases such as cystic fibrosis, which can lead to differences in sinus volume and development when compared to healthy children (PUBMED:37882847). Total aplasia of the paranasal sinuses, although rare, has been reported and underscores the variability in sinus development among individuals (PUBMED:24124636). The development of the paranasal sinuses in children has important implications for sinus surgery, as the anatomy and pneumatization of children's sinuses differ significantly from those of adults (PUBMED:8373095; PUBMED:9209592). In summary, arrested pneumatization can indeed be considered a witness of paranasal sinus development, as it provides evidence of the developmental processes and potential variations or interruptions that can occur. Understanding the development and variations of the paranasal sinuses is crucial for accurate diagnosis, surgical planning, and management of sinus-related diseases (PUBMED:8890126; PUBMED:32757847; PUBMED:21378740; PUBMED:33978790).
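Two of the abstracts above rest on volumetric analysis of segmented CT images: PUBMED:37882847 compares sinus volumes between children with cystic fibrosis and healthy controls, and PUBMED:32757847 reviews the segmentation procedures used to obtain such volumes. The minimal Python sketch below shows, under the assumption that a binary segmentation mask and the voxel spacing are already available, how a sinus volume is derived; the mask here is random placeholder data rather than a real segmentation.

```python
import numpy as np

# Placeholder binary mask (1 = voxel labelled as sinus, 0 = background).
# In practice this would come from a CT segmentation tool, not random data.
rng = np.random.default_rng(seed=0)
mask = (rng.random((120, 120, 80)) > 0.995).astype(np.uint8)

voxel_spacing_mm = (0.5, 0.5, 1.0)           # in-plane and slice spacing in mm
voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))

sinus_volume_mm3 = np.count_nonzero(mask) * voxel_volume_mm3
sinus_volume_ml = sinus_volume_mm3 / 1000.0  # 1 ml = 1000 mm^3

print(f"labelled voxels: {np.count_nonzero(mask)}")
print(f"volume: {sinus_volume_mm3:.0f} mm^3 ({sinus_volume_ml:.2f} ml)")
```

The same voxel-counting step underlies any of the segmentation approaches reviewed in PUBMED:32757847; the methodological differences lie in how the mask itself is produced.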
Instruction: Do low-risk nulliparous women with abnormal uterine artery Doppler in the third trimester have poorer perinatal outcomes? Abstracts: abstract_id: PUBMED:27268024 Do low-risk nulliparous women with abnormal uterine artery Doppler in the third trimester have poorer perinatal outcomes? A longitudinal prospective study on uterine artery Doppler in low-risk nulliparous women and correlation with pregnancy outcomes. Objective: To evaluate uterine artery (UtA) Doppler over the course of pregnancy in low-risk nulliparous women and to analyze whether an abnormal uterine artery pulsatility index (UtA-PI) at a 32-34 weeks' scan implies poorer perinatal outcomes. Methods: An observational prospective study was carried out including 616 low-risk nulliparous women. Women with any of the following were excluded: fetal abnormalities, multiple pregnancy, and heparin, metformin or hypotensive treatment. Maternal characteristics, mean arterial pressure measurements and UtA Doppler findings were recorded longitudinally. Results: Complete pregnancy data were available for 489/616 women (79.3%). Of these, 385 women had a normal UtA-PI throughout pregnancy (Group 0), while 50 (10.1%) had a UtA-PI > 95th percentile in the first or the second trimester that normalized in the third trimester (Group 1), and 56 (11.4%) had an abnormal UtA-PI in the third trimester (Group 2). We found that the rate of pre-eclampsia (PE) was higher in Group 2 (7/56 versus 4/435, p = 0.003) as was the rate of intrauterine growth restriction (IUGR) (6/56 versus 14/435, p = 0.02). Conclusions: Low-risk nulliparous women with abnormal UtA Doppler findings in the third trimester are at a higher risk of developing PE and having a baby with IUGR. abstract_id: PUBMED:24277892 Third-trimester abnormal uterine artery Doppler findings are associated with adverse pregnancy outcomes. Objectives: To evaluate the association between third-trimester abnormal uterine artery Doppler findings and pregnancy outcomes. Methods: A prospective study was designed, including 198 consecutive singleton pregnancies between 27 and 41 weeks' gestation. In the study population, 144 had normal uterine artery Doppler waveforms, 37 had unilateral pathologic waveforms, and 17 had bilateral pathologic waveforms. Eighty patients had intrauterine growth restriction (IUGR), preeclampsia toxemia, or both, and 118 had no complications and served as a control group. The uterine artery Doppler waveform was considered abnormal when a notch or a pulsatility index above the 90th percentile was noted. Results: In patients with bilateral pathologic uterine artery Doppler waveforms, the rates of cesarean delivery, small-for-gestational-age (SGA) neonates, preterm delivery, and low Apgar scores were increased compared to patients with normal or unilateral pathologic waveforms (P = .009; P < .001; P = .007; P < .001, respectively). The incidence rates for SGA neonates, cesarean delivery, and preterm delivery were significantly higher among patients without IUGR or preeclampsia toxemia when associated with pathologic bilateral waveforms in comparison to normal waveforms (P = .01 for all). A bilateral pathologic waveform was found to be an independent risk factor for cesarean delivery and SGA neonates. The incidence rates for SGA neonates and preterm delivery were significantly higher among patients with IUGR and/or preeclampsia toxemia when associated with bilateral abnormalities in comparison to normal waveforms (P = .01 for both).
Conclusions: Third-trimester abnormal uterine artery Doppler findings are associated with worse perinatal outcomes among patients both with and without pregnancy complications. abstract_id: PUBMED:33390064 Increased pulsatility index of uterine artery Doppler between 26 and 28 weeks of gestation and adverse perinatal outcomes. Objective: To compare adverse perinatal outcomes in pregnant women with or without normalization of the mean pulsatility index (PI) of the uterine artery Doppler between 24 and 28 weeks of gestation. Methods: Retrospective cohort in which pregnant women were divided into three groups: normal uterine artery Doppler between 20-24 and 26-28 weeks (controls), abnormal uterine artery Doppler between 20-24 and normal between 26-28 weeks (anUtA), and abnormal uterine artery Doppler between 20-24 and 26-28 weeks (aaUtA). To compare adverse perinatal results between the groups, the Chi-square test was used. Binary logistic regression was used to assess the ability of uterine artery Doppler to predict birthweight < 10th percentile and composite perinatal outcomes. Results: Birthweight was significantly lower in the aaUtA compared to anUtA (2687 vs 3248 grams, p = 0.0479). A significant negative correlation was observed between the mean PI of the uterine artery Doppler during the 3rd trimester and birthweight (r = -0.13, R2 = 0.035, p = .0192). The prevalence of composite perinatal outcomes was significantly higher in aaUtA compared to anUtA (25.9% vs 0%, p = .013). The mean PI of the uterine artery Doppler during the 3rd trimester was a significant predictor of birthweight < 10th percentile (OR: 2.74, 95% CI = 1.03-7.3), but the protodiastolic notch, and the combination of the mean PI with the protodiastolic notch, were not. Conclusion: Maintenance of an altered uterine artery Doppler during the 3rd trimester was associated with a higher prevalence of composite perinatal outcomes and lower birthweight compared to its late normalization. Although modest, uterine artery Doppler in the 3rd trimester proved to be a predictor of birthweight < 10th percentile. abstract_id: PUBMED:35895911 Third trimester uterine artery Doppler for prediction of adverse perinatal outcomes. Purpose Of Review: Abnormal uterine artery Doppler (UtAD) studies early in gestation have been associated with adverse pregnancy outcomes. However, their association with complications in the third trimester is weak. We aim to review the ability of these indices to predict perinatal complications in the third trimester. Recent Findings: Abnormal UtAD waveforms in the third trimester are associated with preeclampsia, small-for-gestational age (SGA) infants, preterm birth, perinatal death, and other perinatal complications, such as cesarean section for fetal distress, low 5-min Apgar score, low umbilical artery pH, and neonatal admission to the ICU, particularly in SGA infants. UtAD prediction performance is improved by the addition of maternal characteristics as well as biochemical markers to prediction models and is more precise if the evaluation is made closer to delivery or diagnosis. Summary: This review shows that the prediction accuracy of UtAD for adverse pregnancy outcomes during the third trimester is moderate at best. UtAD have limited additive value to prediction models that include PlGF and sFlt-1. Serial assessments rather than a single third trimester evaluation may enhance the prediction performance of the UtAD combined models.
abstract_id: PUBMED:35262064 Utilization of Uterine and Umbilical Artery Doppler in the Second and Third Trimesters to Predict Adverse Pregnancy Outcomes: A Nigerian Experience. Objective: To assess the utility of uterine and umbilical artery Doppler in the second and third trimesters in predicting adverse pregnancy outcomes. Methodology: In a prospective longitudinal study, the demographic and clinical data, the Doppler ultrasound parameters of the uterine and umbilical arteries, and the pregnancy outcomes of 84 consecutive women attending the antenatal clinic at 22-24 weeks and 116 women at 30-34 weeks' gestation were documented and analyzed. Results: Pregnant women with adverse pregnancy outcomes had significantly higher second-trimester mean uterine systolic/diastolic (S/D) ratio (p = 0.001), pulsatility index (PI; p = 0.003), umbilical artery S/D (p = 0.016), and resistivity index (RI; p = 0.041), as well as higher third-trimester uterine S/D and PI. While pregnancies with adverse fetal outcomes showed significantly higher uterine artery S/D and PI in the second trimester, in the third trimester the uterine S/D, RI, and PI and the umbilical artery PI were higher than in women with normal fetal outcomes. The combination of uterine PI and early diastolic notch was predictive of maternal outcomes and correctly predicted 73% of them (p < 0.001) in the second trimester. By the third trimester, the uterine PI alone was the best predictor and accurately predicted about 62% of maternal outcomes (p = 0.028). In addition, the second-trimester uterine S/D and early diastolic notch and the uterine PI in the third trimester correctly predicted 79% and 78% of fetal outcomes, respectively. Conclusion: In an unselected pregnant population, the second-trimester Doppler parameters are better predictors of adverse maternal outcomes, while the prediction of adverse fetal outcomes by uterine and umbilical Doppler is comparable between the second- and third-trimester parameters. abstract_id: PUBMED:31785172 Third-trimester uterine artery Doppler for prediction of adverse outcome in late small-for-gestational-age fetuses: systematic review and meta-analysis. Objective: To investigate the predictive ability for adverse perinatal outcome of abnormal third-trimester uterine artery Doppler in late small-for-gestational-age (SGA) fetuses. Methods: A systematic search was performed to identify relevant observational studies and randomized controlled trials evaluating the performance of abnormal third-trimester uterine artery Doppler for the prediction of adverse perinatal outcome in suspected SGA fetuses and SGA neonates. Abnormal uterine artery Doppler was defined as uterine artery pulsatility index > 95th percentile or ≥ 2 SD above the mean, or bilateral uterine artery notching. Hierarchical summary receiver-operating-characteristics (ROC) curves were constructed using random-effects modeling. Bayesian analysis was used to calculate the posterior probability of adverse perinatal outcome following an abnormal or normal uterine artery Doppler assessment. Results: Seventeen observational studies (including 7552 fetuses either diagnosed with suspected SGA (n = 3461) or later diagnosed as an SGA neonate (n = 4091)) met the inclusion criteria; no randomized controlled trials met the inclusion criteria.
Summary ROC curves showed that, among suspected SGA fetuses, the best predictive accuracy of abnormal third-trimester uterine artery Doppler was for perinatal mortality and the worst was for composite adverse perinatal outcome, with areas under the summary ROC curves of 0.90 and 0.66, respectively. The corresponding positive and negative likelihood ratios were 16.5 and 0.6 for perinatal mortality and 2.82 and 0.65 for composite adverse perinatal outcome, respectively. Following an abnormal vs normal uterine artery Doppler assessment, the posterior risks for composite adverse perinatal outcome, admission to the neonatal intensive care unit, Cesarean section for intrapartum fetal compromise, 5-min Apgar score < 7, neonatal acidosis and perinatal death were: 52.3% vs 20.2%, 48.6% vs 18.7%, 23.1% vs 15.2%, 3.59% vs 1.32%, 9.15% vs 5.12% and 31.4% vs 1.64%, respectively. Conclusion: Abnormal uterine artery Doppler in the third trimester appears to be moderately useful in predicting perinatal death in pregnancies with suspected SGA. abstract_id: PUBMED:26327300 Value of third-trimester cerebroplacental ratio and uterine artery Doppler indices as predictors of stillbirth and perinatal loss. Objective: Placental insufficiency contributes to the risk of stillbirth. Cerebroplacental ratio (CPR) is an emerging marker of placental insufficiency. The aim of this study was to evaluate the association of third-trimester fetal CPR, uterine artery (UtA) Doppler and estimated fetal weight (EFW) with stillbirth and perinatal death. Methods: This was a retrospective cohort study including 2812 women with a singleton pregnancy who underwent an ultrasound scan in the third trimester. EFWs were converted into centiles, and Doppler indices (UtA and CPR) were converted into multiples of the median (MoM), adjusting for gestational age. Regression analysis was performed to identify, and adjust for, potential confounders, and receiver-operating characteristics (ROC) curve analysis was used to assess the predictive value. Results: When adjusting for EFW centile and UtA mean pulsatility index (UtA-PI) MoM, CPR-MoM remained an independent predictor of stillbirth (odds ratio (OR) = 0.003 (95% CI, 0.00-0.11), P = 0.003) and perinatal mortality (OR = 0.001 (95% CI, 0.00-0.03), P < 0.001). UtA-PI ≥ 1.5 MoM was significantly associated with low CPR-MoM, even after adjusting for EFW centile (OR = 5.22 (95% CI, 3.88-7.04), P < 0.001) or small-for-gestational age (SGA; OR = 4.73 (95% CI, 3.49-6.41), P < 0.001). These associations remained significant, even when excluding pregnancies with SGA or including only cases in which Doppler indices were recorded at term (P < 0.01). For prediction of stillbirth, the area under the ROC curve, using a combination of these three parameters, was 0.88 (95% CI, 0.77-0.99) with a sensitivity of 66.7%, specificity of 92.1%, positive likelihood ratio (LR) of 8.46 and negative LR of 0.36. Conclusions: Third-trimester CPR is an independent predictor of stillbirth and perinatal mortality. The role of UtA Doppler, CPR and EFW in assessing risk of adverse pregnancy outcome should be evaluated prospectively. abstract_id: PUBMED:23617256 Association between first trimester vaginal bleeding and uterine artery Doppler measured at second and third trimesters of pregnancy. Objective: To evaluate the prevalence of first trimester vaginal bleeding among patients with abnormal second and third trimester uterine artery Doppler.
Methods: A prospective study of patients with a uterine artery Doppler measurement between 27 and 42 weeks' gestation was undertaken. A comparison was made between two groups: patients with and without first trimester vaginal bleeding. Abnormal uterine artery Doppler was defined as PI > 95th percentile or the presence of a diastolic notch. Results: Of the 277 patients that were included in the study, 65 (23%) had first trimester vaginal bleeding. No differences were noted in uterine artery Doppler waveforms among patients with and without first trimester vaginal bleeding. Among patients with first trimester vaginal bleeding, 9 (14%) had a bilateral uterine artery notch and 56 (86%) did not, compared with 51 (24%) and 161 (76%), respectively, in the control group. Patients with first trimester vaginal bleeding and a bilateral uterine artery notch had significantly higher rates of small for gestational age neonates, low Apgar scores (<7) at one minute and cesarean deliveries compared to patients with first trimester vaginal bleeding who did not have a bilateral uterine artery notch. Conclusion: First trimester vaginal bleeding was not associated with a higher incidence of abnormal uterine artery waveforms or with placenta-related conditions. However, adverse perinatal outcomes were found when first trimester vaginal bleeding was associated with second and third trimester bilateral uterine artery notches. abstract_id: PUBMED:30760063 Third trimester uterine artery Doppler indices as predictors of preeclampsia and neonatal small for gestational age. Objective: To test the hypothesis that third-trimester uterine artery Doppler (UAD) predicts adverse pregnancy and neonatal outcomes in a high-risk population. Study design: This is a nested case control study of women with singleton gestations referred for a fetal growth ultrasound between 24 and 36 weeks. Third-trimester UAD was performed if estimated fetal weight (Hadlock's chart) was <20th percentile, as these patients were considered high risk for poor pregnancy outcomes. The primary outcomes assessed were neonatal small for gestational age (SGA) and hypertensive disorders. Secondary outcomes included pH <7.10, NICU admission, Apgar <7 at 5 minutes, respiratory distress syndrome, hypoglycemia, and a composite (presence of one or more of the secondary outcomes) neonatal adverse outcome. The sensitivity and specificity of the UAD indices for predicting these outcomes were compared. Results: Among 200 women included, neonatal SGA occurred in 91 (46%) neonates, preeclampsia in 21 (10.5%), early preeclampsia in 4 (2%) and a composite adverse outcome in 67 (34%) neonates. Abnormal UAD indices, specifically left uterine artery notching and pulsatility index (PI) >95th percentile, were significantly correlated with an increased relative risk (RR) of a number of outcomes. Left uterine artery notching was significantly associated with SGA, RR 1.76 (1.03-3.04), preeclampsia, RR 2.53 (1.47-4.37) and early preeclampsia, RR 2.88 (1.34-6.20). The PI >95th percentile was significantly associated with SGA, RR 1.83 (1.21-2.76), NICU admission, RR 1.79 (1.14-2.79), preeclampsia, RR 1.98 (1.29-3.03), and early preeclampsia, RR 3.13 (2.54-3.86). The mean UAD PI >95th percentile had the best sensitivity for SGA, but the area under the ROC curve (AUC) was modest (0.60, 95% CI = 0.53-0.67).
Left uterine artery notching and PI >95th percentile had similar predictive utility for preeclampsia: AUC 0.65, 95% CI = 0.53-0.76 (mean uterine artery PI >95th percentile) and AUC 0.66, 95% CI = 0.54-0.77 (left uterine artery notching). Conclusion: Abnormal third-trimester UAD indices are associated with adverse perinatal outcomes including neonatal SGA, preeclampsia, and early preeclampsia. Though statistically significantly correlated, the predictive value of UAD indices for adverse pregnancy and neonatal outcomes was modest. abstract_id: PUBMED:20183807 Persistence of increased uterine artery resistance in the third trimester and pregnancy outcome. Objective: To evaluate whether the persistence of abnormal findings in the third trimester following increased uterine artery (UtA) resistance in the second trimester is related to adverse pregnancy outcome. Methods: Low-risk nulliparous women with increased UtA mean pulsatility index (PI) at 20-22 weeks underwent repeat Doppler interrogation at 26-28 weeks and were divided into two groups: those with persistently abnormal Doppler and those with normalized UtA findings. Pregnancy outcome was noted for all patients and compared with that of 104 controls. Results: We examined 104 women with increased UtA resistance in the second trimester and in 62 (59.6%) cases the abnormal uteroplacental Doppler findings persisted to 26-28 weeks. Compared with controls and with patients with normalized Doppler at the third-trimester scan, patients with persistently abnormal Doppler results had a significantly higher risk of pre-eclampsia (10/62 vs. 1/104, P = 0.002 and 10/62 vs. 1/42, P = 0.047, respectively), small-for-gestational age (SGA) fetus (20/62 vs. 1/104, P < 0.001 and 20/62 vs. 4/42; P = 0.007, respectively) and admission of the infant to a neonatal intensive care unit (16/62 vs. 4/104; P < 0.001 and 16/62 vs. 1/42; P < 0.001, respectively). Compared with controls, cases with normalization had an increased risk of SGA (4/42 vs. 1/104, P = 0.03), but there were no significant differences for the other outcome measures. Conclusions: In low-risk nulliparous women with increased UtA resistance in the second trimester, the persistence of abnormal Doppler findings at 26-28 weeks is associated with an increased risk of obstetric complications when compared with both controls and patients with third-trimester Doppler normalization. Answer: Yes, low-risk nulliparous women with abnormal uterine artery Doppler in the third trimester have been found to have poorer perinatal outcomes. Studies have shown that these women are at a higher risk of developing preeclampsia (PE) and having babies with intrauterine growth restriction (IUGR) (PUBMED:27268024). Additionally, abnormal uterine artery Doppler findings in the third trimester are associated with adverse pregnancy outcomes such as increased rates of cesarean delivery, small-for-gestational-age (SGA) neonates, preterm delivery, and low Apgar scores (PUBMED:24277892). Furthermore, the persistence of altered uterine artery Doppler during the third trimester is associated with a higher prevalence of composite perinatal outcomes and lower birthweight compared to its late normalization (PUBMED:33390064). Abnormal third-trimester uterine artery Doppler has also been shown to be moderately useful in predicting perinatal death in pregnancies with suspected SGA (PUBMED:31785172).
Moreover, third-trimester cerebroplacental ratio (CPR) and uterine artery Doppler indices have been identified as independent predictors of stillbirth and perinatal mortality (PUBMED:26327300). Abnormal third-trimester uterine artery Doppler indices are also associated with adverse perinatal outcomes including neonatal SGA, preeclampsia, and early preeclampsia, although the predictive value of these indices for adverse outcomes is considered modest (PUBMED:30760063). Lastly, the persistence of abnormal Doppler findings at 26-28 weeks is associated with an increased risk of obstetric complications when compared with both controls and patients with third-trimester Doppler normalization (PUBMED:20183807). In summary, the evidence suggests that abnormal uterine artery Doppler in the third trimester is a significant predictor of poorer perinatal outcomes in low-risk nulliparous women.
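The meta-analysis in PUBMED:31785172 reports its results both as likelihood ratios (for perinatal mortality, LR+ 16.5 and LR- 0.6) and as posterior risks after an abnormal versus a normal uterine artery Doppler assessment. The short Python sketch below shows the odds form of Bayes' rule that connects a pre-test probability and a likelihood ratio to a post-test probability; the 5% pre-test risk is an assumed value chosen only to make the example concrete, not a figure from the cited studies.

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Update a pre-test probability with a likelihood ratio (Bayes' rule on odds)."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

pre_test = 0.05     # assumed baseline risk for this worked example
lr_positive = 16.5  # LR+ for perinatal mortality reported in PUBMED:31785172
lr_negative = 0.6   # LR- for perinatal mortality reported in PUBMED:31785172

print(f"risk after abnormal UtA Doppler: {post_test_probability(pre_test, lr_positive):.1%}")
print(f"risk after normal UtA Doppler:   {post_test_probability(pre_test, lr_negative):.1%}")
```

With these inputs the assumed 5% baseline risk rises to roughly 46% after an abnormal result and falls to about 3% after a normal one, which mirrors the large separation between the posterior risks quoted in the abstract.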
Instruction: Does Race/Ethnicity or Socioeconomic Status Influence Patient Satisfaction in Pediatric Surgical Care? Abstracts: abstract_id: PUBMED:26124264 Does Race/Ethnicity or Socioeconomic Status Influence Patient Satisfaction in Pediatric Surgical Care? Objective: To evaluate patient satisfaction in outpatient pediatric surgical care and assess differences in scores by race/ethnicity and socioeconomic status (SES). Study Design: Observational, cross-sectional analysis. Setting: Outpatient pediatric surgical specialty clinics at a tertiary academic center. Subject And Methods: Families of patients received a patient satisfaction survey following their initial care visit in 2012. Mean scores were calculated and compared by child race/ethnicity and insurance type, where insurance with medical assistance (MA) served as a proxy for low SES. Kruskal-Wallis tests were used to compare scores between groups. Surveys were dichotomized to low and high scorers, and multivariate logistic regression was used to calculate the likelihood of high satisfaction. Results: Of 527 surveys completed, 132 (25%) were for children with MA and 143 (27%) were for racial/ethnic minority children. The overall satisfaction score for all specialties was 84.8, which did not significantly differ by SES (P = .98) or minority status (P = .52). The survey item with the highest score in both SES groups was "degree to which provider talked with you using words you could understand" (overall mean 91.94, P = .23). Multivariate analysis showed that patient age, sex, race/ethnicity, insurance type, neighborhood SES, neighborhood diversity, or surgical department did not significantly influence satisfaction. Conclusion: This is the first study to evaluate the relationship between SES and race/ethnicity with patient satisfaction in outpatient pediatric surgical specialty care. In this analysis, no disparities were identified in the patient experience by individual- or community-level factors. Although the survey methodologies may be limited, these findings suggest that provision of care in pediatric surgical specialties can be simultaneously equitable, culturally competent, and family centered. abstract_id: PUBMED:21292758 The influence of race/ethnicity and socioeconomic status on end-of-life care in the ICU. Background: There is conflicting evidence about the influence of race/ethnicity on the use of intensive care at the end of life, and little is known about the influence of socioeconomic status. Methods: We examined patients who died in the ICU in 15 hospitals. Race/ethnicity was assessed as white and nonwhite. Socioeconomic status included patient education, health insurance, and income by zip code. To explore differences in end-of-life care, we examined the use of (1) advance directives, (2) life-sustaining therapies, (3) symptom management, (4) communication, and (5) support services. Results: Medical charts were abstracted for 3,138/3,400 patients of whom 2,479 (79%) were white and 659 (21%) were nonwhite (or Hispanic). In logistic regressions adjusted for patient demographics, socioeconomic factors, and site, nonwhite patients were less likely to have living wills (OR, 0.41; 95% CI, 0.32-0.54) and more likely to die with full support (OR, 1.59; 95% CI, 1.30-1.94). In documentation of family conferences, nonwhite patients were more likely to have documentation that prognosis was discussed (OR, 1.47; 95% CI, 1.21-1.77) and that physicians recommended withdrawal of life support (OR, 1.57; 95% CI, 1.11-2.21). 
Nonwhite patients also were more likely to have discord documented among family members or with clinicians (OR, 1.49; 95% CI, 1.04-2.15). Socioeconomic status did not modify these associations and was not a consistent predictor of end-of-life care. Conclusions: We found numerous racial/ethnic differences in end-of-life care in the ICU that were not influenced by socioeconomic status. These differences could be due to treatment preferences, disparities, or both. Improving ICU end-of-life care for all patients and families will require a better understanding of these issues. Trial Registry: ClinicalTrials.gov; No.: NCT00685893; URL: www.clinicaltrials.gov. abstract_id: PUBMED:32336393 Role of Gender and Race in Patient-Reported Outcomes and Satisfaction. The role of gender, race, and socioeconomic status in outcomes and satisfaction are reflected in patient-reported outcomes using measurement tools representing outcome domains. These domains include pain relief, physical and emotional functioning, adverse events, participant disposition, and patient satisfaction. Measurement tools exist for each of the outcomes in both acute and chronic pain. Patients with lower economic status have greater difficulty accessing care, are involved less in shared decision-making process, and are less satisfied with their care. Blacks, Hispanics, and Asians also have increased difficulty in accessing good quality care. Women have inferior outcomes after medical and surgical interventions. abstract_id: PUBMED:16020676 Race/ethnicity, socioeconomic status, and satisfaction with health care. The purpose of the present study was to evaluate the effects of race/ethnicity and socioeconomic status on consumer health care satisfaction ratings. The authors analyzed national data from the 2001 National Research Corporation Healthcare Market Guide Survey (N = 99 102). Four global and 3 composite ratings were examined. In general, satisfaction ratings were high across all global and composite measures; however, Asian/Pacific Islanders and Hispanics gave lower ratings than did whites, and African Americans gave a mix of higher and lower ratings (vs whites). Among the lowest ratings were those given by American Indians/Alaska Natives living in poverty. Race/ethnicity effects were independent of education and income. These findings are consistent with reports of continuing racial/ethnic disparities in both coverage and care. Programs to improve quality of care must specifically address these well-documented, severe, and persistent disparities. abstract_id: PUBMED:25548336 Race, ethnicity, and socioeconomic status in research on child health. An extensive literature documents the existence of pervasive and persistent child health, development, and health care disparities by race, ethnicity, and socioeconomic status (SES). Disparities experienced during childhood can result in a wide variety of health and health care outcomes, including adult morbidity and mortality, indicating that it is crucial to examine the influence of disparities across the life course. Studies often collect data on the race, ethnicity, and SES of research participants to be used as covariates or explanatory factors. In the past, these variables have often been assumed to exert their effects through individual or genetically determined biologic mechanisms. However, it is now widely accepted that these variables have important social dimensions that influence health. 
SES, a multidimensional construct, interacts with and confounds analyses of race and ethnicity. Because SES, race, and ethnicity are often difficult to measure accurately, leading to the potential for misattribution of causality, thoughtful consideration should be given to appropriate measurement, analysis, and interpretation of such factors. Scientists who study child and adolescent health and development should understand the multiple measures used to assess race, ethnicity, and SES, including their validity and shortcomings and potential confounding of race and ethnicity with SES. The American Academy of Pediatrics (AAP) recommends that research on eliminating health and health care disparities related to race, ethnicity, and SES be a priority. Data on race, ethnicity, and SES should be collected in research on child health to improve their definitions and increase understanding of how these factors and their complex interrelationships affect child health. Furthermore, the AAP believes that researchers should consider both biological and social mechanisms of action of race, ethnicity, and SES as they relate to the aims and hypothesis of the specific area of investigation. It is important to measure these variables, but it is not sufficient to use these variables alone as explanatory for differences in disease, morbidity, and outcomes without attention to the social and biologic influences they have on health throughout the life course. The AAP recommends more research, both in the United States and internationally, on measures of race, ethnicity, and SES and how these complex constructs affect health care and health outcomes throughout the life course. abstract_id: PUBMED:29893618 Race/Ethnicity, Socioeconomic Status, and Healthcare Intensity at the End of Life. Background: Although racial/ethnic minorities receive more intense, nonbeneficial healthcare at the end of life, the role of race/ethnicity independent of other social determinants of health is not well understood. Objectives: Examine the association between race/ethnicity, other key social determinants of health, and healthcare intensity in the last 30 days of life for those with chronic, life-limiting illness. Subjects: We identified 22,068 decedents with chronic illness cared for at a single healthcare system in Washington State who died between 2010 and 2015 and linked electronic health records to death certificate data. Design: Binomial regression models were used to test associations of healthcare intensity with race/ethnicity, insurance status, education, and median income by zip code. Path analyses tested direct and indirect effects of race/ethnicity with insurance, education, and median income by zip code used as mediators. Measurements: We examined three measures of healthcare intensity: (1) intensive care unit admission, (2) use of mechanical ventilation, and (3) receipt of cardiopulmonary resuscitation. Results: Minority race/ethnicity, lower income and educational attainment, and Medicaid and military insurance were associated with higher intensity care. Socioeconomic disadvantage accounted for some of the higher intensity in racial/ethnic minorities, but most of the effects were direct effects of race/ethnicity. Conclusions: The effects of minority race/ethnicity on healthcare intensity at the end of life are only partly mediated by other social determinants of health. Future interventions should address the factors driving both direct and indirect effects of race/ethnicity on healthcare intensity. 
abstract_id: PUBMED:22070902 Influence of race and socioeconomic status on engagement in pediatric primary care. Objective: To understand the association of race/ethnicity with engagement in pediatric primary care and examine how any racial/ethnic disparities are influenced by socioeconomic status. Methods: Visit videos and parent surveys were obtained for 405 children who visited for respiratory infections. Family and physician engagement in key visit tasks (relationship building, information exchange, and decision making) were coded. Two parallel regression models adjusting for covariates and clustering by physician were constructed: (1) race/ethnicity only and (2) race/ethnicity with SES (education and income). Results: With and without adjustment for SES, physicians seeing Asian families spoke 24% fewer relationship building utterances, compared to physicians seeing White, non-Latino families (p < 0.05). Latino families gathered 24% less information than White, non-Latino families (p < 0.05), but accounting for SES mitigated this association. Similarly, African American families were significantly less likely to be actively engaged in decision making (OR = 0.32; p < 0.05), compared to White, non-Latino families, but adjusting for SES mitigated this association. Conclusion: While engagement during pediatric visits differed by the family's race/ethnicity, many of these differences were eliminated by accounting for socioeconomic status. Practice Implications: Effective targeting and evaluation of interventions to reduce health disparities through improving engagement must extend beyond race/ethnicity to consider socioeconomic status more broadly. abstract_id: PUBMED:31027176 Race/Ethnicity, Socioeconomic Status, and Polypharmacy among Older Americans. Background: Very few studies with nationally representative samples have investigated the combined effects of race/ethnicity and socioeconomic position (SEP) on polypharmacy (PP) among older Americans. For instance, we do not know if the prevalence of PP differs between African Americans (AA) and white older adults, whether this difference is due to a racial gap in SEP, or whether racial and ethnic differences exist in the effects of SEP indicators on PP. Aims: We investigated joint effects of race/ethnicity and SEP on PP in a national household sample of American older adults. Methods: The first wave of the University of Michigan National Poll on Healthy Aging included a total of 906 older adults who were 65 years or older (80 AA and 826 white). Race/ethnicity, SEP (income, educational attainment, marital status, and employment), age, gender, and PP (using 5+ medications) were measured. Logistic regression was applied for data analysis. Results: Race/ethnicity, age, marital status, and employment did not correlate with PP; however, female gender, low educational attainment, and low income were associated with higher odds of PP among participants. Race/ethnicity interacted with low income on odds of PP, suggesting that low income might be more strongly associated with PP in AA than in white older adults. Conclusions: While SEP indicators influence the risk of PP, such effects may not be identical across diverse racial and ethnic groups. That is, race/ethnicity and SEP have combined/interdependent rather than separate/independent effects on PP. Low-income AA older adults particularly need to be evaluated for PP.
Given that race and SEP have intertwined effects on PP, racially and ethnically tailored interventions that address PP among low-income AA older adults may be superior to universal interventions and programs that ignore the specific needs of diverse populations. The results are preliminary and require replication in larger sample sizes, with PP measured directly without relying on individuals' self-reports, and with joint data collected on chronic disease. abstract_id: PUBMED:15985504 Patient satisfaction with health care providers in South Africa: the influences of race and socioeconomic status. Objective: The first democratic government elected in South Africa in 1994 inherited huge inequalities in health status and health provision across all sections of the population. This study set out to assess, 4 years later, the influence of race and socioeconomic status (SES) on perceived quality of care from health care providers. Design: A 1998 countrywide survey of 3820 households assessed many aspects of health care delivery, including levels of satisfaction with health care providers among different segments of South African society. Results: Fifty-one percent (n = 1953) of the respondents had attended a primary care facility in the year preceding the interview and were retained in the analysis. Both race and SES were significant predictors of levels of satisfaction with the services of the health care provider, after adjusting for gender, age, and type of facility visited. White and high SES respondents were about 1.5 times more likely to report excellent service compared with Black and low SES respondents, respectively. Conclusion: In South Africa, race and SES are not synonymous and can no longer be considered reliable proxy indicators of one another. Each has distinct and significant but different degrees of association with client satisfaction. Any assessment of equity-driven health policy in South Africa should consider the impacts of both race and SES on client satisfaction as one of the indicators of success. abstract_id: PUBMED:24671459 Patient satisfaction in pediatric surgical care: a systematic review. Objective: This study seeks to synthesize evidence-based findings related to patient satisfaction as a process measure in pediatric surgical care. Data Sources: PubMed, CINAHL, Scopus, and the Cochrane Central Register of Controlled Trials. Review Methods: We queried 4 standard search engines (1992-2013) for studies specific to pediatric surgical fields in which patient or parent satisfaction or experience of care was a primary outcome measure. Data were systematically analyzed to determine study characteristics, setting, parent or patient focus, measure of experience, and bias. Two independent investigators independently reviewed all articles. Results: The initial search yielded 4748 publications (1503 duplicates), of which 170 underwent full-text review. Thirty-five were included for analysis; the majority (24/35,77%) were published in the last 5 years. Studies examined experience of the child (3/35), parent (23/35), or both (9/35). Experience and satisfaction were evaluated either by validated self-assessment instruments (8), by satisfaction tools (8), or by nonstandard institutional or author-developed tools (19). Experience was measured in the outpatient (7), preoperative (11), operative (14), and postoperative (3) care settings. 
Specific findings were unique to setting; however, in many studies higher satisfaction correlated with education/information giving, health care provider interpersonal behaviors, and facile/efficient care processes. Conclusion: The patient experience of care is a valuable quality measure that is being more frequently evaluated as a mechanism to improve pediatric surgical care processes. Findings related to patient satisfaction and experience of care may be limited due to lack of measurement using validated tools. Findings from this review may bear significance as patient experience measures become routinely integrated with quality and reimbursement. Answer: Based on the provided abstracts, it appears that race/ethnicity and socioeconomic status (SES) can influence patient satisfaction in various healthcare settings, including pediatric surgical care, although the findings are not entirely consistent across studies. In the study of outpatient pediatric surgical specialty clinics, no disparities were identified in patient satisfaction by SES or minority status (PUBMED:26124264). This suggests that in this particular setting, care provision was equitable and culturally competent, and patient satisfaction was not significantly influenced by individual- or community-level factors such as race/ethnicity or SES. However, other studies have found that race/ethnicity and SES can influence patient satisfaction and engagement in healthcare. For instance, race/ethnicity was found to be a significant predictor of satisfaction levels, with some racial/ethnic groups reporting lower satisfaction than whites, and these effects were independent of education and income (PUBMED:16020676). Additionally, disparities in engagement during pediatric visits were observed by race/ethnicity, but many of these differences were mitigated when accounting for SES (PUBMED:22070902). Furthermore, research has shown that race/ethnicity and SES have intertwined effects on healthcare intensity at the end of life, with minority race/ethnicity and lower SES being associated with higher intensity care (PUBMED:29893618). This indicates that race/ethnicity and SES can influence healthcare experiences and patient satisfaction in complex ways. In summary, while the study in the outpatient pediatric surgical specialty clinics did not find disparities in patient satisfaction related to race/ethnicity or SES (PUBMED:26124264), other research suggests that these factors can influence patient satisfaction and engagement in healthcare settings more broadly. It is important to consider the context and specific healthcare setting when assessing the impact of race/ethnicity and SES on patient satisfaction.
Instruction: Assessing the prevalence of hypertension in populations: are we doing it right? Abstracts: abstract_id: PUBMED:12640244 Assessing the prevalence of hypertension in populations: are we doing it right? Background: Although it is well recognized that the diagnosis of hypertension should be based on blood pressure (BP) measurements taken on several occasions, notably to account for a transient elevation of BP on the first readings, the prevalence of hypertension in populations has often relied on measurements at a single visit. Objective: To identify an efficient strategy for assessing reliably the prevalence of hypertension in the population with regards to the number of BP readings required. Design: Population-based survey of BP and follow-up information. Setting And Participants: All residents aged 25-64 years in an area of Dar es Salaam (Tanzania). Main Outcome Measures: Three BP readings at four successive visits in all participants with high BP (n = 653) and in 662 participants without high BP, measured with an automated BP device. Results: BP decreased substantially from the first to third readings at each of the four visits. BP decreased substantially between the first two visits but only a little between the next visits. Consequently, the prevalence of high BP based on the third reading--or the average of the second and third readings--at the second visit was not largely different compared to estimates based on readings at the fourth visit. BP decreased similarly when the first three visits were separated by 3-day or 14-day intervals. Conclusions: Taking triplicate readings on two visits, possibly separated by just a few days, could be a minimal strategy for assessing adequately the mean BP and the prevalence of hypertension at the population level. A sound strategy is important for assessing reliably the burden of hypertension in populations. abstract_id: PUBMED:1867236 Alcohol abuse: comparison of two methods for assessing its prevalence and associated morbidity in hospitalized patients. Purpose: To evaluate two methods for assessing the prevalence of alcohol abuse in hospitalized patients based upon scores on standardized alcoholism screening instruments compared with diagnostic discharge data, and to determine the risk for comorbid conditions in patients who abuse alcohol. Patients And Methods: Of 2,534 consecutive patients admitted to five adult inpatient services of an academic center, 1,964 were screened for alcohol abuse using the CAGE and the SMAST. Their discharge diagnoses were obtained and analyzed for the presence of alcohol-related diagnoses and other comorbid conditions. Results: A total of 1.4% of patients had a principal alcohol-related diagnosis (ARD), 6% had a secondary but no principal ARD, and 15% screened positive for alcohol abuse but had no ARD. The overall prevalence of alcohol abuse was 22.4%. Patients with a principal ARD had a higher risk for dementia, chronic obstructive pulmonary disease (COPD), pancreatitis, sequelae of liver disease, and illegal drug abuse. Patients with a secondary ARD were at risk for 19 comorbid conditions, including pancreatitis, injury, pneumonia, COPD, and poly-drug abuse. Patients who screened positive for alcohol abuse but had no ARD were significantly more likely to have a diagnosis of hypertension, arrhythmia, breast cancer, or pelvic inflammatory disease. Conclusion: Discharge diagnoses alone markedly underestimate the prevalence of alcohol abuse in hospitalized patients.
Patients from the three groups are at higher risk for comorbid conditions, and secondary prevention of alcohol abuse can be achieved by routinely screening every patient using recognized alcoholism screening instruments. abstract_id: PUBMED:11715168 Prevalence estimates for hypertension in Latin America and the Caribbean: are they useful for surveillance? Objective: To apply a recently proposed model and assessment tool created by the authors for critically evaluating the data available on the prevalence of hypertension in LAC and assessing their usefulness for surveillance. Methods: A bibliographic search to identify all publications that estimated the prevalence of hypertension was performed. Each of the papers located was assessed using a critical appraisal tool. Results: Of the 58 studies published between 1966 and 2000, only 28 of them (48%) met the critical threshold to be considered useful for surveillance purposes. The distribution of the 28 studies in terms of their usefulness for surveillance was as follows: minimally useful, 16 studies; useful, 8 studies; and very useful, 4 studies. Several methodological shortcomings were identified, from inadequate sampling procedures and sample size to the poor quality of the primary data for planning purposes. Discussion: Published studies on the prevalence of hypertension in Latin America and the Caribbean have, as a whole, limited usefulness for surveillance activities. abstract_id: PUBMED:8009030 Prevalence of arterial hypertension in the Brittany population. The epidemiology of arterial hypertension was studied from 1988 to 1991 in a region in the west of France comprising 500,000 inhabitants. The prevalence of arterial hypertension was calculated to be 16.2%, significantly higher in men than in women (22.5% vs 11.2%). abstract_id: PUBMED:15787304 Prevalence of dyslipidaemias in a representative sample of the French population. Hypercholesterolaemia is a major risk factor for coronary atherosclerosis. The prevalence of other types of dyslipidaemia in the general population remains poorly defined. This study was performed to measure the prevalence of various dyslipidaemias in the French population. A representative sample of 3508 men and women between the ages of 35 and 64 years was recruited by the "Multinational MONItoring of trends and determinants in CArdiovascular disease" centres of Lille, Strasbourg and Toulouse. We excluded 162 patients suffering from known cardiovascular disorders, and 409 individuals treated with lipid-lowering drugs. The prevalence of pure hypercholesterolaemia, defined as a total cholesterol concentration >6.2 mmol/l (2.4 g/l) and triglyceride concentration <2.3 mmol/l (2 g/l), was 30% (29-32%). The prevalence of HDL cholesterol concentration <1 mmol/l (0.4 g/l) in men, or <1.3 mmol/l (0.5 g/l) in women, was 12% (11-13%). The prevalence of mixed hyperlipidaemia, defined as a total cholesterol concentration >6.2 mmol/l (2.4 g/l) and triglyceride concentration >2.3 mmol/l (2 g/l) was 5% (4-6%). The prevalence of hypertriglyceridaemia, defined as a total cholesterol concentration <6.2 mmol/l (2.4 g/l) and triglyceride concentration >2.3 mmol/l (2 g/l) was 4% (3-5%). Low HDL cholesterol concentrations were associated with smoking, obesity, and absence of either regular physical exercise or alcohol consumption.
This study confirmed the high prevalence of pure hypercholesterolaemia, and revealed an important prevalence of low HDL cholesterol concentration, which represents a major cardiovascular risk factor. abstract_id: PUBMED:33853192 Study of the Prevalence of Glaucoma in Kazakhstan. Background: Glaucoma is one of the leading causes of permanent visual disability around the world. However, the available literature lacks data on the prevalence of glaucoma in Central Asia, particularly in the Republic of Kazakhstan. Objective: The study was aimed at assessing the prevalence of glaucoma in the population of the Republic of Kazakhstan over 40 years old in 2019. Methods: A retrospective study was based on the analysis of the results of glaucoma screenings in 171 832 patients over 40 years old living in Kazakhstan (in 14 counties). Glaucoma cases were confirmed by Goldmann tonometry, fundus photography, and visual field testing. Demographic indicators, territorial differences, and hereditary predisposition were studied and analysed. In addition, blood pressure was measured. Results: Of 171 832 patients examined, 452 with verified glaucoma were identified. The average age of the patients was 63.9 ± 9.4 years. In rural areas, the prevalence of glaucoma was higher compared to the urban population. The overall prevalence of glaucoma among people over 40 years old was 2.37 ± 0.17. The prevalence of glaucoma among women was higher than for men, with an indicator of 1.91 (95% CI relative risk 1.78 - 2.03) (p < 0.05). The highest prevalence was found in the 71 - 75 age group [equal to 14.2% (95% CI 11.7 - 19.9)], with a statistically significant difference (p < 0.05). The highest prevalence of glaucoma was observed in the group of people with a hereditary predisposition, with an indicator of 14.7% (95% CI 0.6 - 1.9) (p < 0.05). Among all patients with concomitant arterial hypertension (n = 90, 19.9%), women (60%) compared with men (40%) had a 2.4% higher risk of glaucoma morbidity (95% CI 1.2% - 3.8%). Conclusion: This study provides updated information on the prevalence of glaucoma in Kazakhstan. The results obtained confirm that the increase in the prevalence of glaucoma in Kazakhstan is directly proportional to the increase in the patients' age. These results showed the importance of screening for a timely diagnosis, especially for patients with high risk factors such as hereditary predisposition. Moreover, the results indicate that the early detection of systemic hypertension and increased intraocular pressure can be used for the prevention of undesirable outcomes such as irreversible blindness. abstract_id: PUBMED:19019517 Disease prevalence in the English population: a comparison of primary care registers and prevalence models. The Quality and Outcomes Framework (QOF) is a UK system for monitoring general practitioner (GP) activity and performance, introduced in 2004. The objective of this paper is to explore the potential of QOF datasets as a basis for better understanding geographical variations in disease prevalence in England. In an ecological study, prevalence estimates for four common disease domains (coronary heart disease (CHD), asthma, hypertension and diabetes) were derived from the 2004-2005 QOF primary care disease registers for 354 English Local Authority Districts (LADs). These were compared with synthetic estimates from four prevalence models and with self-reported measures of general health from the 2001 census.
Prevalence models were recalculated for LADs using demographic and deprivation data from the census. Results were mapped spatially and cross-tabulated against a national classification of local authorities. The four disease domains display different spatial distributions and different spatial relationships with the corresponding prevalence model. For example, the prevalence model for CHD under-estimated QOF cases in northern England, but this north-south pattern was not evident for the other disease domains. The census-derived health measures were strongly correlated with CHD, but not with the other disease domains. The relationship between modelled prevalence and QOF disease registers differs by disease domain, implying that there is no simple cross-domain effect of the QOF process on prevalence figures. Given reliable synthetic estimates of small area prevalence for the QOF disease domains, one potential application of the QOF dataset may be in assessing the geographical extent of under-diagnosis for each domain. abstract_id: PUBMED:31342672 A systematic review and meta-analysis estimating the population prevalence of comorbidities in children and adolescents aged 5 to 18 years. Evidence for the health impact of obesity has largely focussed on adults. We estimated the population prevalence and prevalence ratio of obesity-associated comorbidities in children and adolescents aged 5 to 18 years. Five databases were searched from inception to 14 January 2018. Population-based observational studies reporting comorbidity prevalence by weight category (healthy weight/overweight/obese) in children and adolescents aged 5 to 18 years from any country were eligible. Comorbidity prevalence, stratified by weight category, was extracted and prevalence ratios (relative to healthy weight) estimated using random effects meta-analyses. Of 9183 abstracts, 52 eligible studies (1 553 683 participants) reported prevalence of eight comorbidities or risk markers including diabetes and nonalcoholic fatty liver disease (NAFLD). Evidence for psychological comorbidities was lacking. Meta-analyses suggested prevalence ratio for prediabetes (fasting glucose ≥ 100 mg/dL) for those with obesity relative to those of a healthy weight was 1.4 (95% confidence interval [CI], 1.2-1.6) and for NAFLD 26.1 (9.4-72.3). In the general population, children and adolescents with overweight/obesity have a higher prevalence of comorbidities relative to those of a healthy weight. This review provides clinicians with information when assessing children and researchers a foundation upon which to build a comprehensive dataset to understand the health consequences of childhood obesity. abstract_id: PUBMED:31544107 High Blood Pressure Prevalence, Awareness, Control, and Associated Factors in a Low-Resource African Setting. Background and Objectives: Recent and contextualized data are needed to improve hypertension management known as a major cardiovascular disease risk factor regardless of the geographical area. This study aimed at assessing the prevalence of hypertension, awareness of hypertensive status, treatment, and control of hypertension as well as assessing the factors associated with risk of hypertension and awareness of hypertensive status in the population of Ngaoundere. Methods: This was a community based cross sectional study carried out from February to December 2016. A three-stage sampling method was used for recruitment of participants. 
Demographic, clinical, and biological data were collected and analyzed using Statistical Package for Social Sciences version 20.0. Statistical significance was set at P < 0.05. Results: In total, 948 participants were included in the study. The overall prevalence of hypertension was 46.94% (n = 445). The fraction of hypertensive participants who were aware of their status was 36.85% (n = 164). Among them, 39 (23.78%) were getting treatment and the control rate of treated hypertensives was 30.56%. Age, marital status, family history of hypertension, overweight, and high serum triglyceride level were identified as independent predicting factors of hypertension, whereas female gender, age, personal history of stroke or diabetes, family history of hypertension or heart failure, overweight, and abdominal obesity were those of hypertension awareness. Conclusion: The present study revealed high prevalence, extremely low awareness, treatment, and control rates of hypertension in the Ngaoundere community setting. abstract_id: PUBMED:38424431 Qatar's Silent Epidemic: A Comprehensive Meta-analysis on the Prevalence of Metabolic Syndrome. Background: Metabolic syndrome comprises various conditions like abdominal obesity, insulin resistance, elevated triglyceride levels, reduced HDL, and high blood pressure, which pose significant health challenges globally. It's imperative to determine its prevalence in specific populations to formulate effective preventive measures. Objective: This systematic review and meta-analysis aimed to determine the prevalence of metabolic syndrome in the Qatari population. Methods: Using the PRISMA guidelines, a systematic search was executed on PubMed until July 2023 with keywords "Metabolic syndrome" and "Qatar." Eligibility criteria included human subjects, studies assessing metabolic syndrome components, and research conducted in Qatar or on Qatari subjects. The quality of the studies was evaluated using the Newcastle-Ottawa Scale (NOS). Pooled prevalence rates were calculated using inverse variance weighting meta-analysis. Results: Out of 237 studies, 14 met our inclusion criteria, with a combined sample size of 14,772 from the Qatari population. The overall pooled prevalence of metabolic syndrome was 26%. The ATP III and IDF criteria exhibited significant differences in prevalence rates, with the IDF criteria showing a higher prevalence. Age ≥ 40 years demonstrated a higher prevalence compared to the younger group. Studies post-2018 reported a decreasing trend in metabolic syndrome prevalence. Conclusion: The prevalence of metabolic syndrome in the Qatari population is comparable to rates in the Middle East. The study underscores the need for tailored interventions and strategies, especially targeting the older age group. Continuous research and monitoring are essential to track and understand the disease's progression in Qatar. Answer: Assessing the prevalence of hypertension in populations is a critical public health activity, but the methodology used can significantly impact the accuracy of the results. The study from Dar es Salaam, Tanzania, suggests that the diagnosis of hypertension should be based on blood pressure (BP) measurements taken on several occasions to account for transient elevations in BP.
The study found that BP decreased substantially from the first to third readings at each of the four visits and that the prevalence of high BP based on the third reading or the average of the second and third readings at the second visit was not largely different compared to estimates based on readings at the fourth visit. This indicates that taking triplicate readings on two visits, possibly separated by just a few days, could be a minimal strategy for assessing the mean BP and the prevalence of hypertension at the population level (PUBMED:12640244). However, the literature review on the prevalence of hypertension in Latin America and the Caribbean (LAC) found that many published studies have limited usefulness for surveillance activities due to methodological shortcomings such as inadequate sampling procedures, sample size, and poor quality of primary data (PUBMED:11715168). This highlights the need for more rigorous methodologies to ensure reliable surveillance of hypertension prevalence. In contrast, a study on the prevalence of arterial hypertension in the Brittany population reported a prevalence of 16.2%, with a higher rate in men than in women (PUBMED:8009030). This study did not discuss the methodology in detail, but the reported prevalence aligns with global patterns of hypertension being more common in men. Overall, these findings suggest that while there are established methods for assessing the prevalence of hypertension, there is still room for improvement in the methodology to ensure accurate and reliable surveillance. This is crucial for public health planning and intervention strategies to manage and prevent hypertension at the population level.
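To make the measurement strategy described above concrete, here is a minimal sketch (in Python) that averages the second and third readings from each of two visits and applies a conventional 140/90 mmHg cut-off. The cut-off, the data layout, and the function names are illustrative assumptions for this sketch, not details taken from the cited study.

```python
# Minimal sketch of the two-visit, triplicate-reading strategy: discard the first
# reading of each visit, average the rest, and classify against an assumed
# 140/90 mmHg threshold (the threshold is an assumption for illustration).

def mean_bp(visits):
    """visits: one list per visit of (systolic, diastolic) triplicate readings."""
    kept = [r for visit in visits for r in visit[1:]]  # drop the first reading of each visit
    systolic = sum(r[0] for r in kept) / len(kept)
    diastolic = sum(r[1] for r in kept) / len(kept)
    return systolic, diastolic

def is_hypertensive(visits, threshold=(140.0, 90.0)):
    sbp, dbp = mean_bp(visits)
    return sbp >= threshold[0] or dbp >= threshold[1]

# Example: two visits, three readings each; the elevated first readings are ignored.
visits = [[(152, 96), (141, 90), (138, 88)],
          [(144, 92), (137, 87), (135, 86)]]
print(mean_bp(visits), is_hypertensive(visits))
```

In a real survey, participants on antihypertensive treatment would normally also be counted as hypertensive; that detail is omitted here for brevity.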
Instruction: Perceptions of Peer Sexual Behavior: Do Adolescents Believe in a Sexual Double Standard? Abstracts: abstract_id: PUBMED:16488828 The role of peer, parent, and culture in risky sexual behavior for Cambodian and Lao/Mien adolescents. Purpose: The purpose of this study was to investigate the role of age, gender, peer, family, and culture in adolescent risky sexual behavior for Cambodian and Laotian (Lao)/Mien youth. Methods: We obtained cross-sectional, in-home interview data including measures of individualism, collectivism, acculturation, risky sexual behavior, peer delinquency, parent engagement, and parent discipline from a sample of mostly second-generation Cambodian (n = 112) and Lao/Mien (n = 67) adolescents. Data were analyzed using step-wise, hierarchical multiple regressions. Results: Peer delinquency and age (older) were significant predictors of risky sexual behavior in both groups. Parent discipline also significantly predicted risky sexual behavior, but only for Lao/Mien adolescents. Vertical and horizontal individualism were associated positively with risky sexual behavior for Cambodian youth whereas collectivism (horizontal) was associated negatively with risky sexual behavior for Lao/Mien youth. Acculturation was nonsignificant in both groups. Conclusions: In addition to age, parents, and peer groups, the findings suggest that culture also matters in risky sexual behavior, particularly for Cambodian and Laotian youth. abstract_id: PUBMED:11512489 The relationship of adolescent perceptions of peer norms and parent involvement to cigarette and alcohol use. This investigation assessed the relative influence of peer norms and parental involvement on adolescent cigarette and alcohol use. An anonymous questionnaire was administered to 2,017 seventh- to 12th-grade students in two Ohio public school districts. Cigarette and alcohol use rates in the sample were comparable to those found in national probability surveys. Results indicated that the relative balance of peer-parent influences did not differ across grade level. At all grade levels, perceived peer norms had substantially greater correlations with cigarette and alcohol use than did measures of perceived parental involvement. The findings are interpreted from an efficiency perspective. Optimal use of prevention resources suggest that programming for seventh- to 12th-graders should focus on shaping the perceptions of peer smoking and drinking practices rather than on parent interventions. Social norms marketing or other forms of normative education should be tested in this population. abstract_id: PUBMED:34536716 Impacts of the respecting the circle of life teen pregnancy prevention program on risk and protective factors for early substance use among native American youth. Background: Early substance use disproportionately impacts Native American (Native) youth and increases their risk for future abuse and dependence. The literature urges for interventions to move beyond focusing on single risk behaviors (e.g. substance use) and instead have capacity to improve health risk behaviors co-occuring during adolescence, particularly among Native populations for whom few evidence-based interventions (EBI) exist. We evaluated the effectiveness of the Respecting the Circle of Life program (RCL) on risk and protective factors for early substance use. RCL is a culturally tailored EBI shown to improve sexual health outcomes among Native youth. 
Methods: We conducted secondary analyses of data collected through a community-based randomized controlled trial of RCL evaluated among Native youth (ages 11-19) residing on a rural reservation between 2015-2020 (N = 534, 47.4 % male). We used linear regression, controlling for baseline age and sex, to test between study group differences in outcomes at 3-, 9-, and 12-month post-intervention. Models were stratified by sex and age (11-12, 13-14, and 15+ years of age) to examine differences within these subgroups. Results: Youth receiving RCL reported lower intention to use substances through 12-months follow-up (p = 0.006). Statistically significant improvements were also observed across peer, parent, and sexual partner risk and protective factors to delay substance use initiation, with notable differences among boys and participants ages 13-14. Conclusions: RCL is a primary prevention, skills-based program effective in preventing risks for substance use. This evaluation underscores the value in developing programs that influence concurrent adolescent risk behaviors, especially for Native communities who endure multiple health disparities. abstract_id: PUBMED:8500453 Decision-making orientation and AIDS-related knowledge, attitudes, and behaviors of Hispanic, African-American, and white adolescents. How adolescents' personal sense of directedness (i.e., peer, parent, or self-directed orientation) affects the decision-making processes of adolescent students regarding AIDS-related knowledge, attitudes, beliefs, behaviors, and skills (KABBS) is examined. The sample consisted of 10th-grade students in 8 public high schools (N = 2,515) in Dade County (greater Miami), Florida. The findings showed that decision-making orientation and directedness was a significant predictor of AIDS-related KABBS of adolescents. Overall, the level of AIDS-related KABBS that were associated with low risk was found significantly more often among self-directed students and least often among peer-directed students. The findings of this study suggest that future preadult health-risk research should incorporate the concept of differences of information processing across adolescents. abstract_id: PUBMED:36795980 Mediators of Social Acceptance Among Emerging Adult Survivors of Childhood Cancer. Purpose: Examine associations of social developmental factors (e.g., peer/parent social attachment, romantic relationships) and perceptions of social acceptance among emerging adult survivors of childhood cancer. Methods: A cross-sectional, within-group design was used. Questionnaires included the Multidimensional Body-Self Relations Questionnaire, Inventory of Parent and Peer Attachment, Adolescent Social Self-Efficacy Scale, Personal Evaluation Inventory, Self-Perception Profile for Adolescents, and demographics. Correlations were utilized to determine associations between general demographic, cancer-specific, and the psychosocial outcome variables. Peer and romantic relationship self-efficacy were assessed as potential mediators of social acceptance in three mediation models. Relationships between perceived physical attractiveness, peer attachment, parental attachment, and social acceptance were assessed. Results: Data were collected from N = 52 adult participants (Mage = 21.38 years, standard deviation = 3.11 years) diagnosed with cancer as a child. 
The first mediation model demonstrated a significant direct effect of perceived physical attraction on perceived social acceptance and retained significance after adjusting for indirect effects of the mediators. The second model demonstrated a significant direct effect of peer attachment on perceived social acceptance; however, significance was not retained after adjusting for peer self-efficacy, suggesting the relationship is partially mediated by peer relationship self-efficacy. The third model demonstrated a significant direct effect of parent attachment on perceived social acceptance; however, significance was not retained after adjusting for peer self-efficacy, suggesting the relationship is partially mediated by peer self-efficacy. Conclusion: Relationships between social developmental factors (e.g., parental and peer attachment) and perceived social acceptance are likely mediated by peer relationship self-efficacy in emerging adult survivors of childhood cancer. abstract_id: PUBMED:10777974 Factors associated with delayed tobacco uptake among Vietnamese/Asian and Arabic youth in Sydney, NSW. Objective: To describe the smoking behaviour and possible reasons for delayed uptake of tobacco smoking among Arabic and Vietnamese/Asian speaking senior school students in Sydney Method: A descriptive study involving four adult in-depth interviews and five student focus groups plus a quantitative survey of 2,573 school students attending Years 10 and 11 from 12 high schools with high Vietnamese and Arabic populations was conducted in Sydney in 1998. Self-reported smoking behaviour and peer, parent, school and cultural background information was collected. Results: Students who smoke were more likely to have more than $20 a week pocket money, be from an English-speaking background, have no rules at home about smoking, have family members who smoke, not feel close to their father, spend three or more evenings a week out with friends, and have negative perceptions of the school environment and of the level of teacher support. They were less likely to smoke if they perceived their peers to be unsupportive. Conclusions: These results confirm the delayed uptake of smoking among students from a Vietnamese/Asian and Arabic-speaking backgrounds compared with those from an English-speaking background. A number of family and school factors were associated with smoking. Implications: Positive parental modelling, active parenting including awareness of or supervision of student leisure time, strict rules about not smoking and less pocket money are important strategies for preventing smoking among all adolescents. abstract_id: PUBMED:9255694 A competency-based model of child depression: a longitudinal study of peer, parent, teacher, and self-evaluations. In a two-wave longitudinal study of third and sixth graders (N = 617), we obtained self-reports of depression and peer, teacher, parent, and self-reports of competence in five domains: academic, social, attractiveness, conduct, and athletic. Competency evaluations by others predicted change in self-perceived competence over time for girls, but not for boys. Depression predicted change in self-perceived competence over time for boys but not for girls. Among girls, the relative importance of parent, teacher, and peer appraisals shifted from third to sixth grade. For both boys and girls, self-perceptions of competence predicted change in depression scores over time. 
Furthermore, self-perceived competencies mediated the relation between competency appraisals by others and children's self-reported depression. Results are interpreted in light of a competency-based model of child depression. abstract_id: PUBMED:22216994 Everyone says it's ok: adolescents' perceptions of peer, parent, and community alcohol norms, alcohol consumption, and alcohol-related consequences. An adolescent's perception of norms is related to her or his engagement in alcohol-related behaviors. Norms have different sources, such as parents, peers, and community. We explored how norms from different sources were simultaneously related to different alcohol-related behaviors (current drinking, drunkenness, heavy episodic drinking, driving under the influence or riding with a impaired driver, and alcohol-related nonviolent consequences) using data collected in 2004 from 6,958 adolescents from 68 communities in five states. Results revealed that parent, friend, and community norms were related to adolescents' alcohol-related behavior, but the strength of these impacts varied across behaviors. The pattern of results varied when the analysis relied on all adolescents or just those who had consumed alcohol in the last year. abstract_id: PUBMED:37688471 Peer-to-Peer Human Milk-Sharing Among Israeli Milk Donors: A Mixed-Methods Study in the Land of Milk and Honey. Background: Evidence is lacking on the phenomenon of peer-to-peer human milk-sharing in the Middle East, specifically, in Israel. Research Aims: This study aimed to uncover peer-to-peer human milk-sharing in Israel, learn about how and whether donors engage in safe milk handling and storage practices, and assess knowledge about human milk and breastfeeding among this milk-sharing population. We also aimed to investigate donors' selectiveness in their decisions about to whom they donate their milk and their perceptions about the sale and purchase of human milk. Methods: We conducted a semi-structured online survey, including both closed- and open-ended questions and used mixed methods to analyze responses descriptively. We used non-probability sampling to obtain a broad sample of human milk donors. Results: Out of 250 completed surveys, most participants (87.2%, n = 218) reported engaging in safe milk-sharing practices and were generally knowledgeable about the health risks associated with milk-sharing. Participant religiosity was associated with somewhat lower hygiene practices (r = -0.15, p ≤ .05). Most of the participants (81.7%, n = 190) were against the sale of human milk. Participants generally expressed no preference about the recipient of their milk, with some exceptions. Conclusion: The milk-handling and storage practices of the participants in this study suggest a need to improve knowledge and awareness of safe milk storage temperature and the importance of washing hands before pumping milk, particularly within the religious sector. We propose that guidelines about safe milk-sharing practices be written and adopted by the Israeli Ministry of Health, and communicated through pediatricians, family doctors, nurses in Mother and Child Clinics (In Hebrew: Tipat Halav), and social media. abstract_id: PUBMED:17308084 Activation of TLX3 and NKX2-5 in t(5;14)(q35;q32) T-cell acute lymphoblastic leukemia by remote 3'-BCL11B enhancers and coregulation by PU.1 and HMGA1. 
In T-cell acute lymphoblastic leukemia, alternative t(5;14)(q35;q32.2) forms effect dysregulation of either TLX3 or NKX2-5 homeobox genes at 5q35 by juxtaposition with 14q32.2 breakpoints dispersed across the BCL11B downstream genomic desert. Leukemic gene dysregulation by t(5;14) was investigated by DNA inhibitory treatments with 26-mer double-stranded DNA oligonucleotides directed against candidate enhancers at, or near, orphan T-cell DNase I hypersensitive sites located between 3'-BCL11B and VRK1. NKX2-5 down-regulation in t(5;14) PEER cells was almost entirely restricted to DNA inhibitory treatment targeting enhancers within the distal breakpoint cluster region and was dose and sequence dependent, whereas enhancers near 3'-BCL11B regulated that gene only. Chromatin immunoprecipitation assays showed that the four most effectual NKX2-5 ectopic enhancers were hyperacetylated. These enhancers clustered approximately 1 Mbp downstream of BCL11B, within a region displaying multiple regulatory stigmata, including a TCRA enhancer motif, deep sequence conservation, and tight nuclear matrix attachment relaxed by trichostatin A treatment. Intriguingly, although TLX3/NKX2-5 promoter/exon 1 regions were hypoacetylated, their expression was trichostatin A sensitive, implying extrinsic regulation by factor(s) under acetylation control. Knockdown of PU.1, known to be trichostatin A responsive and which potentially binds TLX3/NKX2-5 promoters, effected down-regulation of both homeobox genes. Moreover, genomic analysis showed preferential enrichment near ectopic enhancers of binding sites for the PU.1 cofactor HMGA1, the knockdown of which also inhibited NKX2-5. We suggest that HMGA1 and PU.1 coregulate ectopic homeobox gene expression in t(5;14) T-cell acute lymphoblastic leukemia by interactions mediated at the nuclear matrix. Our data document homeobox gene dysregulation by a novel regulatory region at 3'-BCL11B responsive to histone deacetylase inhibition and highlight a novel class of potential therapeutic target amid noncoding DNA. Answer: The provided abstracts do not directly address the question of whether adolescents believe in a sexual double standard, which refers to the concept that different sexual behaviors are judged differently based on the gender of the individual. The sexual double standard typically suggests that men are rewarded or praised for sexual activity, while women are stigmatized or judged negatively for the same behaviors. However, some of the abstracts do touch on aspects of adolescent behavior and the influence of peers, parents, and culture on their actions, including sexual behavior. For instance, the study on Cambodian and Lao/Mien adolescents (PUBMED:16488828) found that peer delinquency and age were significant predictors of risky sexual behavior in both groups, and that cultural factors also played a role in such behavior. This suggests that adolescents' perceptions of acceptable sexual behavior may be influenced by their peers and cultural background, which could potentially include beliefs about a sexual double standard, although this is not explicitly stated. Similarly, the study on the relationship of adolescent perceptions of peer norms and parent involvement to cigarette and alcohol use (PUBMED:11512489) indicates that adolescents are influenced by their perceptions of what their peers consider normal behavior. 
While this study does not specifically address sexual behavior, it does highlight the importance of peer influence on adolescent behavior, which could extend to beliefs about sexual norms, including a sexual double standard. The other abstracts focus on various aspects of adolescent behavior, risk factors, and influences from peers, parents, and community (PUBMED:34536716, PUBMED:8500453, PUBMED:36795980, PUBMED:10777974, PUBMED:9255694, PUBMED:22216994, PUBMED:37688471, PUBMED:17308084), but do not provide information on the specific question of a sexual double standard among adolescents. Therefore, while the abstracts provide valuable context on the factors influencing adolescent behavior, they do not offer a direct answer to the question about the existence of a sexual double standard in adolescents' perceptions.
Instruction: Do Gender and Race Make a Difference in Acute Coronary Syndrome Pretest Probabilities in the Emergency Department? Abstracts: abstract_id: PUBMED:27862670 Do Gender and Race Make a Difference in Acute Coronary Syndrome Pretest Probabilities in the Emergency Department? Objectives: The objective was to test for significant differences in subjective and objective pretest probabilities for acute coronary syndrome (ACS) in a large cohort of chest pain patients stratified by race or gender. Secondarily we wanted to test for any differences in rates of ACS, rates of 90-day returns, cost, and chest radiation exposure after these stratifications. Methods: This is a secondary analysis of a prospective outcomes study of ED patients with chest pain and shortness of breath. We performed two separate analyses. The data set was divided by gender for analysis 1 while the analysis 2 stratification was made by race (nonwhite vs. white). For each analysis, groups were compared on several variables: provider visual analog scales (VAS) for likelihood of ACS, PREtest Consult ACS probabilities, rates of ACS, total radiation exposure to the chest, total costs at 30 days, and 90-day recidivism (ED, overnight observations, and inpatient admissions). Results: A total of 844 patients were studied. Gender information was present on all 844 subjects, while complete race/ethnicity information was available on 783 (93%) subjects. For the first analysis, female patients made up 57% (478/844) of the population and their mean provider VAS scores for ACS were significantly lower (p = 0.000) at 14% (95% confidence interval [CI] = 13% to 16%) than that of males at 22% (95% CI = 19% to 24%). This was consistent with the objective pretest ACS probabilities subsequently calculated via the validated online tool, PREtest Consult, which were also significantly lower (p = 0.000) at 2.7% (95% CI = 2.4% to 3.1%) for females versus 6.6% (95% CI = 5.9% to 7.3%) for males. However, comparing females to males, there was no significant difference in diagnosis of ACS (3.6% vs. 1.6%), mean chest radiation doses (5.0 mSv vs. 4.9 mSv), total costs at 30 days ($3,451.24 vs. $3,847.68), or return to the ED within 90 days (26% each). For analysis 2 by race, nonwhite patients also comprised 57% (444/783) of individuals. Similar to the gender analysis, mean provider VAS scores for ACS were found to be significantly lower (p = 0.000) at 15% (95% CI = 13% to 16%) for nonwhite versus 20% (95% CI = 18% to 23%) for white subjects. Concordantly, objective pretest ACS probabilities were also significantly lower (p = 0.000) at 3.4% (95% CI = 2.9% to 3.9%) for nonwhite versus 5.3% (95% CI = 4.7% to 5.9%) for white subjects. There were no significant differences in outcomes in nonwhite versus white subjects when compared on diagnosis of ACS (3.2% vs 2.4%), mean chest radiation dose (4.6 mSv vs. 5.0 mSv), cost ($3,156.02 vs. $2,885.18), or 90-day ED returns (28% vs. 23%). Conclusions: Despite consistently estimating the risk for ACS to be lower for both females and minorities concordantly with calculated objective pretest assessments, there does not appear to have been any significant decrease in subsequent evaluation of these perceived lower-risk groups when radiation exposure and costs are taken into account. Further studies on the impact of pretest assessments on gender and racial disparities in ED chest pain evaluation are needed. 
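The abstract above (PUBMED:27862670) reports group-wise comparisons of subjective estimates, objective pretest probabilities, and downstream outcomes by gender and by race. A minimal sketch of that kind of stratified tabulation is shown below, assuming a per-patient pandas DataFrame; the column names are hypothetical and this is not the authors' actual analysis code.

```python
# Illustrative stratified summary for a chest-pain cohort. Assumed (hypothetical)
# columns: provider_vas (0-100), pretest_prob (0-1), acs (0/1), chest_msv,
# cost_30d, return_90d (0/1), plus a grouping column such as gender or race_group.
import pandas as pd

def stratified_summary(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    return df.groupby(group_col).agg(
        n=("acs", "size"),
        mean_vas=("provider_vas", "mean"),
        mean_pretest=("pretest_prob", "mean"),
        acs_rate=("acs", "mean"),
        mean_chest_msv=("chest_msv", "mean"),
        mean_cost_30d=("cost_30d", "mean"),
        return_90d_rate=("return_90d", "mean"),
    )

# Usage: stratified_summary(cohort, "gender") or stratified_summary(cohort, "race_group"),
# followed by the appropriate significance tests for each outcome.
```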
abstract_id: PUBMED:31822652 Inequality in the management of ischemic chest pain in the emergency department from a gender perspective. Objective: Sex is a determining factor in the differences with which men and women are treated in the emergency room. The objective was to analyze the profile of patients with chest pain attended in the emergency department, and the gender inequalities in diagnosis and treatment. Methods: Descriptive observational study of patients with ischemic chest pain who attended the Miguel Servet University Hospital emergency department during 2017. Sociodemographic and clinical variables of treatment and evolution were analyzed. Bivariate and multivariate analyses were performed with the statistical program SPSS. Results: 351 cases were registered (235 men and 116 women). The women were older (median age 75.5 years vs. 71.4 years in men, p=0.003), attended more often during the summer (p=0.021), and more often took benzodiazepines (p=0.001), antidepressants (p<0.001) and diuretic drugs (p=0.039). The women had a greater proportion of arterial hypertension (p=0.001). The men attended the emergency department more often during the autumn (p=0.008), and more often had a history of ischemic heart disease (p=0.003) and percutaneous coronary intervention (p<0.001). The time to completion of the first electrocardiogram was longer in women (p<0.001), and women were more frequently diagnosed with atypical chest pain (p=0.003), whereas men were more often diagnosed with acute coronary syndrome (p=0.028) and subjected to invasive treatment (p<0.001). Conclusions: There are differences according to sex in the antecedents, the delay in performing the first electrocardiogram, and the use of invasive treatment. Taking these differences into account in the emergency department, without the influence of value judgments and with values reported disaggregated by sex, can improve the care and outcomes of these patients. abstract_id: PUBMED:29535238 Retrospective Comparison of Cardiac Testing and Results on Inpatients with Low Pretest Probability Compared with Moderate/High Pretest Probability for Coronary Artery Disease. Objective: To determine whether admission and provocative stress testing of patients who have ruled out for acute coronary syndrome put patients in the low-risk category for coronary artery disease (CAD) at risk for false-positive provocative stress testing and unnecessary coronary angiogram/imaging. Methods: A retrospective chart review was performed on patients between 30 and 70 years old, with no pre-existing diagnosis of CAD, admitted to observation or inpatient status for chest pain or related complaints. Included patients were categorized based on the Duke Clinical Score for pretest probability for CAD into either a low-risk group or a moderate/high-risk group. The inpatient course was compared, including whether provocative stress testing was performed; results of stress testing; whether patients underwent further coronary imaging; and what the results of the further imaging showed. Results: 543 patients were eligible: 305 low pretest probability, and 238 moderate/high pretest probability. No difference was found in the rate of stress testing, relative risk (RR) = 1.01 (95% CI, 0.852 to 1.192; P = 0), or in the rate of positive or equivocal stress tests between the 2 groups: RR = 0.653 (95% CI, 0.415 to 1.028; P = .07). Low-pretest-probability patients had a lower likelihood of positive coronary imaging after stress test, RR = 0.061 (95% CI, 0.004 to 0.957; P = .001).
Conclusion: Follow-up provocative testing of all patients admitted/observed after emergency department presentation with chest pain is unlikely to find CAD in patients with low pretest probability. Testing all low-probability patients puts them at increased risk for unnecessary invasive confirmatory testing. Further prospective testing is needed to confirm these retrospective results. abstract_id: PUBMED:37434908 The Impact of Gender and Race When Using the GRACE ACS Score to Predict Mortality. Background: Acute coronary syndrome (ACS) causes significant global morbidity and mortality and requires early risk stratification. The global registry of acute coronary events (GRACE) score is a well-known, validated risk stratification system that does not include race and gender. We aimed to assess whether the addition of gender and race could add to the predictability of the GRACE score model. Methods: We performed a retrospective cohort study of 46 764 ACS patients from the files of a national healthcare system. We compared the predictability of the GRACE score in conjunction with gender and race versus the original GRACE score. Different possible associations of predictability were investigated and statistically calculated. The accuracy of the prediction models was assessed using the receiver operating characteristic curve and its respective area under the curve (AUC). We compared the AUC of the 2 models, with the significance set at a P value of less than .05. Results: Our comparison favored the original GRACE score over the modified prediction model with gender and race added (AUC = 0.838 and 0.839, respectively, P = .008). Although the P value comparing the AUC shows that the original GRACE was superior, due to our large dataset, the actual numbers are similar and may not be clinically significant. Gender and race were significantly associated with in-hospital mortality (P < .001, P = .002, respectively). However, this relationship disappeared in the multivariate analysis. Gender significantly predicted in-hospital mortality, with females 1.167 times more likely to die (P < .001). Non-white racial groups had lower in-hospital mortality than whites (OR: 0.823, P = .03). Conclusion: The GRACE score was valid in its original form and its ability to predict mortality was not substantially improved by including gender and race. abstract_id: PUBMED:31088589 Interventions to reduce emergency department door-to-electrocardiogram times: A systematic review. Objectives: We sought to identify emergency department interventions that lead to improvement in door-to-electrocardiogram (ECG) times for adults presenting with symptoms suggestive of acute coronary syndrome. Methods: Two reviewers searched Medline, Embase, CINAHL, and Cochrane CENTRAL from inception to April 2018 for studies in adult emergency departments with an identifiable intervention to reduce median door-to-ECG times when compared with the institution's baseline. Quality was assessed using the Quality Improvement Minimum Quality Criteria Set critical appraisal tool. The primary outcome was the absolute median reduction in door-to-ECG times as calculated by the difference between the post-intervention time and pre-intervention time. Results: Two reviewers identified 809 unique articles, yielding 11 before-after quality improvement studies that met eligibility criteria (N = 15,622 patients). The majority of studies (10/11) reported bundled interventions, and most (10/11) showed statistical improvement in door-to-ECG times.
The most common interventions were having a dedicated ECG machine and technician in triage (5/11); improved triage education (4/11); improved triage disposition (2/11); and data feedback mechanisms (2/11). Conclusions: There are multiple interventions that show potential for reducing emergency department door-to-ECG times. Effective bundled interventions include having a dedicated ECG technician, triage education, and better triage disposition. These changes can help institutions attain best practice guidelines. Emergency departments must first understand their local context before adopting any single or group of interventions. abstract_id: PUBMED:37712970 Monitoring of emergency cardiovascular patients in the emergency department: Consensus paper of the DGK, DGINA and DGIIN. Patients with potential or proven cardiovascular diseases represent a relevant proportion of the total spectrum in the emergency department. Their monitoring for cardiovascular surveillance until diagnostics and acute treatment are initiated often poses an interdisciplinary and interprofessional challenge: resources are limited, yet a high level of patient safety has to be ensured, and the correct procedure has major prognostic significance. This consensus paper provides an overview of the practical implementation, the modalities of monitoring and the application in a selection of cardiovascular diagnoses. The article provides specific comments on the clinical presentations of acute coronary syndrome, acute heart failure, cardiogenic shock, hypertensive emergency events, syncope, acute pulmonary embolism and cardiac arrhythmia. The level of evidence is generally low as no randomized trials are available on this topic. The recommendations are intended to supplement or establish local standards and to assist all physicians, nursing personnel and the patients to be treated in making decisions about monitoring in the emergency department. abstract_id: PUBMED:24393411 A prospective pilot study of predictors of acute stroke in emergency department patients with dizziness. Objective: To prospectively examine undifferentiated emergency department (ED) patients with dizziness to identify clinical features associated with acute stroke. Patients And Methods: We conducted a pilot study from November 1, 2009, through October 30, 2010, of adult patients with dizziness presenting to 3 urban academic EDs. Data collected included demographic characteristics, medical history, presenting symptoms, examination findings, clinician pretest probability of stroke, and neuroimaging results. Logistic regression was used to identify variables with a significant association with acute stroke (P<.05). Results: During the study period, we enrolled 473 patients (mean ± SD age, 56.7±19.3 years; 60% female; and 71% white). We found 30 acute, serious diagnoses (6.3%), including 14 ischemic strokes, 2 subarachnoid hemorrhages, 7 mass lesions, 2 demyelinating lesions, 2 severe vertebral artery stenoses, 2 acute coronary syndromes, and 1 case of hydrocephalus and meningitis. We identified 6 clinical variables associated with stroke: age (odds ratio [OR], 1.04; 95% CI, 1.0-1.07), hyperlipidemia (OR, 3.62; 95% CI, 1.24-10.6), hypertension (OR, 4.91; 95% CI, 1.46-16.5), coronary artery disease (OR, 3.33; 95% CI, 1.06-10.5), abnormal tandem gait test result (OR, 3.13; 95% CI, 1.10-8.89), and high or moderate physician pretest probability for acute stroke (OR, 18.8; 95% CI, 4.72-74.5).
Conclusions: Most ED patients with dizziness do not have a serious cause of their symptoms. Although the small number of outcomes precluded development of a multivariate model, we identified several individual high-risk variables associated with acute ischemic stroke. Further study will be needed to validate the findings of this pilot investigation. abstract_id: PUBMED:16631984 Prospective multicenter study of quantitative pretest probability assessment to exclude acute coronary syndrome for patients evaluated in emergency department chest pain units. Study Objective: We compare the diagnostic accuracy of 3 methods--attribute matching, physician's written unstructured estimate, and a logistic regression formula (Acute Coronary Insufficiency-Time Insensitive Predictive Instrument, ACI-TIPI)--of estimating a very low pretest probability (< or = 2%) for acute coronary syndromes in emergency department (ED) patients evaluated in chest pain units. Methods: We prospectively studied 1,114 consecutive patients from 3 academic EDs, evaluated for acute coronary syndrome. Physicians collected data required for pretest probability assessment before protocol-driven chest pain unit testing. A pretest probability greater than 2% was considered "test positive." The criterion standard was the outcome of acute coronary syndrome (death, myocardial infarction, revascularization, or > 60% stenosis prompting new treatment) within 45 days, adjudicated by 3 independent reviewers. Results: Fifty-one of 1,114 enrolled patients (4.5%; 95% confidence interval [CI] 3.4% to 6.0%) developed acute coronary syndrome within 45 days, including 4 of 991 (0.4%; 95% CI 0.1% to 1.0%) patients, discharged after a negative chest pain unit evaluation result, who developed acute coronary syndrome. Unstructured estimate identified 293 patients with pretest probability less than or equal to 2%; 2 had acute coronary syndrome, yielding sensitivity of 96.1% (95% CI 86.5% to 99.5%) and specificity of 27.4% (95% CI 24.7% to 30.2%). Attribute matching identified 304 patients with pretest probability less than or equal to 2%; 1 had acute coronary syndrome, yielding a sensitivity of 98.0% (95% CI 89.6% to 99.9%) and a specificity of 26.1% (95% CI 23.6% to 28.7%). ACI-TIPI identified 56 patients; none had acute coronary syndrome, yielding sensitivity of 100% (95% CI 93.0% to 100%) and specificity of 6.1% (95% CI 4.7% to 7.9%). Conclusion: In a low-risk ED population with symptoms suggestive of acute coronary syndrome, patients with a quantitative pretest probability less than or equal to 2%, determined by attribute matching, unstructured estimate, or logistic regression, may not require additional diagnostic testing. abstract_id: PUBMED:29540019 Missed diagnoses of acute myocardial infarction in the emergency department: variation by patient and facility characteristics. Background: An estimated 1.2 million people in the US have an acute myocardial infarction (AMI) each year. An estimated 7% of AMI hospitalizations result in death. Most patients experiencing acute coronary symptoms, such as unstable angina, visit an emergency department (ED). Some patients hospitalized with AMI after a treat-and-release ED visit likely represent missed opportunities for correct diagnosis and treatment. The purpose of the present study is to estimate the frequency of missed AMI or its precursors in the ED by examining use of EDs prior to hospitalization for AMI.
Methods: We estimated the rate of probable missed diagnoses in EDs in the week before hospitalization for AMI and examined associated factors. We used Healthcare Cost and Utilization Project State Inpatient Databases and State Emergency Department Databases for 2007 to evaluate missed diagnoses in 111,973 admitted patients aged 18 years and older. Results: We identified missed diagnoses in the ED for 993 of 112,000 patients (0.9% of all AMI admissions). These patients had visited an ED with chest pain or cardiac conditions, were released, and were subsequently admitted for AMI within 7 days. Higher odds of having missed diagnoses were associated with being younger and of Black race. Hospital teaching status, availability of cardiac catheterization, high ED admission rates, high inpatient occupancy rates, and urban location were associated with lower odds of missed diagnoses. Conclusions: Administrative data provide robust information that may help EDs identify populations at risk of experiencing a missed diagnosis, address disparities, and reduce diagnostic errors. abstract_id: PUBMED:27183926 Impact of delay in admission on the outcome of critically ill patients presenting to the emergency department of a tertiary care hospital from low income country. Objective: To assess the impact of admission delay on the outcome of critical patients. Methods: The retrospective chart review was done at Aga Khan University Hospital, Karachi, and comprised adult patients visiting the Emergency Department during 2010. Outcome measures assessed were total hospital length of stay, total cost of the visit and in-hospital mortality. Patients admitted within 6 hours of presentation at Emergency Department were defined as non-delayed. Data was analysed using SPSS 19. Results: Of the 49,532 patients reporting at the Emergency Department during the study period, 17,968 (36.3%) were admitted. Of them 2356 (13%) were admitted to special or intensive care units, 1595 (67.7%) of this sub-group stayed in the Emergency Department for >6 hours before being shifted to intensive care. The study focussed on 325 (0.65%) of the total patients; 164 (50.5%) in the non-delayed group and 161 (49.5%) in the delayed group. The admitting diagnosis of myocardial infarction (p=0.00) and acute coronary syndrome (p=0.01) was significantly more common in the non-delayed group compared to other diagnoses like cerebrovascular attacks (p=0.03) which was significantly more common in the delayed group. There was no significant difference in the hospital length of stay between the two groups (p>0.05). The Emergency Department cost was significantly increased in the delayed group (p<0.05), but there was no difference in the overall hospital cost between the groups (p>0.05). Conclusions: There was no significant difference in the delayed and non-delayed groups, but long Emergency Department stays are distressing for both physicians and patients. Answer: Yes, gender and race do make a difference in acute coronary syndrome (ACS) pretest probabilities in the emergency department (ED). A study found that subjective and objective pretest probabilities for ACS were consistently estimated to be lower for both females and nonwhite patients compared to males and white patients, respectively (PUBMED:27862670). Specifically, female patients had significantly lower mean provider visual analog scale (VAS) scores for ACS and lower objective pretest ACS probabilities calculated via the PREtest Consult tool than male patients.
Similarly, nonwhite patients had significantly lower mean provider VAS scores for ACS and lower objective pretest ACS probabilities than white patients. Despite these lower pretest probabilities, there were no significant differences in the rates of ACS diagnosis, chest radiation doses, total costs at 30 days, or 90-day ED returns between females and males or between nonwhite and white patients (PUBMED:27862670). Another study also highlighted gender differences in the management of ischemic chest pain in the ED, with women experiencing greater delays in the completion of the first electrocardiogram and being diagnosed more frequently with atypical chest pain, while men were more often diagnosed with acute coronary syndrome and subjected to invasive treatment (PUBMED:31822652). Furthermore, the GRACE ACS score, a validated risk stratification system for ACS, was assessed to determine if the addition of gender and race could improve its predictability. The study concluded that the original GRACE score did not substantially improve in predicting mortality by including gender and race, although gender and race were significantly associated with in-hospital mortality in univariate analysis (PUBMED:37434908). These findings suggest that gender and race do influence the perception of risk for ACS in the ED, which may have implications for the evaluation and management of chest pain in these patient groups. Further studies are needed to understand the impact of these disparities and to improve the accuracy and equity of ACS risk assessment in the ED.
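As a side note on the screening-performance figures quoted in PUBMED:16631984 above, the reported sensitivity and specificity follow directly from the published counts. The short Python sketch below reproduces the unstructured-estimate figures; it assumes that every enrolled patient was dichotomized at the 2% pretest-probability threshold (the abstract implies this but does not state it outright), and the function name is invented for illustration.

def screening_performance(total_n, total_events, n_low_prob, events_in_low_prob):
    # "Test positive" means pretest probability > 2%, so the low-probability
    # group are the test negatives.
    true_positives = total_events - events_in_low_prob
    true_negatives = n_low_prob - events_in_low_prob
    sensitivity = true_positives / total_events
    specificity = true_negatives / (total_n - total_events)
    return sensitivity, specificity

# Counts from the abstract: 1,114 enrolled, 51 ACS outcomes, 293 patients rated
# <= 2% by unstructured estimate, 2 of whom had ACS.
sens, spec = screening_performance(1114, 51, 293, 2)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # about 96.1% and 27.4%

The same arithmetic applied to the attribute-matching counts gives a specificity slightly above the published 26.1%, which suggests the paper's denominators differ somewhat from this naive reconstruction.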
Instruction: Airway hyperresponsiveness in elite swimmers: is it a transient phenomenon? Abstracts: abstract_id: PUBMED:21167573 Airway hyperresponsiveness in elite swimmers: is it a transient phenomenon? Background: Airway hyperresponsiveness is highly prevalent in competitive swimmers, but it is unknown whether this is transient or persistent. Objectives: To document changes in airway responsiveness and airway inflammation in elite swimmers during intense training and rest. Methods: Nineteen swimmers and 16 healthy controls completed a standardized questionnaire, allergy skin prick tests, exhaled nitric oxide measurement, eucapnic voluntary hyperpnea testing, methacholine challenge, and induced sputum analysis. Testing was performed during intense swimming and after at least 2 weeks of rest. Results: Sixteen swimmers and 13 controls were atopic. Airway responsiveness to methacholine and eucapnic voluntary hyperpnea was significantly higher in swimmers than in controls (P < .0001). A significant decrease in airway responsiveness was observed from training to rest in swimmers only (P < .005). This occurred with both the methacholine challenge (PC20 values of 6.0 mg/mL and 12.8 mg/mL, respectively) and eucapnic voluntary hyperpnea testing (maximum fall in FEV1 after voluntary testing of 14.1% and 10.1%, respectively). Eight of 12 swimmers with airway hyperresponsiveness during intense training had normal airway responsiveness during rest. No airway inflammation occurred, and no significant change in this parameter was observed from training to rest. Conclusion: Training may contribute to the development of airway hyperresponsiveness in elite swimmers, but this seems reversible in many athletes after training cessation for at least 2 weeks. abstract_id: PUBMED:18554704 Airway responsiveness and inflammation in adolescent elite swimmers. Background: Whereas increased airway hyperresponsiveness (AHR) and airway inflammation are well documented in adult elite athletes, it remains uncertain whether the same airway changes are present in adolescents involved in elite sport. Objective: To investigate airway responsiveness and airway inflammation in adolescent elite swimmers. Methods: We performed a cross-sectional study on adolescent elite swimmers (n = 33) and 2 control groups: unselected adolescents (n = 35) and adolescents with asthma (n = 32). The following tests were performed: questionnaire, exhaled nitric oxide (FeNO), spirometry, induced sputum, methacholine challenge, eucapnic voluntary hyperpnea (EVH) test, and exhaled breath condensate pH. Results: There were no differences in FeNO, exhaled breath condensate pH, cellular composition in sputum, or prevalence of AHR to either EVH or methacholine among the 3 groups. When looking at airway responsiveness as a continuous variable, the swimmers were more responsive to EVH than unselected subjects, but less responsive to methacholine compared with subjects with asthma. We found no differences in the prevalence of respiratory symptoms between the swimmers and the unselected adolescents. There was no difference in FeNO, cellular composition of sputum, airway reactivity, or prevalence of having AHR to methacholine and/or EVH between swimmers with and without respiratory symptoms. Conclusion: Adolescent elite swimmers do not have significant signs of airway damage after only a few years of intense training and competition.
This leads us to believe that elite swimmers do not have particularly susceptible airways when they take up competitive swimming when young, but that they develop respiratory symptoms, airway inflammation, and AHR during their swimming careers. abstract_id: PUBMED:20298376 Airway hyperresponsiveness and airway inflammation in elite swimmers. Introduction: The prevalence of respiratory symptoms and airway hyperresponsiveness (AHR) is high in elite athletes; swimmers have one of the highest prevalences. No consensus exists on what airway challenge to use when identifying AHR in elite athletes. Further, knowledge is sparse about when during their active sport career AHR develops and if there is an acute effect on the airway inflammation of a swimming training session. Objectives: We aimed to (i) evaluate the airway response to a methacholine challenge, a eucapnic voluntary hyperpnoea (EVH) test, a field-based exercise test (FBT) and a laboratory-based exercise test (LBT) in adult elite swimmers; (ii) investigate airway responsiveness and airway inflammation in adolescent elite swimmers; and (iii) evaluate the acute effect of a training session in an indoor swimming pool on airway inflammation in adolescent elite swimmers. Materials And Methods: Two groups were studied. (i) In adult elite swimmers (n = 16), we examined airway response in four airway provocation tests: methacholine challenge, EVH test, FBT and LBT. (ii) In adolescent elite swimmers (n = 33), we examined airway responsiveness to EVH and methacholine, and airway inflammation and compared the findings with those in asthmatic adolescents (n = 32) and unselected adolescents (n = 35). Further, we examined the acute effect of swimming on airway inflammation in a subpopulation of the adolescent swimmers (n = 21). Airway inflammation was evaluated using sputum induction, measurements of exhaled nitric oxide (FeNO) and exhaled breath condensate (EBC). Results: Of 16 adult swimmers, eight (50%) had AHR; five of the eight (63%) were identified with the EVH test, four (50%) with the FBT, four (50%) with the LBT and none with the methacholine challenge [provocative dose of methacholine causing a 20% fall in FEV1 (PD20) ≤ 2 micromol]. There were no differences in the prevalence of AHR to either EVH or methacholine (PD20 ≤ 8 micromol) among the adolescent swimmers, the asthmatic adolescents and the unselected adolescents. When looking at airway responsiveness as a continuous variable, the swimmers were more responsive to EVH than were the unselected subjects, and less responsive to methacholine than were the asthmatic adolescents. There were no differences in FeNO, EBC pH or in the cellular composition of the sputum among the three groups. Lung function, FeNO, EBC pH, EBC lactate and differential cell counts in sputum were not acutely affected by the swimming session. Conclusion: We found that the EVH test is the most sensitive test for identifying AHR in elite athletes when using the diagnostic criteria set forward by the International Olympic Committee. Whereas a high prevalence of AHR in adult swimmers was found, the prevalence of AHR in the adolescent swimmers did not differ from that in unselected adolescents nor did the adolescent swimmers have signs of airway inflammation. There was no acute effect of a swimming training session in an indoor chlorinated pool on lung function or airway composition in adolescent swimmers. We believe that elite swimming results in airway changes with AHR and airway inflammation.
abstract_id: PUBMED:22459716 High prevalence of asthma in Danish elite canoe- and kayak athletes. Introduction: Asthma is common in elite athletes, but our knowledge of asthma in elite canoe and kayak athletes is limited. The aim of the present prospective cross-sectional study was therefore to investigate the prevalence of asthma, including asthma-like symptoms, exhaled nitric oxide, and airway reactivity to mannitol in Danish elite canoe and kayak athletes. Material And Methods: The study group consisted of 29 (of 33 eligible) elite athletes aged 17-43 years, and the examination programme consisted of questionnaires, including the Asthma Control Questionnaire, fraction of exhaled nitric oxide (FENO), spirometry and airway reactivity to mannitol. Asthma was defined as a history of doctor-diagnosed asthma and/or elevated FENO and airway reactivity. Results: Seven of the elite athletes (24.1%) were found to have asthma, including four subjects with previously doctor-diagnosed asthma. Of the four athletes (all treated with inhaled corticosteroids) with doctor-diagnosed asthma, all reported asthma symptoms and two had elevated FENO, but none had airway hyperresponsiveness (AHR) to mannitol. All three athletes with previously undiagnosed asthma had elevated FENO and AHR to mannitol, but reported no asthma-like symptoms. Conclusion: Asthma is common in elite canoe and kayak athletes, and classical signs of asthmatic airway inflammation are also found in asymptomatic athletes. Funding: not relevant. Trial Registration: not relevant. abstract_id: PUBMED:25202844 Predictors of airway hyperresponsiveness in elite athletes. Introduction: Elite athletes frequently experience asthma and airway hyperresponsiveness (AHR). We aimed to investigate predictors of airway pathophysiology in a group of unselected elite summer-sport athletes, training for the summer 2008 Olympic Games, including markers of airway inflammation, systemic inflammation, and training intensity. Methods: Fifty-seven Danish elite summer-sport athletes with and without asthma symptoms all gave a blood sample for measurements of high-sensitivity C-reactive protein (hs-CRP), interleukin-6 (IL-6), interleukin-8 (IL-8), and tumor necrosis factor alpha (TNF-α), completed a respiratory questionnaire, and underwent spirometry. Bronchial challenges with mannitol were performed in all 57 athletes, and 47 agreed to perform an additional methacholine provocation. Results: Based on a physician's diagnosis, 18 (32%) athletes were concluded to be asthmatic. Asthmatic subjects trained more hours per week than the 39 nonasthmatics (median (min-max): 25 h/wk (14-30) versus 20 h/wk (11-30), P = 0.001). AHR to both methacholine and mannitol (dose response slope) increased with the number of weekly training hours (r = 0.43, P = 0.003, and r = 0.28, P = 0.034, respectively). Serum levels of IL-6, IL-8, TNF-α, and hs-CRP were similar between asthmatics and nonasthmatics. However, there was a positive association between the degree of AHR to methacholine and serum levels of TNF-α (r = 0.36, P = 0.04). Fifteen out of 18 asthmatic athletes were challenged with both agents. In these subjects, no association was found between the levels of AHR to mannitol and methacholine (r = 0.032, P = 0.91). Conclusion: AHR in elite athletes is related to the amount of weekly training and the level of serum TNF-α. No association was found between the level of AHR to mannitol and methacholine in the asthmatic athletes.
abstract_id: PUBMED:27408633 The World Anti-Doping Code: can you have asthma and still be an elite athlete? Key Points: The World Anti-Doping Code (the Code) does place some restrictions on prescribing inhaled β2-agonists, but these can be overcome without jeopardising the treatment of elite athletes with asthma. While the Code permits the use of inhaled glucocorticoids without restriction, oral and intravenous glucocorticoids are prohibited, although a mechanism exists that allows them to be administered for acute severe asthma. Although asthmatic athletes achieved outstanding sporting success during the 1950s and 1960s before any anti-doping rules existed, since introduction of the Code's policies on some drugs to manage asthma, results at the Olympic Games have revealed that athletes with confirmed asthma/airway hyperresponsiveness (AHR) have outperformed their non-asthmatic rivals. It appears that years of intensive endurance training can provoke airway injury, AHR and asthma in athletes without any past history of asthma. Although further research is needed, it appears that these consequences of airway injury may abate in some athletes after they have ceased intensive training. The World Anti-Doping Code (the Code) has not prevented asthmatic individuals from becoming elite athletes. This review examines those sections of the Code that are relevant to respiratory physicians who manage elite and sub-elite athletes with asthma. The restrictions that the Code places or may place on the prescription of drugs to prevent and treat asthma in athletes are discussed. In addition, the means by which respiratory physicians are able to treat their elite asthmatic athlete patients with drugs that are prohibited in sport are outlined, along with some of the pitfalls in such management and how best to prevent or minimise them. abstract_id: PUBMED:35324645 Overuse of Short-Acting Beta-2 Agonists (SABAs) in Elite Athletes: Hypotheses to Explain It. The use of short-acting beta-2 agonists (SABAs) is more common in elite athletes than in the general population, especially in endurance sports. The World Anti-Doping Code places some restrictions on prescribing inhaled β2-agonists. These drugs are used in respiratory diseases (such as asthma) that might reduce athletes' performances. Recently, studies based on the results of the Olympic Games revealed that athletes with confirmed asthma/airway hyperresponsiveness (AHR) or exercise-induced bronchoconstriction (EIB) outperformed their non-asthmatic rivals. This overuse of SABA by high-level athletes, therefore, raises some questions, and many explanatory hypotheses are proposed. Asthma and EIB have a high prevalence in elite athletes, especially within endurance sports. It appears that many years of intensive endurance training can provoke airway injury, EIB, and asthma in athletes without any past history of respiratory diseases. Some sports lead to a higher risk of asthma than others due to the hyperventilation required over long periods of time and/or the high environmental exposure while performing the sport (for example swimming and the associated chlorine exposure). Inhaled corticosteroids (ICS) have a low efficacy in the treatment of asthma and EIB in elite athletes, leading to a much greater use of SABAs. A significant proportion of these high-level athletes suffer from non-allergic asthma, involving the Th1-Th17 pathway. abstract_id: PUBMED:18685536 Airway responses to eucapnic hyperpnea, exercise, and methacholine in elite swimmers.
Purpose: The International Olympic Committee Medical Commission (IOC-MC) requires athletes to provide the result of an objective test to support a diagnosis of asthma or exercise-induced bronchoconstriction (EIB) if they want to inhale a beta-2-agonist. The purpose of the study was to evaluate the airway response to a methacholine challenge and to hyperpnea induced by exercise in the field and in the laboratory or that induced voluntarily by eucapnic hyperpnea in a group of female elite swimmers. Methods: Sixteen female nonasthmatic elite swimmers performed a eucapnic voluntary hyperpnea (EVH) test, a field-based exercise test (FBT), a laboratory-based exercise test (LBT), and a methacholine challenge. The criteria suggested by the IOC-MC were used to define a positive response to the challenges (EVH, field test, and laboratory test: minimum 10% decrease in FEV1; methacholine: PD20 ≤ 2 micromol). Results: Eight swimmers (50%) had at least one positive test to hyperpnea. Five were identified with the EVH test, four with FBT, and four with LBT. None were identified using methacholine. Three swimmers with airway hyperresponsiveness to exercise would have been identified using a higher cutoff for methacholine (PD20 ≤ 8 micromol). Conclusions: The EVH test is the test that diagnoses most swimmers with an abnormal response to hyperpnea, but not all cases of EIB are identified with the EVH test. Performing a methacholine test using IOC-MC's cutoff value does not improve the chances of diagnosing EIB. We recommend performing the EVH test when diagnosing and evaluating EIB in elite swimmers and if EVH test negative then proceeding to a strenuous LBT. abstract_id: PUBMED:22522585 Respiratory health of elite athletes - preventing airway injury: a critical review. Elite athletes, particularly those engaged in endurance sports and those exposed chronically to airborne pollutants/irritants or allergens, are at increased risk for upper and lower airway dysfunction. Airway epithelial injury may be caused by dehydration and physical stress applied to the airways during severe exercise hyperpnoea and/or by inhalation of noxious agents. This is thought to initiate an inflammatory cascade/repair process that, ultimately, could lead to airway hyperresponsiveness (AHR) and asthma in susceptible athletes. The authors review the evidence relating to prevention or reduction of the risk of AHR/asthma development. Appropriate measures should be implemented when athletes exercise strenuously in an attempt to attenuate the dehydration stress and reduce the exposure to noxious airborne agents. Environmental interventions are the most important. Non-pharmacological strategies can assist, but currently, pharmacological measures have not been demonstrated to be effective. Whether early prevention of airway injury in elite athletes can prevent or reduce progression to AHR/asthma remains to be established. abstract_id: PUBMED:23830133 Air quality and exercise-induced bronchoconstriction in elite athletes. A higher prevalence of airway hyperresponsiveness, airway remodeling, and asthma has been identified among athletes who compete and train in environmental conditions of cold dry air and/or high air pollution. Repeated long-duration exposure to cold/dry air at high minute ventilation rates can cause airway damage. Competition or training at venues close to busy roadways, or in indoor ice arenas or chlorinated swimming pools, harbors a risk for acute and chronic airway disorders from high pollutant exposure.
This article discusses the effects of these harsh environments on the airways, and summarizes potential mechanisms and prevalence of airway disorders in elite athletes. Answer: The research suggests that airway hyperresponsiveness (AHR) in elite swimmers may indeed be a transient phenomenon. A study on elite swimmers documented changes in airway responsiveness and airway inflammation during intense training and after a period of rest. The results showed a significant decrease in airway responsiveness from training to rest in swimmers only, with both methacholine challenge and eucapnic voluntary hyperpnea testing indicating reversibility in many athletes after at least 2 weeks of rest from training. Notably, eight of twelve swimmers with AHR during intense training had normal airway responsiveness during rest, and no significant change in airway inflammation was observed from training to rest (PUBMED:21167573). This finding is consistent with the understanding that training may contribute to the development of AHR in elite swimmers, but it seems reversible after a cessation of training for a period. However, it is important to note that while many athletes may experience a reversal of AHR after rest, this may not be the case for all swimmers, and individual responses can vary. Additionally, the studies did not find significant signs of airway damage in adolescent elite swimmers after only a few years of intense training and competition, suggesting that AHR develops over the course of an athlete's swimming career rather than being present from the onset (PUBMED:18554704). In summary, AHR in elite swimmers appears to be a reversible condition for many athletes following a period of rest from intense training, although further research is needed to fully understand the long-term implications and individual variability in the response of airway responsiveness to training and rest.
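One practical detail running through the swimmer abstracts above (PUBMED:18685536, PUBMED:20298376) is the positivity criterion for the eucapnic voluntary hyperpnea test: a fall in FEV1 of at least 10% from baseline. That criterion is simple arithmetic on pre- and post-challenge spirometry, as in the minimal Python sketch below; the 10% threshold is taken from the abstracts, while the FEV1 values and function names are invented for illustration.

def fev1_fall_percent(baseline_fev1_l, lowest_post_fev1_l):
    # Percent fall in FEV1 relative to the pre-challenge baseline.
    return 100.0 * (baseline_fev1_l - lowest_post_fev1_l) / baseline_fev1_l

def evh_positive(baseline_fev1_l, lowest_post_fev1_l, threshold_percent=10.0):
    return fev1_fall_percent(baseline_fev1_l, lowest_post_fev1_l) >= threshold_percent

# Hypothetical swimmer: baseline FEV1 4.20 L, lowest post-challenge FEV1 3.65 L.
fall = fev1_fall_percent(4.20, 3.65)
print(f"fall = {fall:.1f}%, EVH positive: {evh_positive(4.20, 3.65)}")  # about 13.1%, True

In practice the decision also depends on when and how often FEV1 is measured after the challenge, which this sketch does not model.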
Instruction: Does a homeopathic ultramolecular dilution of Thyroidinum 30cH affect the rate of body weight reduction in fasting patients? Abstracts: abstract_id: PUBMED:12422922 Does a homeopathic ultramolecular dilution of Thyroidinum 30cH affect the rate of body weight reduction in fasting patients? A randomised placebo-controlled double-blind clinical trial. Objective: To test whether an ultramolecular dilution of homeopathic Thyroidinum has an effect over placebo on weight reduction of fasting patients in so-called 'fasting crisis'. Design: Randomised, placebo-controlled, double-blind, parallel group, monocentre study. Setting/location: Hospital for internal and complementary medicine in Munich, Germany. Subjects: Two hundred and eight fasting patients encountering a stagnation or increase of weight after a weight reduction of at least 100 g/day in the preceding 3 days. Intervention: One oral dose of Thyroidinum 30cH (preparation of thyroid gland) or placebo. Outcome Measures: Main outcome measure was reduction of body weight 2 days after treatment. Secondary outcome measures were weight reduction on days 1 and 3, 15 complaints on days 1-3, and 34 laboratory findings on days 1-2 after treatment. Results: Weight reduction on the second day after medication in the Thyroidinum group was less than in the placebo group (mean difference 92 g, 95% confidence interval 7-176 g, P=0.034). Adjustment for baseline differences in body weight and rate of weight reduction before medication, however, weakened the result to a non-significant level (P=0.094). There were no differences between groups in the secondary outcome measures. Conclusions: Patients receiving Thyroidinum had less weight reduction on day 2 after treatment than those receiving placebo. Yet, since no significant differences were found in other outcomes and since adjustment for baseline differences rendered the difference for the main outcome measure non-significant, this result must be interpreted with caution. Post hoc evaluation of the data, however, suggests that by predefining the primary outcome measure in a different way, an augmented reduction of weight on day 1 after treatment with Thyroidinum may be demonstrated. Both results would be compatible with homeopathic doctrine (primary and secondary effect) as well as with findings from animal research. abstract_id: PUBMED:26374764 Effects of intermittent fasting on body composition and clinical health markers in humans. Intermittent fasting is a broad term that encompasses a variety of programs that manipulate the timing of eating occasions by utilizing short-term fasts in order to improve body composition and overall health. This review examines studies conducted on intermittent fasting programs to determine if they are effective at improving body composition and clinical health markers associated with disease. Intermittent fasting protocols can be grouped into alternate-day fasting, whole-day fasting, and time-restricted feeding. Alternate-day fasting trials of 3 to 12 weeks in duration appear to be effective at reducing body weight (≈3%-7%), body fat (≈3-5.5 kg), total cholesterol (≈10%-21%), and triglycerides (≈14%-42%) in normal-weight, overweight, and obese humans. Whole-day fasting trials lasting 12 to 24 weeks also reduce body weight (≈3%-9%) and body fat, and favorably improve blood lipids (≈5%-20% reduction in total cholesterol and ≈17%-50% reduction in triglycerides). Research on time-restricted feeding is limited, and clear conclusions cannot be made at present. 
Future studies should examine long-term effects of intermittent fasting and the potential synergistic effects of combining intermittent fasting with exercise. abstract_id: PUBMED:27279831 Ramadan Fasting Decreases Body Fat but Not Protein Mass. Background: Many studies have shown various results regarding the effects of Ramadan fasting on weight and body composition in healthy individuals. Objectives: This study aimed to evaluate the effect of Ramadan fasting on body composition in healthy Indonesian medical staff. Patients And Methods: The longitudinal study was performed during and after Ramadan fasting in 2013 (August to October). Forty-three medical staff members (physicians, nurses and nutritionists) at the Internal Medicine Ward of the Dr. Cipto Mangunkusumo General Hospital were measured to compare their calorie intake, weight, body mass index, waist-to-hip ratio (WHR), and body composition, including body fat, protein, minerals and water, on the first and 28th days of Ramadan and also 4-5 weeks after Ramadan fasting. Measurements were obtained for all 43 subjects on the 28th day of Ramadan, but they were obtained for only 25 subjects 4-5 weeks after Ramadan. Results: By the 28th day of Ramadan, it was found that the body weight, BMI, body fat, water and mineral measures had decreased significantly (-0.874 ± 0.859 kg, P < 0.001; -0.36 ± 0.371 kg/m2, P < 0.001; -0.484 ± 0.597 kg, P < 0.001; -0.293 ± 0.486 kg, P = 0.001; -0.054 ± 0.059 kg, P < 0.001, respectively). Protein body mass and calorie intake did not significantly change (-0.049 ± 0.170 kg, P = 0.561; 12.94 ± 760.608 Kcal, P = 0.082, respectively). By 4-5 weeks after Ramadan, body weight and composition had returned to the same levels as on the first day of Ramadan. Conclusions: Ramadan fasting resulted in weight loss, even if it was only a temporary effect, as the weight was quickly regained within one month after fasting. The catabolic state, which is related to protein loss, was not triggered during Ramadan fasting. Further research is needed to evaluate the effects of weight loss during Ramadan fasting in healthy individuals. abstract_id: PUBMED:34225340 Impact of Fasting on Cardiovascular Outcomes in Patients With Hypertension. Abstract: Fasting has been frequently practiced for religious or medical purposes worldwide. However, limited literature assesses the impact of different fasting patterns on the physiologic and cardiac-related parameters in patients with hypertension. This review aims to examine the effect of fasting on cardiovascular outcomes in hypertensive patients. Medline, Embase, and Cochrane library were systematically screened until March 2021 for observational prospective cohorts investigating the effect of fasting on cardiovascular outcomes. Articles were assessed by searching for hypertension and fasting, both as Medical Subject Headings (MeSH) terms and text words. The review included studies assessing Ramadan, intermittent, and water-only fasting. Water-only fasting reduces body weight, blood pressure, and lipolytic activity of fasting hypertensive patients without affecting average heart rate. Ramadan fasting enhances lipid profile, although it shows conflicting results for body weight, blood pressure, and heart rate variability.
Considering the limited studies in this field, further research should be conducted to support the clinical impact of fasting on the cardiovascular health of patients with hypertension. abstract_id: PUBMED:38201996 Intermittent Fasting: Does It Affect Sports Performance? A Systematic Review. Intermittent fasting is one of the most popular types of diet at the moment because it is an effective nutritional strategy in terms of weight loss. The main objective of this review is to analyze the effects that intermittent fasting has on sports performance. We analyzed physical capacities: aerobic capacity, anaerobic capacity, strength, and power, as well as their effect on body composition. For this, a bibliographic search was carried out in several databases where 25 research articles were analyzed to clarify these objectives. Inclusion criteria: dates between 2013 and present, free full texts, studies conducted in adult human athletes, English and/or Spanish languages, and if it has been considered that intermittent fasting is mainly linked to sports practice and that this obtains a result in terms of performance or physical capacities. This review was registered in PROSPERO with code ref. 407024, and an evaluation of the quality or risk of bias was performed. After this analysis, results were obtained regarding the improvement of body composition and the maintenance of muscle mass. An influence of intermittent fasting on sports performance and body composition is observed. It can be concluded that intermittent fasting provides benefits in terms of body composition without reducing physical performance, maintenance of lean mass, and improvements in maximum power. But despite this, it is necessary to carry out new studies focusing on the sports field since the samples have been very varied. Additionally, the difference in hours of intermittent fasting should be studied, especially in the case of overnight fasting. abstract_id: PUBMED:463798 Loss of body nitrogen on fasting. An analysis of the change in total body nitrogen during fasting shows that it declines exponentially, a small fraction being lost rapidly (t1/2 of a few days), and the remainder being lost slowly (t1/2 of many months). The obese faster loses N, and weight, at a slower relative rate than the nonobese; and the ratio of N loss to weight loss during an extended fast is inversely related to body fat content, being about 20 g/kg in the nonobese and about 10 g/kg in those with body fat burdens of 50 kg or more. The loss of body N on a low protein-calorie adequate diet can also be described in exponential terms, and this function allows an estimate to be made of the N requirement. abstract_id: PUBMED:36224641 Effect of intermittent fasting 5:2 on body composition and nutritional intake among employees with obesity in Jakarta: a randomized clinical trial. Objective: This study aimed to determine the effect of intermittent fasting 5:2 on body composition in employees with obesity in Jakarta. Results: Fifty participants were included; 25 were allocated to the fasting group and 25 to the control group. There was no significant change in fat mass, fat-free mass, skeletal muscle, and BMI (p > 0.05). Significant in-group changes were observed in body weight (p = 0.023) and BMI (p = 0.018) in the fasting group. Dietary intake was similar before and during the intervention. The reduction in macronutrient intake resulted in a statistically significant difference in carbohydrate, protein, and fat intake in the two groups (p < 0.05).
Intermittent fasting 5:2 results in weight loss but does not affect fat mass and fat-free mass reductions. None of the between-group differences were clinically relevant. Trial Registration: ClinicalTrials.gov with ID: NCT04319133 registered on 24 March 2020. abstract_id: PUBMED:37203329 Efficacy and safety of modified fasting therapy for weight loss in 2054 hospitalized patients. Objective: The aim of this study was to evaluate the efficacy and safety of modified fasting therapy, and a retrospective study was conducted to analyze changes in clinical indicators of hospitalized fasting patients. Methods: A total of 2054 hospitalized fasting patients were enrolled in this observational study. All participants underwent 7 days of modified fasting therapy. The clinical efficacy biomarkers, safety indicators, and body composition were measured before and after fasting. Results: The modified fasting therapy reduced body weight, BMI, abdominal circumference, systolic blood pressure, and diastolic blood pressure significantly. Blood glucose and indicators of body composition were improved to various extents (all p < 0.05). There was a small increase in liver function, kidney function, uric acid, electrolytes, blood count, coagulation, and uric biomarkers. Subgroup analysis results showed that cardiovascular diseases benefited from modified fasting therapy. Conclusions: At present this study is the largest retrospective population-based study about modified fasting therapy. The results from 2054 patients showed that the modified fasting therapy lasting 7 days was efficient and safe. It led to improvements in physical health and body weight-associated indicators, as well as body composition and relevant cardiovascular risk factors. abstract_id: PUBMED:1163521 The effect of fasting on the rate of intestinal drug absorption in rats: preliminary studies. The absorption rates of two model drugs, salicylate and antipyrine, from the small intestines of rats deprived of food for various periods of time were compared with rats fed ad libitum. Fasting reduced the absorption rate constants for both drugs with the salicylate rates being depressed more severely than the rates for antipyrine. Intestinal mass studies showed that the weight/length ratio of the rat intestine is progressively decreased as fasting is prolonged up to 96 hr. The intestinal weight loss was much more pronounced than the total body weight loss. The loss in intestinal weight and the observed decrease in drug absorption rate are believed to be related to the inhibition of intestinal cell proliferation due to fasting, resulting in a decreased absorptive surface and reduced mucosal cell viability. abstract_id: PUBMED:31333729 Cortical thickness, neurocognitive, and body composition effects of fasting during Ramadan. Background: We aimed to study the anatomical, physiological, and cognitive function of healthy individuals practicing fasting during the month of Ramadan. Measurements were taken 1 week before and 2 weeks after Ramadan fasting. Materials And Methods: Twelve healthy male individuals (mean age ± standard error of the mean: 34.3 ± 2.9 years; body mass index: 26.26 ± 1.4 kg/m2) were assessed for various parameters before and after Ramadan fasting. All the tests were performed in the morning. Body composition characteristics were assessed by bioelectrical impedance analysis using a commercially available body composition analyzer.
For neurocognitive analysis, participants underwent the stop signal task (SST), pattern recognition memory task (PRM), and spatial working memory strategy (SWM) from the Cambridge Neuropsychological Test Automated Battery. T1-weighted, 1 mm-thick magnetic resonance images were also acquired. Results: Anthropometric analysis showed a significant decrease in body weight, fat-free mass (FFM), trunk FFM, and trunk predicted muscle mass, while the other body composition parameters did not exhibit any changes. The stop signal reaction time (SSRT) latency (ms) (P > 0.05) and PRM did not show any significant difference before and after fasting. SWM task (P < 0.05) improved significantly after fasting. Cortical thickness data of the whole brain were not significantly different after fasting at any brain location. There was a significant correlation between the left amygdala and the SWM strategy (r2 = 0.518) and between fat and brain segmentation volume (r2 = 0.375). Conclusion: Our pilot data suggest that Ramadan fasting leads to weight loss and FFM reductions and improves cognitive function. Answer: The study described in abstract PUBMED:12422922 investigated whether a homeopathic ultramolecular dilution of Thyroidinum 30cH affects the rate of body weight reduction in fasting patients. The randomized placebo-controlled double-blind clinical trial included 208 fasting patients who had encountered a stagnation or increase of weight after a weight reduction of at least 100 g/day in the preceding 3 days. The patients received either one oral dose of Thyroidinum 30cH or a placebo. The main outcome measure was the reduction of body weight 2 days after treatment. The results showed that weight reduction on the second day after medication in the Thyroidinum group was less than in the placebo group. However, after adjusting for baseline differences in body weight and rate of weight reduction before medication, the result was weakened to a non-significant level. There were no differences between groups in the secondary outcome measures. The study concluded that while patients receiving Thyroidinum had less weight reduction on day 2 after treatment than those receiving placebo, the differences were not significant after adjusting for baseline differences, and thus the result must be interpreted with caution. The study suggests that the primary outcome measure could be predefined differently to potentially demonstrate an augmented reduction of weight on day 1 after treatment with Thyroidinum, which would be compatible with homeopathic doctrine as well as findings from animal research. However, these results are not conclusive and should be approached with caution due to the non-significant findings after adjustment for baseline differences (PUBMED:12422922).
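A methodological point worth unpacking in the PUBMED:12422922 record above is why the crude between-group difference (P=0.034) lost significance once baseline body weight and the pre-treatment rate of weight loss were adjusted for. The Python simulation below illustrates that general phenomenon only; the data, effect sizes, and model are invented and are not the trial's, and the trial's exact adjustment method is not described beyond "adjustment for baseline differences".

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 100
group = np.repeat([0, 1], n)                         # 0 = placebo, 1 = verum (labels invented)
baseline = rng.normal(100, 10, 2 * n) + 6 * group    # groups imbalanced at baseline
outcome = 0.5 * baseline + rng.normal(0, 5, 2 * n)   # outcome driven by baseline only

# A crude comparison tends to look "significant" because of the baseline imbalance.
print("crude t-test p =", stats.ttest_ind(outcome[group == 1], outcome[group == 0]).pvalue)

# Adjusting for baseline typically removes the spurious group effect.
df = pd.DataFrame({"outcome": outcome, "group": group, "baseline": baseline})
fit = smf.ols("outcome ~ group + baseline", data=df).fit()
print("baseline-adjusted p for group =", fit.pvalues["group"])

This is only a caricature of confounding by baseline imbalance; whether adjustment strengthens or weakens an estimate in a real trial depends on how the covariates relate to the outcome.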
Instruction: Does age matter? Abstracts: abstract_id: PUBMED:34118094 A history of previous childbirths is linked to women's white matter brain age in midlife and older age. Maternal brain adaptations occur in response to pregnancy, but little is known about how parity impacts white matter and white matter ageing trajectories later in life. Utilising global and regional brain age prediction based on multi-shell diffusion-weighted imaging data, we investigated the association between previous childbirths and white matter brain age in 8,895 women in the UK Biobank cohort (age range = 54-81 years). The results showed that number of previous childbirths was negatively associated with white matter brain age, potentially indicating a protective effect of parity on white matter later in life. Both global white matter and grey matter brain age estimates showed unique contributions to the association with previous childbirths, suggesting partly independent processes. Corpus callosum contributed uniquely to the global white matter association with previous childbirths, and showed a stronger relationship relative to several other tracts. While our findings demonstrate a link between reproductive history and brain white matter characteristics later in life, longitudinal studies are required to establish causality and determine how parity may influence women's white matter trajectories across the lifespan. abstract_id: PUBMED:27919183 Accelerated Gray and White Matter Deterioration With Age in Schizophrenia. Objective: Although brain changes in schizophrenia have been proposed to mirror those found with advancing age, the trajectory of gray matter and white matter changes during the disease course remains unclear. The authors sought to measure whether these changes in individuals with schizophrenia remain stable, are accelerated, or are diminished with age. Method: Gray matter volume and fractional anisotropy were mapped in 326 individuals diagnosed with schizophrenia or schizoaffective disorder and in 197 healthy comparison subjects aged 20-65 years. Polynomial regression was used to model the influence of age on gray matter volume and fractional anisotropy at a whole-brain and voxel level. Between-group differences in gray matter volume and fractional anisotropy were regionally localized across the lifespan using permutation testing and cluster-based inference. Results: Significant loss of gray matter volume was evident in schizophrenia, progressively worsening with age to a maximal loss of 8% in the seventh decade of life. The inferred rate of gray matter volume loss was significantly accelerated in schizophrenia up to middle age and plateaued thereafter. In contrast, significant reductions in fractional anisotropy emerged in schizophrenia only after age 35, and the rate of fractional anisotropy deterioration with age was constant and best modeled with a straight line. The slope of this line was 60% steeper in schizophrenia relative to comparison subjects, indicating a significantly faster rate of white matter deterioration with age. The rates of reduction of gray matter volume and fractional anisotropy were significantly faster in males than in females, but an interaction between sex and diagnosis was not evident. Conclusions: The findings suggest that schizophrenia is characterized by an initial, rapid rate of gray matter loss that slows in middle life, followed by the emergence of a deficit in white matter that progressively worsens with age at a constant rate. 
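The schizophrenia study above (PUBMED:27919183) models age effects by fitting polynomial regressions and asking whether a curved trajectory fits better than a straight line (it did for gray matter volume but not for fractional anisotropy). The Python sketch below shows that model-comparison step on synthetic cross-sectional data using plain numpy; the coefficients, noise level, and sample size are invented and carry no relation to the study's data.

import numpy as np

rng = np.random.default_rng(1)
age = rng.uniform(20, 65, 300)
# Synthetic gray-matter-like measure with a genuinely curved decline plus noise.
gmv = 800 - 2.0 * age - 0.03 * (age - 40) ** 2 + rng.normal(0, 15, age.size)

def sse(y, yhat):
    return float(np.sum((y - yhat) ** 2))

linear = np.polyfit(age, gmv, deg=1)
quadratic = np.polyfit(age, gmv, deg=2)
print("SSE, linear fit   :", round(sse(gmv, np.polyval(linear, age))))
print("SSE, quadratic fit:", round(sse(gmv, np.polyval(quadratic, age))))

In a real analysis the extra polynomial term would be kept only if it improves fit by more than chance, for example via an F-test or an information criterion, rather than by raw error alone.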
abstract_id: PUBMED:34048307 Effects of Age on White Matter Microstructure in Children With Neurofibromatosis Type 1. Children with neurofibromatosis type 1 (NF1) often report cognitive challenges, though the etiology of such remains an area of active investigation. With the advent of treatments that may affect white matter microstructure, understanding the effects of age on white matter aberrancies in NF1 becomes crucial in determining the timing of such therapeutic interventions. A cross-sectional study was performed with diffusion tensor imaging from 18 NF1 children and 26 age-matched controls. Fractional anisotropy was determined by region of interest analyses for both groups over the corpus callosum, cingulate, and bilateral frontal and temporal white matter regions. Two-way analyses of variance were done with both ages combined and age-stratified into early childhood, middle childhood, and adolescence. Significant differences in fractional anisotropy between NF1 and controls were seen in the corpus callosum and frontal white matter regions when ages were combined. When stratified by age, we found that this difference was largely driven by the early childhood (1-5.9 years) and middle childhood (6-11.9 years) age groups, whereas no significant differences were appreciable in the adolescence age group (12-18 years). This study demonstrates age-related effects on white matter microstructure disorganization in NF1, suggesting that the appropriate timing of therapeutic intervention may be in early childhood. abstract_id: PUBMED:31271249 Paternal age contribution to brain white matter aberrations in autism spectrum disorder. Aim: Although advanced parental age holds an increased risk for autism spectrum disorder (ASD), its role as a potential risk factor for an atypical white matter development underlying the pathophysiology of ASD has not yet been investigated. The current study was aimed to detect white matter disparities in ASD, and further investigate the relationship of paternal and maternal age at birth with such disparities. Methods: Thirty-nine adult males with high-functioning ASD and 37 typically developing (TD) males were analyzed in the study. The FMRIB Software Library and tract-based spatial statistics were utilized to process and analyze the diffusion tensor imaging data. Results: Subjects with ASD exhibited significantly higher mean diffusivity (MD) and radial diffusivity (RD) in white matter fibers, including the association (inferior fronto-occipital fasciculus, right inferior longitudinal fasciculus, superior longitudinal fasciculi, uncinate fasciculus, and cingulum), commissural (forceps minor), and projection tracts (anterior thalamic radiation and right corticospinal tract) compared to TD subjects (Padjusted < 0.05). No differences were seen in either fractional anisotropy or axial diffusivity. Linear regression analyses assessing the relationship between parental ages and the white matter aberrations revealed a positive correlation between paternal age (PA), but not maternal age, and both MD and RD in the affected fibers (Padjusted < 0.05). Multiple regression showed that only PA was a predictor of both MD and RD. Conclusion: Our findings suggest that PA contributes to the white matter disparities seen in individuals with ASD compared to TD subjects. abstract_id: PUBMED:24361462 Differential vulnerability of gray matter and white matter to intrauterine growth restriction in preterm infants at 12 months corrected age.
Intrauterine growth restriction (IUGR) is associated with a high risk of abnormal neurodevelopment. Underlying neuroanatomical substrates are partially documented. We hypothesized that at 12 months preterm infants would evidence specific white-matter microstructure alterations and gray-matter differences induced by severe IUGR. Twenty preterm infants with IUGR (26-34 weeks of gestation) were compared with 20 term-born infants and 20 appropriate for gestational age preterm infants of similar gestational age. Preterm groups showed no evidence of brain abnormalities. At 12 months, infants were scanned sleeping naturally. Gray-matter volumes were studied with voxel-based morphometry. White-matter microstructure was examined using tract-based spatial statistics. The relationship between diffusivity indices in white matter, gray matter volumes, and perinatal data was also investigated. Gray-matter decrements attributable to IUGR comprised amygdala, basal ganglia, thalamus and insula bilaterally, left occipital and parietal lobes, and right perirolandic area. Gray-matter volumes positively correlated with birth weight exclusively. Preterm infants had reduced FA in the corpus callosum, and increased FA in the anterior corona radiata. Additionally, IUGR infants had increased FA in the forceps minor, internal and external capsules, uncinate and fronto-occipital white matter tracts. Increased axial diffusivity was observed in several white matter tracts. Fractional anisotropy positively correlated with birth weight and gestational age at birth. These data suggest that IUGR differentially affects gray and white matter development preferentially affecting gray matter. At 12 months IUGR is associated with a specific set of structural gray-matter decrements. White matter follows an unusual developmental pattern, and is apparently affected by IUGR and prematurity combined. abstract_id: PUBMED:36358443 Interpretation for Individual Brain Age Prediction Based on Gray Matter Volume. The relationship between age and the central nervous system (CNS) in humans has been a classical issue that has aroused extensive attention. Especially for individuals, it is of far greater importance to clarify the mechanisms between CNS and age. The primary goal of existing methods is to use MR images to derive high-accuracy predictions for age or degenerative diseases. However, the associated mechanisms between the images and the age have rarely been investigated. In this paper, we address the correlation between gray matter volume (GMV) and age, both in terms of gray matter themselves and their interaction network, using interpretable machine learning models for individuals. Our goal is not only to predict age accurately but more importantly, to explore the relationship between GMV and age. In addition to targeting each individual, we also investigate the dynamic properties of gray matter and their interaction network with individual age. The results show that the mean absolute error (MAE) of age prediction is 7.95 years. More notably, specific locations of gray matter and their interactions play different roles in age, and these roles change dynamically with age. The proposed method is a data-driven approach, which provides a new way to study aging mechanisms and even to diagnose degenerative brain diseases. abstract_id: PUBMED:30323144 Prevalence of white matter hyperintensities increases with age. 
White matter hyperintensities (WMHs) that arise with age and/or atherosclerosis constitute a heterogeneous disorder in the white matter of the brain. However, the relationship between age-related risk factors and the prevalence of WMHs is still obscure. More clinical data is needed to confirm the relationship between age and the prevalence of WMHs. We collected 836 patients, who were treated in the Renmin Hospital, Hubei University of Medicine, China from January 2015 to February 2016, for a case-controlled retrospective analysis. According to T2-weighted magnetic resonance imaging results, all patients were divided into a WMHs group (n = 333) and a non-WMHs group (n = 503). The WMHs group contained 159 males and 174 females. The prevalence of WMHs increased with age and was associated with age-related risk factors, such as cardiovascular diseases, smoking, drinking, diabetes, hypertension and history of cerebral infarction. There was no significant difference in sex, education level, hyperlipidemia and hyperhomocysteinemia among the different age ranges. These findings confirm that age is an independent risk factor for the prevalence and severity of WMHs. The age-related risk factors enhance the occurrence of WMHs. abstract_id: PUBMED:26446690 Age exacerbates HIV-associated white matter abnormalities. Both HIV disease and advanced age have been associated with alterations to cerebral white matter, as measured with white matter hyperintensities (WMH) on fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging (MRI), and more recently with diffusion tensor imaging (DTI). This study investigates the combined effects of age and HIV serostatus on WMH and DTI measures, as well as the relationships between these white matter measures, in 88 HIV seropositive (HIV+) and 49 seronegative (HIV-) individuals aged 23-79 years. A whole-brain volumetric measure of WMH was quantified from FLAIR images using a semi-automated process, while fractional anisotropy (FA) was calculated for 15 regions of a whole-brain white matter skeleton generated using tract-based spatial statistics (TBSS). An age by HIV interaction was found indicating a significant association between WMH and older age in HIV+ participants only. Similarly, significant age by HIV interactions were found indicating stronger associations between older age and decreased FA in the posterior limbs of the internal capsules, cerebral peduncles, and anterior corona radiata in HIV+ vs. HIV- participants. The interactive effects of HIV and age were stronger with respect to whole-brain WMH than for any of the FA measures. Among HIV+ participants, greater WMH and lower anterior corona radiata FA were associated with active hepatitis C virus infection, a history of AIDS, and higher current CD4 cell count. Results indicate that age exacerbates HIV-associated abnormalities of whole-brain WMH and fronto-subcortical white matter integrity. abstract_id: PUBMED:24983715 Age-related effects in the neocortical organization of chimpanzees: gray and white matter volume, cortical thickness, and gyrification. Among primates, humans exhibit the most profound degree of age-related brain volumetric decline in particular regions, such as the hippocampus and the frontal lobe. Recent studies have shown that our closest living relatives, the chimpanzees, experience little to no volumetric decline in gray and white matter over the adult lifespan. However, these previous studies were limited with a small sample of chimpanzees of the most advanced ages. 
In the present study, we sought to further test for potential age-related decline in cortical organization in chimpanzees by expanding the sample size of aged chimpanzees. We used the BrainVisa software to measure total brain volume, gray and white matter volumes, gray matter thickness, and gyrification index in a cross-sectional sample of 219 captive chimpanzees (8-53 years old), with 38 subjects being 40 or more years of age. Mean depth and cortical fold opening of 11 major sulci of the chimpanzee brains were also measured. We found that chimpanzees showed increased gyrification with age and a cubic relationship between age and white matter volume. For the association between age and sulcus depth and width, the results were mostly non-significant with the exception of one negative correlation between age and the fronto-orbital sulcus. In short, results showed that chimpanzees exhibit few age-related changes in global cortical organization, sulcus folding and sulcus width. These findings support previous studies and the theory that the age-related changes in the human brain are due to an extended lifespan. abstract_id: PUBMED:38424358 White matter brain age as a biomarker of cerebrovascular burden in the ageing brain. As the brain ages, it almost invariably accumulates vascular pathology, which differentially affects the cerebral white matter. A rich body of research has investigated the link between vascular risk factors and the brain. One of the less studied questions is that among various modifiable vascular risk factors, which is the most debilitating one for white matter health? A white matter specific brain age was developed to evaluate the overall white matter health from diffusion weighted imaging, using a three-dimensional convolutional neural network deep learning model in both cross-sectional UK Biobank participants (n = 37,327) and a longitudinal subset (n = 1409). White matter brain age gap (WMBAG) was the difference between the white matter age and the chronological age. Participants with one, two, and three or more vascular risk factors, compared to those without any, showed an elevated WMBAG of 0.54, 1.23, and 1.94 years, respectively. Diabetes was most strongly associated with an increased WMBAG (1.39 years, p < 0.001) among all risk factors, followed by hypertension (0.87 years, p < 0.001) and smoking (0.69 years, p < 0.001). Baseline WMBAG was associated significantly with processing speed, executive and global cognition. Significant associations of diabetes and hypertension with poor processing speed and executive function were found to be mediated through the WMBAG. White matter specific brain age can be successfully targeted for the examination of the most relevant risk factors and cognition, and for tracking an individual's cerebrovascular ageing process. It also provides clinical basis for the better management of specific risk factors. Answer: Yes, age does matter in the context of brain health and white matter integrity. Research has shown that age is associated with various changes in the brain's white matter and gray matter. For instance, the prevalence of white matter hyperintensities (WMHs), which are often associated with age and atherosclerosis, increases with age and is linked to age-related risk factors such as cardiovascular diseases, smoking, and hypertension (PUBMED:30323144).
Additionally, age exacerbates HIV-associated white matter abnormalities, with older HIV-positive individuals showing a significant association between WMH and older age (PUBMED:26446690). In schizophrenia, there is an accelerated deterioration of gray matter and white matter with age, with significant loss of gray matter volume worsening with age and a faster rate of white matter deterioration in schizophrenia compared to healthy individuals (PUBMED:27919183). Similarly, in neurofibromatosis type 1 (NF1), age-related effects on white matter microstructure disorganization suggest that early childhood may be the appropriate timing for therapeutic intervention (PUBMED:34048307). Moreover, paternal age at birth has been found to contribute to white matter disparities in autism spectrum disorder (ASD), with older paternal age being a predictor of higher mean diffusivity (MD) and radial diffusivity (RD) in white matter fibers (PUBMED:31271249). In preterm infants with intrauterine growth restriction (IUGR), there is evidence of differential vulnerability of gray matter and white matter, with IUGR affecting gray matter development preferentially and white matter showing an unusual developmental pattern (PUBMED:24361462). Furthermore, a study on chimpanzees found few age-related changes in global cortical organization, suggesting that the age-related changes in the human brain may be due to an extended lifespan (PUBMED:24983715). Lastly, a white matter-specific brain age has been developed as a biomarker of cerebrovascular burden in the ageing brain, with diabetes, hypertension, and smoking being associated with an increased white matter brain age gap (WMBAG), indicating poorer white matter health (PUBMED:38424358). In summary, age plays a significant role in the structural and functional changes in the brain's white matter and gray matter, and it is an important factor to consider in the context of neurological health and disease.
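As an aside on the quantitative reasoning above: the brain-age analysis cited from PUBMED:38424358 rests on a simple piece of arithmetic, the white matter brain age gap (predicted white matter age minus chronological age). The sketch below is purely illustrative; the ages and risk-factor counts are hypothetical toy values, and the trained 3D convolutional network used in the study is not reproduced here.

```python
import numpy as np

# Hypothetical toy values -- not data from the cited study.
chronological_age = np.array([60.0, 72.0, 55.0, 68.0])
predicted_wm_age  = np.array([61.5, 75.0, 54.0, 70.5])   # what a trained age-prediction model would output
n_risk_factors    = np.array([1,    3,    0,    2])       # e.g. diabetes, hypertension, smoking

# White matter brain age gap (WMBAG): predicted white matter age minus chronological age.
wmbag = predicted_wm_age - chronological_age

# Simple least-squares slope of WMBAG on risk-factor count, analogous in spirit to the
# reported rise of roughly half a year of "brain age" per additional risk factor.
slope, intercept = np.polyfit(n_risk_factors, wmbag, 1)
print(wmbag, slope)
```

A positive mean gap in a subgroup indicates white matter that appears "older" than expected for its chronological age, which is how the 0.54 to 1.94 year elevations reported for increasing risk-factor burden should be read.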
Instruction: Does unilateral lobectomy suffice to manage unilateral nontoxic goiter? Abstracts: abstract_id: PUBMED:19703811 Does unilateral lobectomy suffice to manage unilateral nontoxic goiter? Objective: To evaluate the effectiveness of ipsilateral lobectomy to treat unilateral, nontoxic, benign nodular goiter and to define predictive factors for recurrence. Methods: Patients undergoing thyroid lobectomy for unilateral, nontoxic, benign nodular goiter between 2002 and 2007 were included. Patients were excluded if coincidental thyroid cancer was detected at histopathologic examination and completion thyroidectomy was performed. Potential predictors of recurrence including age; sex; family history; preoperative volume of the thyroid gland; preoperative number, size, and ultrasonography characteristics of the nodules; duration of postoperative follow-up; postoperative use of thyroxine; and histopathologic diagnoses were recorded at baseline. Follow-up visits were scheduled every 3 months during the first year and every 6 months thereafter. Recurrent disease was defined as a hypoechogenic or hyperechogenic nodule larger than or equal to 3 mm detected in the remaining contralateral lobe during ultrasonography. Patients with a thyrotropin value greater than 5 mIU/L received thyroxine. Fine-needle aspiration biopsy was performed for nodules greater than 10 mm or for nodules with characteristics suggestive of malignancy. Reoperation was indicated if a nodule was greater than 3 cm in diameter, posed a risk of malignancy, or caused compression signs or symptoms. Results: A total of 104 patients were included. Histopathologic diagnoses at initial operation were adenoma in 45 patients, colloidal nodular goiter in 45 patients, and chronic lymphocytic thyroiditis in 14 patients. Average duration of follow-up was 39.75 +/- 21.75 months (range, 5-87 months). Recurrence was seen in 63 patients (60.6%). Histopathologic characteristics of the lobectomy material (P&lt;.001), preoperative volume of the thyroid gland (P&lt;.006), and multinodularity (P&lt;.011) were significant predictors of recurrence. Conclusions: Higher preoperative thyroid volume, histopathologic characteristics of nodules, and multinodular disease are associated with an increased risk of recurrence in patients with unilateral nodular goiter. Unilateral lobectomy is an effective therapeutic option with low reoperation rates in unilateral benign thyroid disease. abstract_id: PUBMED:38086135 Graves' disease with only unilateral involvement; a case report. Introduction: Graves' disease characteristically presents with a diffuse goiter secondary to the autoantibodies that target the thyrotropin receptors of the thyroid gland. Few cases have been reported of only one of the two lobes being affected. The cause of this phenomenon is still uncertain. Here we report on another case of unilateral Graves' disease. Case Presentation: A 43-year-old female patient presented with a history of weight loss, palpitations and right sided neck swelling for 4 months. Clinical examination showed an enlarged right thyroid lobe. Laboratory investigations yielded evidence of thyrotoxicosis with suppressed thyroid stimulating hormone. In addition, anti-TSH receptor and anti-thyroperoxidase antibodies were positive. Neck Ultrasound showed an enlarged right thyroid lobe with increased vascularization. The isthmus and left lobe were both normal in size. 
A Tc99m pertechnetate thyroid scan demonstrated enlargement of the right thyroid lobe with diffuse intense uptake, whereas the left lobe was suppressed. A diagnosis of unilateral Graves' disease was made. The thyrotoxicosis was treated and maintained with methimazole. Discussion: Unilateral Graves' disease is a rare manifestation of Graves' disease, sharing the same autoimmune background and the symptoms of thyrotoxicosis. Enlargement of only one lobe was evident on clinical examination. The distinctive feature was unilateral uptake during thyroid scintigraphy. The exact pathophysiology of this condition has yet to be elucidated. Management options and responses are similar to those of classical Graves' disease. Conclusion: Unilateral uptake during thyroid scintigraphy and/or unilateral lobar goiter in the setting of hyperthyroidism can be the presentation of unilateral Graves' disease. abstract_id: PUBMED:37908479 Unilateral Thyroid Lobe Involvement of Graves Disease. A 30-year-old man presented with 3-year history of Graves disease. He was initially diagnosed after he developed unilateral proptosis and was initiated on methimazole 5 mg, on which he was currently euthyroid. Visible right-sided thyromegaly and trouble swallowing developed 2 months after presentation to our practice. Biochemical evaluation revealed suppressed TSH, normal free T4 and total T3, and elevated thyroid stimulating immunoglobulin with normal thyroid receptor antibody. An ultrasound of the thyroid demonstrated left-sided small nodules with right-sided thyromegaly. A nuclear medicine uptake scan revealed significantly greater uptake in the right thyroid lobe, with overall minimal uptake in the left lobe. The need for definitive therapy that would not exacerbate orbitopathy was discussed, and the patient elected for a right-sided hemithyroidectomy. Postoperative biochemical evaluation demonstrated biochemical euthyroidism despite continued elevation in thyroid stimulating immunoglobulin and newly elevated thyroid receptor antibody while remaining off methimazole. Graves disease can rarely involve a single thyroid lobe. Given the rarity, further investigation is needed to determine the natural course of this form of Graves disease. abstract_id: PUBMED:24563802 The Morbidity of Reoperative Surgery for Recurrent Benign Nodular Goitre: Impact of Previous Unilateral Thyroid Lobectomy versus Subtotal Thyroidectomy. Background. Subtotal thyroidectomy (STT) was previously considered the gold standard in the surgical management of multinodular goitre despite its propensity for recurrence. Our aim was to assess whether prior STT or unilateral lobectomy was associated with increased reoperative morbidity. Methods. A retrospective analysis was conducted extracting data from our endocrine surgical database for the period from January 1991 to June 2006. Two patient groups were defined: Group 1 consisted of patients with previous unilateral thyroid lobectomy; Group 2 had undergone previous STT. Specific outcomes investigated were transient and permanent recurrent laryngeal nerve (RLN) injury and hypoparathyroidism. Results. 494 reoperative cases were performed which consisted of 259 patients with previous unilateral lobectomy (Group 1) and 235 patients with previous subtotal thyroidectomy (Group 2). A statistically significant increase relating to previous STT was demonstrated in both permanent RLN injury (0.77% versus 3.4%, RR 4.38, P = 0.038) and permanent hypoparathyroidism (1.5% versus 5.1%, RR 3.14, P = 0.041). 
Transient nerve injury and hypocalcaemia incidences were comparable. Conclusions. Reoperative surgery following subtotal thyroidectomy is associated with a significantly increased risk of permanent recurrent laryngeal nerve injury and hypoparathyroidism when compared with previous unilateral thyroidectomy. Subtotal thyroidectomy should therefore no longer be recommended in the management of multinodular goitre. abstract_id: PUBMED:24783020 Single lobe disease in cases of advanced endemic goiter: a new phenotype. Objectives: To report a new phenotype of advanced endemic goiter that affects only one lobe of the thyroid gland. Patients And Methods: This study included 60 patients from the west of Sudan with long-standing unilateral simple endemic goiter that required lobectomy, with emphasis on the gross appearance, measurements and cytological features of the contralateral lobe. Results: Out of 60 patients with unilateral goiter, 50 (83%) were found to have the disease on the ipsilateral lobe only (monolobar goiter). The contralateral lobe in these 50 patients showed no nodularity, and its volume was within the normal limits. All patients with monolobar disease had total lobectomy on the affected side, and postoperatively they continued to have normal blood levels of T3, T4 and TSH. Conclusion: We report a new phenotype of advanced endemic goiter that affects only one lobe of the thyroid gland, and in the presence of a structurally and functionally normal contralateral lobe. abstract_id: PUBMED:18936354 Thyroid function after unilateral total lobectomy: risk factors for postoperative hypothyroidism. Objective: To evaluate the incidence of postoperative hypothyroidism among patients who underwent unilateral total lobectomy and identify related factors. Design: Retrospective medical record analysis. Setting: Oncological center and private clinic. Patients: From March 1996 to July 2005, 228 euthyroid patients underwent unilateral total lobectomy for benign diseases; 168 had all the information required for inclusion in this study. Main Outcome Measures: Serum levels of thyrotropin and antithyroidal antibodies were assessed, as well as ultrasonographic evaluation of the remaining thyroid lobe and review of all histological specimens, with emphasis on lymphocytic infiltration. Hypothyroidism was defined as thyrotropin level greater than 5.5 mU/L. Results: Most patients were female (88%), with a median (range) age of 45 (16-72) years. Hypothyroidism occurred in 61 cases (32.8%), during a median follow-up period of 29 months (range, 6-108 months). Statistically related factors included higher preoperative thyrotropin levels (2.1 mU/L among hypothyroid patients vs 1.2 mU/L in euthyroid patients; P < .001), smaller thyroid remnant volume (3.9 mL vs 6.0 mL, respectively; P = .003), right vs left lobectomy (P = .006), and higher thyroperoxidase antibody serum levels (P = .009). Conclusions: Postoperative hypothyroidism appeared in 32.8% of the cases in this series, especially among patients with elevated preoperative thyrotropin and postoperative thyroperoxidase antibody levels, after right lobectomy and when a smaller thyroid remnant was left. After confirmation with larger prospective series, these results may support the indication for early postoperative hormone supplementation in these instances. abstract_id: PUBMED:36968877 Physiotherapy Combined With Voice Exercises in a Patient With Unilateral Vocal Cord Palsy Following a Total Thyroidectomy Surgery: A Case Report.
Multinodular goiter is a condition in which the thyroid gland is swollen and has several distinct masses. A large multinodular goiter can lead to difficulty in swallowing and breathing. A large goiter hampers respiration and deglutition; therefore, a part of or the whole thyroid gland is removed. Total thyroidectomy is a surgical process which involves the removal of the whole thyroid gland. One of the adverse effects of a complete thyroidectomy is vocal cord paralysis. It occurs because of an injury to the recurrent laryngeal nerve. Vocal cord paralysis could be bilateral or unilateral. It is characterized by hoarseness of voice, breathing difficulties and voice pitch loss, and inability to talk loudly. This case report describes physiotherapy along with voice exercises in a 65-year-old female who suffered from unilateral vocal cord palsy following total thyroidectomy. The patient was successfully rehabilitated after four weeks, using a tailored physiotherapy program according to the difficulty faced by her. The rehabilitation exercises consisted of upper and lower limb mobility activities, breathing activities including thoracic expansion, and deep breathing exercises. Static hamstrings, static quadriceps exercise, heel slides and isometric exercise to neck muscles, and passive movements to the cervical spine were administered. Voice therapy exercises combined with breathing exercises were also administered. abstract_id: PUBMED:23688788 Unilateral thyroidectomy for the treatment of benign multinodular goiter. Background: Benign multinodular goiter (MNG) is one of the most commonly treated thyroid disorders. Although bilateral resection is the accepted surgical treatment for bilateral MNG, the appropriate surgical resection for unilateral MNG continues to be debated. Bilateral resection generally has lower recurrence rates but higher complication rates than unilateral resection. Therefore, the purpose of this study was to define the recurrence and complication rates of unilateral and bilateral resections to determine the appropriate intervention for patients with unilateral, benign MNG. Methods: We reviewed a prospectively maintained database of all patients who underwent a thyroidectomy for treatment of benign MNG at a single institution between May 1994 and December 2011. All patients with bilateral MNG were treated with bilateral resection. Surgical treatment for unilateral MNG was determined by surgeon preference, with all but one surgeon opting for unilateral resection to treat unilateral MNG. Data were reported as means ± standard error of the mean. Chi-squared analysis was used to determine statistical significance at a level of P &lt; 0.05. Results: A total of 683 patients underwent thyroidectomy for MNG. Of these patients, 420 (61%) underwent unilateral resection and 263 patients (39%) underwent total thyroidectomy. The mean age was 52 ± 17 y, and 542 patients (79%) were female. The mean follow-up time was 46.1 ± 1.9 mo. The rate of recurrent disease was similar between unilateral (2%, n = 10) and bilateral (1%, n = 3) resections (P = 0.248). Unilateral resection patients had a lower total complication rate than patients with bilateral resections (8% versus 26%, P &lt; 0.001); however, there was no difference in the rate of permanent complications (0.2% versus 1%, P = 0.133). Thyroid hormone replacement was rare in unilateral resection patients but necessary in all patients with bilateral resection (19% versus 100%, P &lt; 0.001). 
Conclusions: Patients who had unilateral resections endured fewer overall morbidities than those who had bilateral resections, and their risk of recurrent disease was similar. They were also significantly less likely to require lifelong hormone replacement therapy postoperatively. Although bilateral resection remains the recommended treatment for bilateral MNG, these data strongly support the use of unilateral thyroidectomy for the treatment of unilateral, benign MNG. abstract_id: PUBMED:30415868 Unilateral benign multinodular versus solitary goiter: Long-term contralateral reoperation rates after lobectomy. Background: Few long-term studies define the appropriate extent of surgery and recurrence rates for unilateral multinodular goiter. We compared the rate and time to reoperation in patients with multinodular goiter who underwent lobectomy to that of patients with benign solitary nodule. Methods: Retrospective study of a prospective database of all patients who underwent lobectomy for multinodular goiter or solitary nodule from 1991 to 2017. We analyzed reoperation rates and time to reoperation. Reoperation was defined as the need for completion thyroidectomy determined by the following criteria: nodule greater than 3 cm, multiple nodules, nodule growth or suspicion for malignancy by ultrasound or fine-needle aspiration biopsy, or compressive symptoms. Results: Included in the study were 2,675 lobectomies; 852 (31.85%) for multinodular goiter. In total, 394 patients (14.7%) underwent reoperation: 261 (30.6%) with a previous multinodular goiter and 133 (7.29%) with solitary nodule (P < .0001). A total of 80% of the patients with multinodular goiter and 67.66% with solitary nodule recurred as multinodular goiter; 3.5% of all recurrences were carcinomas. The mean time to reoperation was 14.8 years, without difference between groups (P = .5765). Patients without reoperation were younger (47 ± 15 vs 54 ± 13 years of age, P < .0001) and more likely to be male (P < .0001). Conclusion: Lobectomy for unilateral multinodular goiter is the procedure of choice given the length of time to reoperation. Patients and surgeons should be aware of the need for long-term surveillance. abstract_id: PUBMED:20130863 Five-year follow-up of a randomized clinical trial of unilateral thyroid lobectomy with or without postoperative levothyroxine treatment. Background: The aim of this study was to compare the prevalence of recurrent nodular goiter in the contralateral thyroid lobe among patients after unilateral thyroid lobectomy for unilateral multinodular goiter (MNG) receiving versus not receiving postoperative prophylactic levothyroxine (LT4) treatment. Methods: From January 2000 through December 2003, 150 consenting patients underwent a unilateral thyroid lobectomy for unilateral MNG at our institution. They were randomized to two groups with 75 patients in each group. Patients in group A received prophylactic LT4 treatment postoperatively (dose range 75-125 microg/day to maintain thyroid-stimulating hormone values below 1.0 mU/L), whereas patients in group B received no postoperative LT4 treatment. All the patients underwent ultrasonographic, cytologic, and biochemical follow-up for at least 60 months postoperatively. The primary outcome was the prevalence of recurrent goiter in the contralateral thyroid lobe. The secondary outcome was the reoperation rate for recurrent goiter. The outcomes were stratified according to individual iodine metabolism status assessed by urinary iodine excretion.
Results: During the 5-year follow-up, among patients receiving vs. not receiving LT4, recurrent goiter within the contralateral thyroid lobe was found in 1.4% vs. 16.7% of patients, respectively (p = 0.001). Moreover, 1.4% vs. 8.3%, respectively, of patients receiving vs. not receiving LT4 required contralateral thyroid lobe surgery (p = 0.05). LT4 decreased the recurrence rate among iodine-deficient patients (3.4% vs. 36%, respectively; p = 0.002) but not among iodine-sufficient patients (0% vs. 6.4%, respectively; p = 0.09). Conclusions: Prophylactic LT4 treatment significantly decreased the recurrence rate of nodular goiter in the contralateral thyroid lobe and the need for completion thyroidectomy, mostly among patients with iodine deficiency. Answer: Unilateral lobectomy can be an effective therapeutic option to manage unilateral nontoxic goiter, as suggested by the findings in the provided abstracts. A study with 104 patients who underwent thyroid lobectomy for unilateral, nontoxic, benign nodular goiter showed that recurrence was seen in 60.6% of patients, but the reoperation rates were low. Higher preoperative thyroid volume, histopathologic characteristics of nodules, and multinodularity were associated with an increased risk of recurrence (PUBMED:19703811). Another study comparing unilateral and bilateral resections for benign multinodular goiter (MNG) found that patients who had unilateral resections experienced fewer overall morbidities than those who had bilateral resections, and their risk of recurrent disease was similar. They were also significantly less likely to require lifelong hormone replacement therapy postoperatively (PUBMED:23688788). A long-term study on reoperation rates after lobectomy for unilateral multinodular goiter versus solitary nodule indicated that lobectomy for unilateral multinodular goiter is the procedure of choice given the length of time to reoperation, although patients and surgeons should be aware of the need for long-term surveillance (PUBMED:30415868). Furthermore, a randomized clinical trial found that prophylactic levothyroxine (LT4) treatment significantly decreased the recurrence rate of nodular goiter in the contralateral thyroid lobe and the need for completion thyroidectomy, mostly among patients with iodine deficiency (PUBMED:20130863). In conclusion, unilateral lobectomy can suffice to manage unilateral nontoxic goiter, especially when considering the lower complication rates and the reduced need for hormone replacement therapy compared to bilateral resections. However, there is a risk of recurrence, and long-term follow-up is necessary. Prophylactic LT4 treatment postoperatively may reduce the risk of recurrence, particularly in iodine-deficient patients.
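To make the recurrence comparison cited from PUBMED:23688788 (10 recurrences of 420 unilateral resections versus 3 of 263 bilateral resections, reported as P = 0.248) easier to follow, here is a minimal sketch of a 2x2 chi-squared test, assuming scipy is available. The abstract does not state exactly how its chi-squared analysis was configured, so this illustration need not reproduce the published P value exactly.

```python
from scipy.stats import chi2_contingency

# Recurrence counts reported in PUBMED:23688788 (unilateral vs bilateral resection).
table = [[10, 420 - 10],   # unilateral: recurrence, no recurrence
         [3, 263 - 3]]     # bilateral: recurrence, no recurrence

# 2x2 chi-squared test; scipy applies the Yates continuity correction by default here.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```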
Instruction: A novel case of laparoscopic ureterolithotomy in a partial duplex ureteric collecting system: can open procedures still be justified in the minimally invasive era? Abstracts: abstract_id: PUBMED:35694451 Bilateral Synchronous 3-Port Trans-peritoneal Laparoscopic Ureterolithotomy: a Case Report. Trans-peritoneal laparoscopic ureterolithotomy has been a well-established minimally invasive procedure for the management of large impacted ureteric stones for the last 3 decades. We present a case of a 51-year-old gentleman, who presented with bilateral large upper ureteric calculi with obstructive uropathy and azotemia, managed successfully with bilateral synchronous 3-port trans-peritoneal laparoscopic ureterolithotomy, and to our knowledge is the first such case to be reported in the literature. Supplementary Information: The online version contains supplementary material available at 10.1007/s42399-022-01190-5. abstract_id: PUBMED:23361633 A novel case of laparoscopic ureterolithotomy in a partial duplex ureteric collecting system: can open procedures still be justified in the minimally invasive era? Background: Impacted ureteric stones can pose a treatment challenge due to the high level of failure of ESWL and endourological approaches. Laparoscopic ureterolithotomy can provide a safe and successful alternative to these and open, invasive procedures. Methods: Interval laparoscopic ureterolithotomy was carried out following placement of a percutaneous nephrostomy. This was performed through a trans-peritoneal approach with the ureterotomy closed by intracorporeal suturing and placement of a JJ stent without the need for an abdominal wound drain. Conclusion: Laparoscopic ureterolithotomy is a safe, minimally invasive method of managing large, impacted ureteric stones with minimal associated patient morbidity. abstract_id: PUBMED:32455117 Case report of ureteroscopy assisted laparoscopic ureterolithotomy for multiple large ureteric calculi. Managing a patient with multiple large ureteric calculi at different locations in the ureter with minimally invasive surgery is always a challenge for the surgeon. We hereby present a case report of ureteroscopy-assisted laparoscopic ureterolithotomy for multiple large ureteric calculi in the proximal and distal ureter in a young female. In this unique and novel method, ureteroscopy and laparoscopy were done simultaneously on the patient using two camera units and two surgeons. This approach avoided an open ureterolithotomy scar and also extensive dissection of the ureter. This unique surgery can be considered a confluence of endourology and laparoscopy. abstract_id: PUBMED:26793529 Laparoscopic Ureterolithotomy for Giant Ureteric Calculus: A Case Report. We present a case of a 21-year-old male who presented with a symptomatic right upper ureteric calculus measuring 5 cm × 1.5 cm, fulfilling the criteria to be named a giant ureteric calculus. Laparoscopic right ureterolithotomy was performed and the giant ureteric calculus was retrieved. abstract_id: PUBMED:28753887 Minimally Invasive Surgical Ureterolithotomy Versus Ureteroscopic Lithotripsy for Large Ureteric Stones: A Systematic Review and Meta-analysis of the Literature. Context: The management of large ureteric stones represents a technical and clinical challenge. Objective: To investigate the safety and efficacy of minimally invasive surgical ureterolithotomy (MISU) in comparison with ureteroscopic lithotripsy (URS) for the treatment of large ureteric stones.
Evidence Acquisition: The Preferred Reporting Items for Systematic Review and Meta-analyses (PRISMA) guidelines were followed for the conduction of the study, which was registered in the PROSPERO database. Search string was "(laparoscop* OR retroperito* OR robot*) AND ureterolitho*"; database scope included PubMed, SCOPUS, Cochrane, and EMBASE. Primary end points were the stone-free (SFR) and complications rates. Secondary end points included operative time and hospital stay. Subgroup analyses were performed for stones 1-2 and &gt;2cm, as well as different lithotripters and ureteroscopes. Meta-analysis and forest-plot diagrams were performed with the RevMan 5.3.5 software. Evidence Synthesis: After screening 673 publications, seven randomized controlled trials were eligible to be included in the meta-analysis. A total of 778 patients were pooled after the elimination of the dropouts. No robotic cohorts were found. Only upper ureteral stones were treated in the included studies. The SFR at discharge and 3 mo was higher with MISU with odds ratios of 6.30 (95% confidence interval [CI]: 3.05, 13.01; I2=0%) and 5.34 (95% CI: 2.41, 8.81; I2=0%), respectively. The most common complications for MISU and URS were conversion to open surgery and stone migration to the renal pelvis, respectively. Favorable results in terms of operative time were observed in the case of URS with a mean difference of 29.5min (95% CI: 14.74, 44.26; I2=98%). Hospitalization time was favorable in the case of URS with a mean difference of 2.08 days (95% CI: 0.96, 3.20; I2=99%). Conclusions: This meta-analysis showed a significantly higher SFR at discharge and 3 mo for MISU in comparison with URS when upper ureteral stones were treated. Operative and hospitalization time favored URS over MISU. Patient Summary: The current study investigated the literature on the minimally invasive management of large ureteric stones. The available evidence shows that both ureteroscopic lithotripsy and minimally invasive surgical ureterolithotomy could be considered for the treatment of these stones with similar results. The selection of the approach should be based on the advantages and disadvantages of each technique. abstract_id: PUBMED:37153488 Unilateral multicystic dysplastic kidney disease associated with ipsilateral ureteric bud remnant and contralateral duplex collecting system. Congenital anomalies of the kidney and urinary tract are among the most common developmental malformations. The heterogeneity of these anomalies is very high, some of them are rarely discussed in the literature. Herein, we present a case of a 5-year-old male who was found to have a combination of unilateral multicystic dysplastic kidney associated with ipsilateral ureteric bud remnant and contralateral duplex collecting system. abstract_id: PUBMED:33102027 Ipsilateral robot assisted laparoscopic side to side ureteroureterostomy in a duplex collecting system - A video case report with 1 year follow up. A 26-year-old male presented with an obstructing calculus in the mid superior-moiety ureter in a duplicated urinary collecting-system. A sequela of the obstruction resulted in a symptomatic stricture in a functional superior-moiety ureter, unresponsive to endoscopic interventions. An ipsilateral robot-assisted laparoscopic side-to-side ureteroureterostomy was performed thus bypassing the stricture in the superior-moiety ureter. Follow up endoscopic visualisation showed a healthy, patent anastomosis. 
This video presentation shows appropriate positioning, operative technique and follow up for a robot assisted side-to-side ureteroureterostomy. Our minimally invasive novel method is a feasible and safe treatment of a duplex collecting system with a symptomatic ectopic ureter. abstract_id: PUBMED:29737311 Laparoscopic ureterolithotomy: Experience of 60 cases from a developing world hospital. Objective: Laparoscopic ureterolithotomy, which has been quoted to have a success rate equivalent to open ureterolithotomy for ureteric stones, can be performed transperitoneally and retroperitoneally. The aim of the present study is to report our experience with laparoscopic retroperitoneal ureterolithotomy, its results and advantages in the current era of minimally invasive surgery in a developing country. Patients And Methods: It was a prospective study carried out from May 2010 to December 2012. 60 patients diagnosed with upper and middle ureteric calculi, with sizes more than 1 cm and with a value of more than 1500 HU on CT urography, underwent laparoscopic retroperitoneal ureterolithotomy. Results: All patients underwent retroperitoneal laparoscopic ureterolithotomy successfully. The mean operative time was 64.53 min. The mean blood loss was 39.83 ml. 3 patients had minor intra-operative complications which were tackled on the table. Post-operative complications developed in 3 patients, all minor. There were no major complications. The drain was removed at 2.7 days. Mean hospital stay was 3.3 days. Patients returned to their routine activities in 1.78 weeks. During follow-up 3 months later, CT urography revealed a normal ureter in all cases. Conclusion: Laparoscopic retroperitoneal ureterolithotomy has a low rate of conversion to open surgery and an acceptable overall complication rate. In selected patients with impacted, hard, large ureteral stones, which are likely to cause difficulty in endo-urological procedures, laparoscopic ureterolithotomy is a reasonable treatment option. abstract_id: PUBMED:25740639 Partial splenectomy in the era of minimally invasive surgery: the current laparoscopic and robotic experiences. Background: Partial splenectomy (PS) is a spleen-preserving technique that is applied as a result of trauma, focal lesions or hematological conditions. Despite the improvement of laparoscopic techniques within the past several decades, minimally invasive PS has remained a marginal technique that has not been well evaluated. Our objective was to provide an update on the indications and the feasibility of this procedure. Methods: The MEDLINE database (PubMed) was searched, and all relevant articles that involved a true minimally invasive PS (i.e., segmental or lobar devascularization of the spleen with parenchymal transection) were included. The search was conducted until the 31st of March 2014. Demographic data, operative indications, estimated blood losses, operative times, conversion rates and complications were extracted from the included articles and were summarized for discussion. Results: Out of the 195 publications that were retrieved, 33 were included, which were mainly case reports and case series that represented a total of 187 patients. There were 37 men, 33 women and 117 patients of unknown gender. The mean age of the patients ranged from 6 to 58 years. The mean total operative time was between 70 and 216 min for conventional laparoscopy and between 108 and 120 min for the robotic approach. For most studies, the mean estimated blood loss was minimal.
The complication rate was 5.36% for conventional laparoscopy and 5.56% for the robotic approach. Conclusion: The outcomes of minimally invasive PS were favorable and comparable to those of the open technique according to the literature. This procedure may constitute an attractive alternative to the open technique for selected cases. Moreover, a robotic approach might be an interesting technical option, but additional research is needed before any definitive conclusions can be drawn. abstract_id: PUBMED:19468448 Laparoscopic ureterolithotomy for lower ureteric stones: Steps to make it a simple procedure. Despite advances in endoscopy and the availability of holmium lithotripsy, there are ureteric stones that primarily need to be treated with laparoscopic ureterolithotomy. The literature is replete with reports of stone retrieval from the upper ureter, but there are very few reports on stone removal from the ureter below the lower sacroiliac joint. Putting a double J stent before starting the procedure does not give any extra advantage; rather, it takes away significant operating room time. This point of technique describes port placement strategy, proximal ureteral occlusion, stone localization, ureterotomy, stone retrieval and laparoscopic stenting, the important steps where one would like to be careful enough to complete the procedure successfully. Answer: The case presented in the abstract from PUBMED:23361633 discusses a novel instance of laparoscopic ureterolithotomy performed on a patient with a partial duplex ureteric collecting system. The procedure was successful, with the ureterotomy closed by intracorporeal suturing and placement of a JJ stent without the need for an abdominal wound drain. This case highlights the efficacy and safety of laparoscopic ureterolithotomy as a minimally invasive alternative to open procedures for managing large, impacted ureteric stones, suggesting that open procedures may not be justified in many cases given the minimal morbidity associated with the laparoscopic approach. The question of whether open procedures can still be justified in the era of minimally invasive surgery is further addressed by the systematic review and meta-analysis reported in PUBMED:28753887. This study compared minimally invasive surgical ureterolithotomy (MISU) with ureteroscopic lithotripsy (URS) for the treatment of large ureteric stones. The meta-analysis showed that MISU had a significantly higher stone-free rate at discharge and 3 months post-procedure compared to URS when treating upper ureteral stones. Although URS had favorable results in terms of operative and hospitalization time, the higher stone-free rates with MISU suggest that minimally invasive approaches are effective and may reduce the need for open procedures. Additionally, the abstract from PUBMED:29737311 describes the experience of 60 cases of laparoscopic ureterolithotomy in a developing world hospital, further supporting the procedure's success and low complication rates. This indicates that laparoscopic ureterolithotomy is a reasonable treatment option for impacted, hard, large ureteral stones that may be difficult to manage with endourological procedures, again questioning the necessity of open procedures in such cases.
Instruction: Do long-term results justify decompressive craniectomy after severe traumatic brain injury? Abstracts: abstract_id: PUBMED:28824970 Bicompartmental Decompressive Craniectomy: Report of Two Cases. A recent study of randomized controlled trials showed favorable outcomes with use of decompressive craniectomy in managing and treating uncontrolled intracranial pressures accompanied with cerebral edema due to trauma. We present the details of bicompartmental decompressive craniectomy on two patients who presented with severe head trauma of supra- and infratentorial pathologies. The surgical management techniques and long-term follow-up are discussed in detail. abstract_id: PUBMED:30522035 Long-term functional outcome after decompressive suboccipital craniectomy for space-occupying cerebellar infarction. Objectives: Suboccipital decompressive craniectomy (SDC) is considered the best treatment option in patients with space-occupying cerebellar infarction and clinical signs of deterioration. The primary purpose of this study was to evaluate long-term functional outcome in patients one year after SDC for space-occupying cerebellar infarction, and secondly, to determine factors associated with outcome. Patients And Methods: All patients treated with SDC due to space-occupying cerebellar infarction between January 2009 and October 2015 were included in the study. Data was retrospectively collected from patient records, CT/MRI scans and surgical protocols. Long-term functional outcome was determined by the modified Rankin Scale (mRS) and mRS ≥ 4 was defined as unfavorable outcome. Results: Twenty-two patients (16 male, 6 female) were included in the study. Median age was 53 years. Nine patients were treated with external ventricular drainage as an initial treatment attempt prior to SDC. Median time from symptom onset (stroke ictus) to initiation of the SDC surgery was 48 h (IQR 28-99 hours) and median GCS before SDC was 8 (IQR 5-10). At follow up, median mRS was 3 (IQR 2-6). Outcome was favorable (mRS 0-3) in 12 patients and unfavorable in 10 (3 with major disability, 7 dead). Brainstem infarction and bilateral cerebellar infarction were associated with unfavorable outcome. Conclusions: In this small study, functional long-term outcome in patients with space-occupying cerebellar infarction treated by SDC was acceptable and comparable to previously published results (favorable outcome in 54% of patients). Brainstem infarction and bilateral cerebellar infarction were associated with unfavorable outcome. abstract_id: PUBMED:18826356 Do long-term results justify decompressive craniectomy after severe traumatic brain injury? Object: A decompressive craniectomy can be a life-saving procedure to relieve critically increased intracranial pressure. The survival of a patient is important as well as the subsequent and long-term quality of life. In this paper the authors' goal was to investigate whether long-term clinical results justify the use of a decompressive craniectomy. Methods: Thirty-three patients (20 males and 13 females) with a mean age of 36.3 years (range 13-60 years) with severe traumatic brain injury (Grades III and IV) and subsequent massive brain swelling were examined. For postoperative assessment the Barthel Index was used. A surgical intervention was based on the following criteria: 1) The intracranial pressure could not be controlled by conservative treatment and constantly exceeded 30 mm Hg (cerebral perfusion pressure&lt;50 mm Hg). 
2) Transcranial Doppler ultrasonography revealed only a systolic flow pattern or systolic peaks. 3) There were no other major injuries. 4) The patient was not older than 60 years. Results: One-fifth of all patients died and one-fifth remained in a vegetative state. Mild deficits were seen in 6 of 33 patients. A full rehabilitation (Barthel Index 90-100) was achieved in 13 patients (39.4%). Five patients could resume their former occupation, and another 4 had to change jobs. Conclusions: Age remains to be one of the most important exclusion factors. Decompressive craniectomy provided good clinical results in nearly 40% of patients who were otherwise most likely to die. Therefore, long-term results justify the use of decompressive craniectomy in this case series. abstract_id: PUBMED:23133731 Long-term incidence and predicting factors of cranioplasty infection after decompressive craniectomy. Objective: The predictors of cranioplasty infection after decompressive craniectomy have not yet been fully characterized. The objective of the current study was to compare the long-term incidences of surgical site infection according to the graft material and cranioplasty timing after craniectomy, and to determine the associated factors of cranioplasty infection. Methods: A retrospective cohort study was conducted to assess graft infection in patients who underwent cranioplasty after decompressive craniectomy between 2001 and 2011 at a single-center. From a total of 197 eligible patients, 131 patients undergoing 134 cranioplasties were assessed for event-free survival according to graft material and cranioplasty timing after craniectomy. Kaplan-Meier survival analysis and Cox regression methods were employed, with cranioplasty infection identified as the primary outcome. Secondary outcomes were also evaluated, including autogenous bone resorption, epidural hematoma, subdural hematoma and brain contusion. Results: The median follow-up duration was 454 days (range 10 to 3900 days), during which 14 (10.7%) patients suffered cranioplasty infection. There was no significant difference between the two groups for event-free survival rate for cranioplasty infection with either a cryopreserved or artificial bone graft (p=0.074). Intergroup differences according to cranioplasty time after craniectomy were also not observed (p=0.083). Poor neurologic outcome at cranioplasty significantly affected the development of cranioplasty infection (hazard ratio 5.203, 95% CI 1.075 to 25.193, p=0.04). Conclusion: Neurologic status may influence cranioplasty infection after decompressive craniectomy. A further prospective study about predictors of cranioplasty infection including graft material and cranioplasty timing is necessary. abstract_id: PUBMED:20637010 Decompressive craniectomy: technical note. Decompressive craniectomy is a neurosurgical technique in which a portion of the skull is removed to reduce intracranial pressure. The rationale for this procedure is based on the Monro-Kellie Doctrine; expanding the physical space confining edematous brain tissue after traumatic brain injury will reduce intracranial pressure. There is significant debate over the efficacy of decompressive craniectomy despite its sound rationale and historical significance. Considerable variation in the employment of decompressive craniectomy, particularly for secondary brain injury, explains the inconsistent results and mixed opinions of this potentially valuable technique. 
One way to address these concerns is to establish a consistent methodology for performing decompressive craniectomies. The purpose of this paper is to begin accomplishing this goal and to emphasize the critical points of the hemicraniectomy and bicoronal (Kjellberg type) craniectomy. abstract_id: PUBMED:21091342 Long-term complications of decompressive craniectomy for head injury. There is currently much interest in the use of decompressive craniectomy for intracranial hypertension. Though technically straightforward, the procedure is not without significant complications. A retrospective analysis was undertaken of 164 patients who had had a decompressive craniectomy for severe head injury in the years 2004 to 2009 at the two major hospitals in Western Australia. Eighty-six patients had a bifrontal decompression and seventy-eight had a unilateral decompression. Two patients died due to post-operative care issues. Complications attributable to the decompressive surgery were: herniation of the cortex through the bone defect (42 patients, 25.6%), subdural effusion (81 patients, 49.4%), seizures (36 patients, 22%), hydrocephalus (23 patients, 14%), and syndrome of the trephined (2 patients, 1.2%). Complications attributable to the subsequent cranioplasty included: sudden death due to massive cerebral swelling in 3 patients (2.2%), infection requiring removal of the bone flap in 16 patients (11.6%), and bone flap resorption requiring augmentation in 10 patients (7.2%). After excluding simple complications such as subdural effusion and brain herniation through the skull defect and some patients who died as a direct consequence of traumatic brain or extracranial injury, 81 patients (55.5%) had at least one complication after decompressive craniectomy. The occurrence of at least one complication after decompressive craniectomy was significantly associated with an increased risk of prolonged stay in the hospital or rehabilitation facility (odds ratio 2.54, 95% confidence interval 1.22-5.24, p=0.013), after adjusting for predicted risk of unfavorable outcome. abstract_id: PUBMED:23849902 Decompressive craniectomy--a narrative review and discussion. There continues to be a considerable amount of interest in decompressive craniectomy; however, its use is controversial. It is technically straightforward; however, it is not without significant complications, and although there is currently unequivocal evidence available that it can be a life-saving intervention, evidence that outcome is improved over and above standard medical therapy is less forthcoming. This narrative review considers the current role of decompressive craniectomy in the management of neurological emergencies and focuses on four specific questions, namely: (i) Is decompressive craniectomy a life-saving procedure? (ii) Does decompressive craniectomy improve outcome? (iii) Are there any risks associated with decompressive craniectomy? (iv) How do patients feel about their eventual outcome? Finally, the future directions for the use of decompressive craniectomy are explored. abstract_id: PUBMED:36348855 Primary Decompressive Craniectomy After Traumatic Brain Injury: A Literature Review. Traumatic brain injuries (TBIs) still put a high burden on public health worldwide. Medical and surgical treatment strategies are continuously being studied, but the role and indications of primary decompressive craniectomy (DC) remain controversial.
In medically refractory intracranial hypertension after severe traumatic brain injury, secondary decompressive craniectomy is a last resort treatment option to control intracranial pressure (ICP). Randomized controlled studies have been extensively performed on secondary decompressive craniectomy and its role in the management of severe traumatic brain injuries. Indications, prognostic factors, and long-term outcomes in primary decompressive craniectomy during the evacuation of an epidural, subdural, or intracerebral hematoma in the acute phase are still a matter of ongoing research and controversy to this day. Prospective trials have been designed, but the results are yet to be published. In isolated epidural hematoma without underlying brain injury, osteoplastic craniotomy is likely to be sufficient. In acute subdural hematoma (ASDH) with relevant brain swelling and preoperative CT signs such as effaced cisterns, overly proportional midline-shift compared to a relatively small acute subdural hematoma, and accompanying brain contusions as well as pupillary abnormalities, intraventricular hemorrhage, and coagulation disorder, primary decompressive craniectomy is more likely to be of benefit for patients with traumatic brain injury. The role of intracranial pressure monitoring after primary decompressive craniectomy is recommended, but prospective trials are pending. More refined guidelines and hopefully class I evidence will be established with the ongoing trials: randomized evaluation of surgery with craniectomy for patients undergoing evacuation of acute subdural hematoma (RESCUE-ASDH), prospective randomized evaluation of decompressive ipsilateral craniectomy for traumatic acute epidural hematoma (PREDICT-AEDH), and pragmatic explanatory continuum indicator summary (PRECIS). abstract_id: PUBMED:38233695 Long-term survival after primary decompressive craniectomy for severe traumatic brain injury: an observational study from 1 to 17 years. Primary decompressive craniectomy (DC) is carried out to prevent intracranial hypertension after removal of mass lesions resulting from traumatic brain injury (TBI). While primary DC can be a life-saving intervention, significant mortality risks persist during the follow-up period. This study was undertaken to investigate the long-term survival rate and ascertain the risk factors of mortality in TBI patients who underwent primary DC. We enrolled 162 head-injured patients undergoing primary DC in this retrospective study. The primary focus was on long-term mortality, which was monitored over a range of 12 to 209 months post-TBI. We compared the clinical parameters of survivors and non-survivors, and used a multivariate logistic regression model to adjust for independent risk factors of long-term mortality. For the TBI patients who survived the initial hospitalization period following surgery, the average duration of follow-up was 106.58 ± 65.45 months. The recorded long-term survival rate of all patients was 56.2% (91/162). Multivariate logistic regression analysis revealed that age (odds ratio, 95% confidence interval = 1.12, 1.07-1.18; p &lt; 0.01) and the status of basal cisterns (absent versus normal; odds ratio, 95% confidence interval = 9.32, 2.05-42.40; p &lt; 0.01) were the two independent risk factors linked to long-term mortality. In conclusion, this study indicated a survival rate of 56.2% for patients subjected to primary DC for TBI, with at least a one-year follow-up. 
Key risk factors associated with long-term mortality were advanced age and absent basal cisterns, critical considerations for developing effective TBI management strategies. abstract_id: PUBMED:24719566 Study of the long-term results of decompressive craniectomy after severe traumatic brain injury based on a series of 60 consecutive cases. Background: Decompressive craniectomy can be proposed in the management of severe traumatic brain injury. Current studies report mixed results, preventing any clear conclusions on the place of decompressive craniectomy in traumatology. Methods: The objective of this retrospective study was to evaluate the results of all decompressive craniectomies performed between 2005 and 2011 for refractory intracranial hypertension after severe traumatic brain injury. Sixty patients were included. Clinical parameters (Glasgow scale, pupillary examination) and radiological findings (Marshall CT scale) were analysed. Complications, clinical outcome, and early and long-term Glasgow Outcome Scale (GOS) were evaluated after surgery. Finally, the predictive value of preoperative parameters to guide the clinician's decision to perform craniectomy was studied. Results: Craniectomy was unilateral in 58 cases and the mean bone flap area was 100 cm². Surgical complications were observed in 6.7% of cases. Mean follow-up was 30 months and a favourable outcome was obtained in 50% of cases. The initial Glasgow Scale was the only statistically significant predictive factor for long-term outcome. Conclusion: Despite the discordant results in the literature, this study demonstrates that decompressive craniectomy is useful for the management of refractory intracranial hypertension after severe traumatic brain injury. Answer: The long-term results of decompressive craniectomy after severe traumatic brain injury (TBI) appear to justify the procedure in certain cases, although the outcomes can be variable and are influenced by multiple factors. One study reported that decompressive craniectomy provided good clinical results in nearly 40% of patients who were otherwise most likely to die, suggesting that the long-term results justify the use of decompressive craniectomy in their case series (PUBMED:18826356). Another study found that 50% of patients had a favorable outcome after decompressive craniectomy for refractory intracranial hypertension following severe TBI, with the initial Glasgow Scale being a statistically significant predictive factor for long-term outcome (PUBMED:24719566). However, it is important to note that decompressive craniectomy is not without significant complications. A retrospective analysis of 164 patients who underwent the procedure for severe head injury revealed that 55.5% had at least one complication after decompressive craniectomy, which was significantly associated with an increased risk of prolonged stay in the hospital or rehabilitation facility (PUBMED:21091342). Additionally, a study investigating long-term survival after primary decompressive craniectomy for TBI found a survival rate of 56.2% with at least a one-year follow-up, with advanced age and absent basal cisterns being key risk factors associated with long-term mortality (PUBMED:38233695).
The literature suggests that while decompressive craniectomy can be a life-saving intervention and may improve outcomes for some patients, the decision to perform the procedure should be carefully considered, taking into account the patient's age, neurological status, and other risk factors that may influence the likelihood of a favorable long-term outcome (PUBMED:23849902, PUBMED:36348855). In summary, while decompressive craniectomy can offer a chance of survival and potential recovery after severe TBI, the variability in outcomes and the risk of complications mean that it is not a universally applicable solution and should be evaluated on a case-by-case basis.
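The odds ratios quoted in this answer (for example, OR 2.54, 95% CI 1.22-5.24 from PUBMED:21091342, and the age and basal-cistern ORs from PUBMED:38233695) come from adjusted regression models in the original papers. As a rough illustration of how an odds ratio and its Woolf-type confidence interval are formed from a simple 2x2 table, here is a short sketch; the counts are hypothetical and not taken from the cited studies, so it only shows the shape of the calculation, not the published estimates.

```python
import math

# Hypothetical 2x2 counts (illustration only; not data from the cited papers):
#                      prolonged stay   not prolonged
# >=1 complication          a = 40          b = 41
# no complication           c = 25          d = 58
a, b, c, d = 40, 41, 25, 58

odds_ratio = (a * d) / (b * c)

# Woolf (log-scale) 95% confidence interval for the odds ratio.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```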
Instruction: Is supplying homeless individuals with permanent housing effective? Abstracts: abstract_id: PUBMED:17546533 Is supplying homeless individuals with permanent housing effective? A prospective three year study on the course of mental health problems. Objective: Description of the three-year course of mental illness after supplying permanent housing to homeless individuals. Methods: 109 male and 20 female homeless individuals were assessed at the assignment of permanent housing and at one- and three-year follow-up using the Structured Clinical Interview for DSM-IV. Results: A high percentage (86%) was able to maintain or even improve the index housing situation. Only minor changes were observed in mental illness severity and global functioning. Symptoms improved slightly over the three-year period. High degrees of alcohol consumption and mental illness severity increased the risk of deterioration of the housing arrangement. Conclusions: Supplying homeless individuals with permanent housing is an effective measure but insufficient for improving mental illness. Combined measures of social and medical interventions from one provider are suggested for effective support of homeless people. abstract_id: PUBMED:16445480 Intervention effects of supplying homeless individuals with permanent housing: a 3-year prospective study. Objective: To describe the intervention effects of supplying homeless individuals with permanent housing. Method: In a prospective study, 109 male and 20 female homeless individuals were assessed at baseline and at 1- and 3-year follow-up concerning mental illness (SCID-I), psychopathology, global assessment of functioning, emotional lability and alcohol consumption. Results: A high proportion (86%) of the individuals was able to maintain or improve stability of housing. Only minor changes were observed concerning mental illness and global functioning. Extensive alcohol consumption and high psychopathology increased the risk of losing the stable housing. Conclusion: The placement of homeless individuals in board and care homes or community housing after social counselling seems to be a necessary measure to remedy homelessness. However, supplying more permanent housing is not sufficient to decisively improve mental health status. abstract_id: PUBMED:31097899 Life Goals and Gender Differences among Chronically Homeless Individuals Entering Permanent Supportive Housing. This research seeks to understand goals and the gender differences in goals among men and women who are transitioning into permanent supportive housing. Because of systemic gender inequality, men and women experience homelessness differently. Data collected for this study come from a longitudinal investigation of HIV risk behavior and social networks among women and men transitioning from homelessness to permanent supportive housing. As part of this study, 421 baseline interviews were conducted in English with homeless adults scheduled to move into permanent supportive housing; participants were recruited between September 2014 and October 2015. This paper uses goals data from the 418 male- or female-identified respondents in this study.
abstract_id: PUBMED:24729832 Meeting the Housing and Care Needs of Older Homeless Adults: A Permanent Supportive Housing Program Targeting Homeless Elders. The homeless population is aging faster than the general population in the United States. As this vulnerable population continues to age, addressing complex care and housing needs will become increasingly important. This article reviews the often-overlooked issue of homelessness among older adults, including their poor health status and unique care needs, the factors that contribute to homelessness in this population, and the costs of homelessness to the U.S. health care system. Permanent supportive housing programs are presented as a potential solution to elder homelessness, and Hearth, an outreach and permanent supportive housing model in Boston, is described. Finally, specific policy changes are presented that could promote access to housing among the growing older homeless population. abstract_id: PUBMED:32853418 Exploring the Association of Community Integration in Mental Health among Formerly Homeless Individuals Living in Permanent Supportive Housing. Supportive housing has been widely used among persons experiencing chronic homelessness and/or mental health conditions. While it has been demonstrated to be effective in addressing homelessness among populations with complex needs, community integration remains a challenge. Community integration is the extent to which individuals live, participate, and socialize in their community and consists of three aspects: physical, social, and psychological. The study utilized data from the Transitions to Housing project that followed formerly homeless individuals (N = 383) throughout their first year of residence in permanent supportive housing (PSH). The study set out to examine which aspects of community integration are associated with mental health symptoms in this population. Five nested multivariate linear regression models were conducted and then compared. The model that accounted for demographics, substance use, neighborhood quality, and all three aspects of community integration simultaneously was the best fit and explained the most variance in mental health symptoms (24%). The complete model suggested higher levels of psychological integration were significantly associated with decreased mental health symptoms in this sample. This finding suggests fostering a sense of belonging among PSH residents could improve mental health outcomes. Implications for practice and future research are discussed. abstract_id: PUBMED:34713455 Nurse and case manager views on improving access and use of healthcare for adults living in permanent supportive housing. Housing is one of the social determinants of health, and homelessness is associated with health inequalities including increased morbidity and decreased life expectancy. Services to improve access to and use of primary healthcare are provided to formerly homeless individuals (hereafter residents) who live in permanent supportive housing (PSH). Residents do not always utilize services, nor receive adequate healthcare, and often have poor health outcomes. The study aims were to explore nurse and case manager (hereafter participants) views on the challenges of providing healthcare to residents, and strategies to address challenges. This descriptive, qualitative study used thematic analysis. Five nurses and eight case managers working with residents of PSH agencies were interviewed using semistructured interviews. Five main themes emerged. 
The first theme of context of healthcare use included how the residents' history of homelessness, trauma, and survival affected using services. The second theme was how aspects of relationships (communication issues and mistrust) were barriers to care. The third theme was how residents' health issues (physical chronic diseases, mental health, and substance dependency) affected care. Community level barriers (insurance, financial hardship, and transportation) was the fourth theme. The final theme highlighted recommendations to improve access and use of healthcare by building rapport, addressing mistrust, and using effective communication techniques. Participants noted that barriers to healthcare use were often influenced by residents' previous homeless experience. Nurses noted that chronic physical health issues were problematic for residents. Participants expressed the need to take time to form an authentic relationship to increase trust with residents. abstract_id: PUBMED:34561888 "Halfway Independent": Experiences of formerly homeless adults living in permanent supportive housing. Permanent supportive housing (PSH), which combines affordable public housing with social services, has become the dominant model in the United States for providing housing to formerly homeless people. PSH has been effective in reducing re-entry to homelessness, yet has shown limited evidence of improving formerly homeless individuals' mental health and quality of life. This study aimed to understand the lived experiences of formerly homeless adults' adjustment to tenancy in PSH, with a focus on how living in PSH has affected their meaningful activity and social engagement. Based on a phenomenological approach, a thematic analysis was conducted using semi-structured interviews with 17 individuals living in three PSH buildings in New York City. Results suggested that PSH was beneficial in fulfilling formerly homeless individual's basic needs and facilitating lifestyle improvements, yet many were dissatisfied with their living conditions and lacked meaningful activity, social integration, and community belongingness. These issues were found to develop in large part as a result of formerly homeless individuals' disharmonious relationships within the social context of PSH, consisting of staff members, other residents, and people in the surrounding community. The effects of the COVID-19 pandemic and implications for PSH social services are discussed. abstract_id: PUBMED:30971139 Developing Tobacco Control Interventions in Permanent Supportive Housing for Formerly Homeless Adults. Smoke-free policies are effective population-based strategies to reduce tobacco use yet are uncommon in permanent supportive housing (PSH) for formerly homeless individuals who have high rates of smoking. In this study, we partnered with six supportive housing agencies in the San Francisco Bay Area to examine the implementation of smoke-free policies and cessation services. We administered a questionnaire and conducted in-depth, semistructured interviews with agency directors (n = 6), property management staff (n = 23), and services staff (n = 24) from 23 PSH sites on the barriers to implementing tobacco control interventions. All properties restricted smoking in indoor shared areas, but only two had policies restricting smoking in living areas. 
While there was staff consensus that smoke-free policies were important to reduce tobacco-related harm, participants disagreed on whether smoke-free policies were aligned with PSH's harm reduction framework. Residents' comorbid mental illness and substance use and the lack of appropriate enforcement tools were barriers to implementation. Using these formative findings, we present a framework for a toolkit of strategies to increase implementation of smoke-free policies and cessation interventions in PSH. Successful implementation of indoor smoke-free policies in PSH will require concurrent cessation services to support smoking cessation efforts and address the mental health and substance use needs of residents. abstract_id: PUBMED:33053205 Change in housing environment and residential satisfaction following exit from permanent supportive housing. Moving On initiatives (MOIs) transition stable permanent supportive housing (PSH) residents into mainstream housing without embedded services. While this approach frees up PSH for homeless individuals in need, open questions remain regarding MOI recipients' long-term outcomes. This exploratory study examines how housing environment and residential satisfaction, potential predictors of housing retention, change from PSH to mainstream housing. Subjective assessments of housing and neighborhood quality and residential satisfaction, as well as objective neighborhood-level data, are used to examine housing-related change for New York City MOI recipients. Participants generally moved to less-distressed neighborhoods with lower poverty and crime. Subjective perceptions of some aspects of neighborhood and housing quality also improved post-move. Participants tended to move farther from public transportation but were on average located within one mile of the nearest subway station. Results can be taken as early indicators of the potential benefits of MOIs. abstract_id: PUBMED:26900305 The Dilemmas of Frontline Staff Working with the Homeless: Housing First, Discretion, and the Task Environment. This paper examines staff discretion in permanent supportive housing facilities run by a nonprofit agency claiming to use a housing-first approach. Field observation, archival data, and individual and group interviews with staff and clients were examined to better understand agency processes involved in intake, sanctions, and disposal of clients to evaluate housing-first fidelity. In their day-to-day interactions with clients, frontline workers' discretion is affected by working conditions such as lack of resources and heavy workloads, as well as by demands placed on the agency by members of its task environment. Implications for housing-first programs and homeless clients are discussed. Answer: Supplying homeless individuals with permanent housing has been shown to be effective in certain aspects, particularly in maintaining or improving housing stability. A high percentage (86%) of homeless individuals were able to maintain or even improve their housing situation after being provided with permanent housing (PUBMED:17546533; PUBMED:16445480). However, while permanent housing can address the immediate need for shelter, it is not sufficient on its own to decisively improve mental health status or overall well-being. The studies indicate that only minor changes were observed in the severity of mental illness and global functioning after homeless individuals were supplied with permanent housing (PUBMED:17546533; PUBMED:16445480).
Factors such as extensive alcohol consumption and high psychopathology increased the risk of losing stable housing, suggesting that additional support services are necessary to address these issues (PUBMED:16445480). Moreover, community integration, which includes physical, social, and psychological aspects, remains a challenge for formerly homeless individuals living in permanent supportive housing (PSH). Higher levels of psychological integration were associated with decreased mental health symptoms, indicating that fostering a sense of belonging could improve mental health outcomes (PUBMED:32853418). Additionally, the experiences of formerly homeless adults in PSH revealed dissatisfaction with living conditions and a lack of meaningful activity, social integration, and community belongingness, which were influenced by disharmonious relationships within the PSH social context (PUBMED:34561888). In conclusion, while supplying permanent housing to homeless individuals is an effective measure for housing stability, it is not a comprehensive solution for improving mental health or overall quality of life. To be more effective, permanent housing initiatives should be combined with social and medical interventions, as well as efforts to foster community integration and address individual needs such as mental health support, substance abuse treatment, and social services (PUBMED:17546533; PUBMED:16445480; PUBMED:32853418; PUBMED:34561888).
Instruction: Distal pancreatectomy for body-tail pancreatic cancer: is there a role for celiac axis resection? Abstracts: abstract_id: PUBMED:29302421 Robotic distal pancreatectomy combined with celiac axis resection. A subset of pancreatic body and tail cancers present with locally advanced disease due to involvement of the celiac axis. Previously considered unresectable, these T4 tumors may be extirpated with a distal pancreatectomy and en bloc resection of the celiac trunk in carefully selected patients. In the setting of multimodality treatment, these resections can yield survival similar to resectable and borderline resectable lesions. Robotic surgery has been shown to be safe and feasible in complex pancreatic resections. This article summarizes our patient selection criteria and operative approach to robotic distal pancreatectomy with celiac axis resection (DP-CAR) for locally advanced body and tail tumors of the pancreas. abstract_id: PUBMED:30631817 Celiac Axis Resection with Distal Pancreatectomy (Modified Appleby Procedure) Allows for R0 Resection of Pancreatic Body and Tail Mass Following Neoadjuvant Therapy: Case Report and Literature Review. Background: The modified Appleby procedure has been developed for cancer of the pancreatic body or tail with celiac axis invasion, historically classified as unresectable disease. Post-Appleby resection, the source of arterial blood to the liver is the superior mesenteric artery, which supplies the gastroduodenal artery and ultimately feeds the proper hepatic artery. In cases of inadequate collateralization, preoperative coiling of the common hepatic artery (CHA) or intraoperative reconstruction via an aorto-hepatic bypass has been described. Method: We describe a 74-year-old female with a pancreatic mass that was initially determined to be unresectable. She underwent extensive combination neoadjuvant chemotherapy. A favorable response was evidenced by a decrease in serum CA 19-9 levels. After 7 months, she was restaged and offered a distal pancreatectomy (DP) with the possibility of a modified Appleby procedure due to potential tumor involvement of the proximal CHA. Results: Intraoperatively, tumor was identified along the CHA traveling proximally to the celiac axis. Therefore, a modified Appleby procedure with DP and splenectomy was performed without the need for reconstruction of the CHA. Postoperative specimen pathology showed residual pancreatic ductal adenocarcinoma with marked treatment effects. The pathology confirmed an R0 resection. The patient followed our postpancreatic surgery care pathway. She remains well 7 months postoperatively. Conclusion: A pancreatic body or tail mass encasing the celiac vessels should not be an immediate referral for palliative care. Recent evidence shows that successful R0 resection can be achieved following neoadjuvant therapy. In fact, patients who have undergone a successful modified Appleby procedure show survival outcomes similar to patients with less advanced cancer who underwent standard DP. The modified Appleby procedure used in conjunction with neoadjuvant therapy can achieve complete resection in select patients previously thought to be unresectable. abstract_id: PUBMED:28116666 Distal Pancreatectomy with Celiac Axis Resection Combined with Reconstruction of the Left Gastric Artery. Distal pancreatectomy with celiac axis resection is one of the most aggressive approaches for the treatment of locally advanced pancreatic cancer with common hepatic artery and/or celiac axis invasion. 
However, ischemic complications such as ischemic gastropathy and liver failure are problematic. To avoid these complications, we developed left gastric artery-reconstructing distal pancreatectomy with celiac axis resection. We used the middle colic artery for reconstruction. We performed this procedure in 10 patients, using the middle colic artery in three different ways: left branch reconstruction, right branch reconstruction, and reverse reconstruction. On postoperative images, 90% of the reconstructed left gastric arteries were patent. No complications associated with arterial reconstruction occurred. No patients developed ischemic gastropathy or liver failure. The R0 resection rate was 70%. Nine patients underwent adjuvant chemotherapy and seven patients were able to start it within 90 days. Distal pancreatectomy with celiac axis resection combined with reconstruction of the left gastric artery using the middle colic artery is a feasible option and would enhance the safety for carefully selected patients. Multicenter validation is needed to clarify the benefits of this new procedure. abstract_id: PUBMED:35245736 Retroperitoneal-first laparoscopic approach (Retlap)-assisted distal pancreatectomy with celiac axis resection (DP-CAR): A novel minimally invasive approach for achieving adequate dorsal surgical margin. Background: Distal pancreatectomy with celiac axis resection (DP-CAR) is a procedure to secure a surgical margin for a locally advanced pancreatic body cancer that invades the celiac axis. However, in patients with cancer close to the root of the celiac axis, obtaining adequate surgical margins can be difficult because the tumor obstructs the field of vision to the root of the celiac axis. Previously, we described the retroperitoneal-first laparoscopic approach (Retlap) to achieve both accurate evaluation of resectability for locally advanced pancreatic cancer requiring DP-CAR [1] and adequate surgical margin for laparoscopic distal pancreatectomy [2]. In this video, we introduce Retlap-assisted DP-CAR as a minimally invasive approach for performing an artery-first pancreatectomy [3, 4] and achieving sufficient dorsal surgical margin (Fig. 1). Methods: Our patient is a 67-year-old man with a 55 × 29-mm pancreatic body tumor after chemotherapy. Preoperative computed tomography revealed a tumor close to the root of the celiac axis. Because the area of tumor invasion on preoperative images was near the root of the celiac artery, Retlap-assisted DP-CAR was performed to determine whether the celiac axis can be secured and obtain an adequate dorsal surgical margin (Fig. 2). Results: The operative time and estimated blood loss was 715 min and 449 mL, respectively. In spite of the advanced tumor's location and size, R0 resection was achieved in a minimally invasive way. Conclusion: Retlap-assisted DP-CAR is not only technically feasible and useful for achieving accurate evaluation of resectability but also facilitates obtaining an adequate surgical margin. abstract_id: PUBMED:34187724 Distal pancreatectomy with En bloc celiac axis resection for locally advanced pancreatic body/tail cancer: A systematic review and meta-analysis. Distal pancreatectomy with En-bloc celiac axis resection (DP-CAR) is a challenging procedure that has yielded certain clinical efficacy in the treatment of locally advanced pancreatic body/tail cancer, especially in patients with invasion of abdominal vessels. However, the clinical efficacy and safety of DP-CAR remain controversial. 
The study aimed to systematically review the efficacy and safety of DP-CAR in the treatment of locally advanced pancreatic body/tail cancer. We systematically searched PubMed, EMBASE, Cochrane Library, and Web of Science databases from inception to 1 October 2020. Two reviewers independently performed the study selection, data extraction, and quality assessment. Initially, 1032 studies were identified, among which 11 high-quality studies including 1072 patients were finally included. The pooled results showed that, for DP-CAR versus distal pancreatectomy (DP), the rates of R0 resection (RR = 0.76; 95%CI: 0.66 to 0.88; p = 0.0002) and 3-year survival (RR = 0.65; 95%CI: 0.43 to 0.98; p = 0.04) were lower, postoperative mortality (RR = 2.48; 95%CI: 1.02 to 6.03; p = 0.04) was higher, and the operation time (MD = 104.67; 95%CI: 84.70 to 124.64; p < 0.001) and hospital stay (MD = 3.94, 95% CI 1.35 to 6.53; p = 0.003) were longer. There was no statistically significant difference between the DP-CAR and DP groups in 1-year (RR = 0.84; 95%CI: 0.57 to 1.23; p = 0.37) or 2-year (RR = 0.70; 95%CI: 0.45 to 1.10; p = 0.12) survival rates. In conclusion, compared with DP, DP-CAR has worse efficacy and survival prognosis and carries higher perioperative risk, but it can offer better survival benefit and quality of life than palliative treatment. We suggest that DP-CAR can be carefully attempted for effective margin-negative resection. However, surgeons and patients need to know its potential perioperative risk. abstract_id: PUBMED:35352239 Long-term survival after distal pancreatectomy with celiac axis resection and hepatic artery reconstruction in the setting of locally advanced unresectable pancreatic cancer. The long-term survival of patients with locally advanced, unresectable pancreatic cancer is extremely poor. We present our experience with a 67-year-old woman who had a 40-mm mass in the body of the pancreas. Tumor infiltration reached the gastroduodenal artery, celiac artery, common hepatic artery, and splenic artery. After 10 courses of FOLFIRINOX, 2 courses of gemcitabine plus nab-paclitaxel, and 6 courses of gemcitabine alone, we performed distal pancreatectomy with celiac axis resection and hepatic artery reconstruction. The bifurcation of the gastroduodenal artery and the proper hepatic artery had to be resected, after which we created 2 anastomoses: proper hepatic-to-middle colic artery, and second jejunal-to-right gastroepiploic artery. Histopathologic examination revealed an Evans grade IIb histologic response to prior treatment and verified the R0 resection status. The patient was discharged on postoperative day 30 after treatment of a grade B pancreatic fistula and is still alive, without recurrence, more than 5 years after initiation of treatment. This patient with locally advanced, unresectable pancreatic cancer achieved long-term survival through perioperative multidisciplinary treatment, including distal pancreatectomy with celiac axis resection and hepatic artery reconstruction. This aggressive procedure could be a treatment option for patients with locally advanced, unresectable pancreatic cancer. abstract_id: PUBMED:34697205 Distal Pancreatectomy With Celiac Axis Resection for Locally Advanced Pancreatic Body Cancer - A Case Report and Literature Review. Background: Locally advanced pancreatic cancer invading the surrounding vascular structures has long been considered as unresectable and, therefore, patients were usually submitted to palliative chemotherapy.
Case Report: We present the case of a 44-year-old male investigated for weight loss and abdominal pain and diagnosed with a locally advanced pancreatic tumor invading the celiac axis. An endoscopic ultrasound was performed and a biopsy was retrieved demonstrating the presence of a moderately differentiated pancreatic adenocarcinoma. After discussing with the patient the risks and the benefits of performing an extended surgical procedure, the patient consented to distal pancreatectomy en bloc with celiac axis resection. Postoperatively, the patient was submitted to low-molecular-weight heparin therapy for 3 weeks followed by oral anticoagulant for 2 months. Histopathological studies confirmed the presence of a moderately differentiated pancreatic adenocarcinoma invading the celiac axis and described negative resection margins. Conclusion: Although celiac axis invasion has been considered for a long period of time as a sign of unresectable disease due to the high rates of perioperative complications, it seems that in selected cases, surgery can be safely performed with curative intent, especially if negative resection margins are achieved. abstract_id: PUBMED:33086168 Distal pancreatectomy with celiac axis resection (DP-CAR): Optimal perioperative outcome in a patient with locally advanced pancreas adenocarcinoma. Introduction: Distal pancreatectomy with en bloc celiac axis resection (DP-CAR) is an operation technically demanding, uncommonly performed, even in high-volume pancreatic centers, which may offer a curative resection in patients with locally advanced cancer of the body of the pancreas, otherwise considered unresectable. Presentation Of Case: We present, in clinical and technical detail, a patient with DP-CAR with a very good intraoperative and postoperative course, no complications, short hospital stay, and histology consistent with a curative resection. Discussion: Because of the scarcity of DP-CAR, even high-volume individual centers have been able to gather relatively limited experience, and only in a time frame of more than a decade each. Conclusion: DP-CAR can be curative for a minority of patients with pancreatic adenocarcinoma and is performed only in centers with a long, dedicated interest in advanced pancreatic surgery with a well-known track record in resection of borderline and locally advanced pancreatic cancer involving major peripancreatic veins. abstract_id: PUBMED:36718050 Accurate intraoperative real-time blood flow assessment of the remnant stomach during robot-assisted distal pancreatectomy with celiac axis resection using indocyanine green fluorescence imaging and da Vinci Firefly technology. Introduction: Ischemic gastropathy is one of the unique postoperative complications associated with distal pancreatectomy with celiac axis resection for locally advanced pancreatic cancer. Therefore, it is essential to evaluate blood flow to the stomach following a resection; however, no intraoperative procedures have been established to assess this issue. Herein we describe two cases in which intraoperative evaluation of real-time blood flow in the residual stomach was performed using indocyanine green fluorescence and da Vinci Firefly technology during a robot-assisted distal pancreatectomy with celiac axis resection. Methods: Robot-assisted distal pancreatectomy with celiac axis resection was performed using a da Vinci Xi surgical system on two patients with locally advanced pancreatic cancer and suspected invasion of the celiac artery. 
Indocyanine green (ICG) (0.5 mg/kg) was injected intravenously after resection to evaluate real-time blood flow of the stomach using the da Vinci Firefly system. Blood flow of the stomach was evaluated 60 seconds after the intravenous injection of ICG. Results: In all cases, sufficient blood flow in the residual stomach was confirmed. Therefore, reconstruction of the left gastric artery was not performed, and the surgery was completed with preservation of the stomach. Good postoperative outcomes were achieved and there was no evidence of ischemic gastropathy or delayed gastric emptying in either case. Conclusion: This method is very useful in determining whether or not to perform reconstruction of the left gastric artery and/or additional resection of the remnant stomach during a robot-assisted distal pancreatectomy with celiac axis resection. abstract_id: PUBMED:33526385 Survival impact of distal pancreatectomy with en bloc celiac axis resection combined with neoadjuvant chemotherapy for borderline resectable or locally advanced pancreatic body carcinoma. Background: The survival benefit associated with distal pancreatectomy with en bloc celiac axis resection (DP-CAR) for patients with borderline resectable or locally advanced pancreatic body carcinoma is controversial. The aim of this study was to evaluate the impact of DP-CAR following neoadjuvant chemotherapy on survival in patients with borderline resectable or locally advanced pancreatic body carcinoma. Methods: Medical records of patients with pancreatic ductal adenocarcinoma who underwent distal pancreatectomy (DP, n = 102) and DP-CAR following neoadjuvant chemotherapy (n = 32) between 2008 and 2019 were analyzed retrospectively. Short- and long-term outcomes were compared between the two groups. Results: All patients who underwent DP-CAR had tumor contact with the celiac axis. Of these, 30 patients underwent preoperative embolization of the common hepatic artery. The pretreatment tumor size of patients who underwent DP-CAR was larger (P < 0.001), and rates of blood transfusion (P = 0.003) and postoperative complications (P = 0.016) were higher in patients who underwent DP-CAR compared with patients who underwent DP. The 5-year survival rates of patients who underwent DP and DP-CAR were 50.6% and 41.1%, respectively (median survival time, 65.9 vs 37.0 months). For all 134 patients, pretreatment serum CA19-9 levels (P < 0.001), adjuvant chemotherapy (P < 0.001), and lymph node status (P = 0.035) were independent prognostic factors of overall survival by multivariate analysis. Conclusions: DP-CAR following neoadjuvant chemotherapy for patients with borderline resectable or locally advanced pancreatic body carcinoma may bring the same survival impact as DP, despite increased morbidity. Answer: Yes, there is a role for celiac axis resection in the treatment of body and tail pancreatic cancer. Distal pancreatectomy combined with celiac axis resection (DP-CAR) is a surgical approach that can be considered for patients with locally advanced pancreatic body and tail cancers that involve the celiac axis. This subset of tumors, previously deemed unresectable due to their T4 classification, may be removed with DP-CAR in carefully selected patients, especially when combined with multimodality treatment, potentially yielding survival rates similar to those of resectable and borderline resectable lesions (PUBMED:29302421).
The modified Appleby procedure, which includes DP-CAR, has been shown to allow for R0 resection (complete removal of the tumor with no cancer cells at the margins of the removed tissue) of pancreatic body and tail masses following neoadjuvant therapy (treatment given before the main treatment). This procedure can result in survival outcomes comparable to those of patients with less advanced cancer who underwent standard distal pancreatectomy (DP) (PUBMED:30631817). Techniques such as the retroperitoneal-first laparoscopic approach (Retlap)-assisted DP-CAR have been developed to achieve adequate dorsal surgical margins in a minimally invasive manner, even in patients with advanced tumors located close to the root of the celiac axis (PUBMED:35245736). A systematic review and meta-analysis have indicated that DP-CAR can provide better survival benefits and quality of life compared to palliative treatment, although it is associated with a lower rate of R0 resection and 3-year survival, higher postoperative mortality, and longer operation time and hospital stay compared to standard DP (PUBMED:34187724). Furthermore, long-term survival has been reported in patients with locally advanced, unresectable pancreatic cancer who underwent DP-CAR with hepatic artery reconstruction, suggesting that this aggressive procedure could be a viable treatment option in the setting of multidisciplinary care (PUBMED:35352239). In conclusion, DP-CAR is a feasible and potentially curative option for selected patients with locally advanced pancreatic body and tail cancer, particularly when combined with neoadjuvant therapy and careful patient selection.
Instruction: Braden Scale cumulative score versus subscale scores: are we missing opportunities for pressure ulcer prevention? Abstracts: abstract_id: PUBMED:24378697 Braden Scale cumulative score versus subscale scores: are we missing opportunities for pressure ulcer prevention? Background: Hospital-acquired pressure ulcer incidence rates continue to rise in the United States in the acute care setting despite efforts to extinguish them, and pressure ulcers are a nursing-sensitive quality indicator. The Braden Scale for Predicting Pressure Sore Risk instrument has been shown to be a valid and reliable instrument for assessing pressure ulcer risk. Case: This case study represented 1 patient out of a chart audit that reviewed 20 patients with confirmed hospital-acquired pressure ulcers. The goal of the audit was to determine whether these ulcers might be avoided if preventive interventions based on Braden subscale scores versus the cumulative score were implemented. Conclusion: This case study describes a patient who, deemed at low risk for pressure ulcer development based on cumulative Braden Scale, may have benefited from interventions based on the subscale scores of sensory perception, activity, and mobility. Further research is needed to determine whether interventions based on subscales may be effective for preventing pressure ulcers when compared to a protocol based exclusively on the cumulative score. abstract_id: PUBMED:36421654 Nursing Assessment of Pressure Injury Risk with the Braden Scale Validated against Sensor-Based Measurement of Movement. Nursing staff assessment to accurately identify pressure injury (PrI) risk is a hallmark in PrI prevention care. Risk scores from the Braden Scale for Predicting Pressure Sore Risk© (hereafter Braden), a commonly used tool for assessing PrI risk, signal the need for preventative care. Braden Mobility, Activity, and Sensory Perception subscale subgroups associated with repositioning movement features help identify preventative strategies that minimize pressure intensity and duration. Evidence confirming subscale rating accuracy is needed. This study compared assessment score accuracy with movement data collected via accelerometer sensor. Sample included 913 nursing home residents from the Turn Everyone and Move for Pressure Ulcer Prevention (TEAM-UP) cluster randomized trial. Movements and Braden Mobility and Activity subscale scores were evaluated for significant differences and associations. Mobility subgroups explained a small-medium amount of variance in mean lying and upright movement features (0.002 ≤ R2 ≤ 0.195). Activity subgroups explained a small-medium amount of variance in mean lying, upright, and ambulating movements (0.016 ≤ R2 ≤ 0.248). Significant associations occurred among subscale subgroups and most movements. Nursing assessment ratings using Braden scale's Mobility and Activity subscale scores are accurate indicators of actual repositioning movements and can be relied upon for PrI prevention care planning for older adults. abstract_id: PUBMED:27417802 Utility of Braden Scale Nutrition Subscale Ratings as an Indicator of Dietary Intake and Weight Outcomes among Nursing Home Residents at Risk for Pressure Ulcers. The Braden Scale for Pressure Sore Risk(©) is a screening tool to determine overall risk of pressure ulcer development and estimate severity of specific risk factors for individual residents. 
Nurses often use the Braden nutrition subscale to screen nursing home (NH) residents for nutritional risk, and then recommend a more comprehensive nutritional assessment as indicated. Secondary data analysis from the Turn for Ulcer ReductioN (TURN) study's investigation of U.S. and Canadian NH residents (n = 690) considered at moderate or high pressure ulcer (PrU) risk was used to evaluate the subscale's utility for identifying nutritional intake risk factors. Associations were examined between Braden Nutritional Risk subscale screening, dietary intake (mean % meal intake and by meal timing, mean number of protein servings, protein sources, % intake of supplements and snacks), weight outcomes, and new PrU incidence. Of moderate and high PrU risk residents, 61.9% and 59.2% ate a mean meal % of <75. Fewer than 18% overall ate <50% of meals or refused meals. No significant differences were observed in weight differences by nutrition subscale risk or in mean number of protein servings per meal (1.4 (SD = 0.58) versus 1.3 (SD = 0.53)) for moderate versus high PrU risk residents. The nutrition subscale approximates subsequent estimated dietary intake and can provide insight into meal intake patterns for those at either moderate or high PrU risk. Findings support the Braden Scale's use as a preliminary screening method to identify focused areas for potential intervention. abstract_id: PUBMED:25377103 Use of the Braden Scale for pressure ulcer risk assessment in a community hospital setting: the role of total score and individual subscale scores in triggering preventive interventions. Purpose: To determine whether pressure ulcer preventive interventions are implemented when a total Braden Scale score reflects that the patient is at risk. Design: A retrospective chart review was completed for 20 patients with confirmed hospital-acquired pressure ulcers (HAPUs). Subjects And Setting: A convenience sample of 20 patients with HAPUs confirmed by a certified wound nurse was systematically selected from 63 charts. The study setting was a 200-bed acute care facility in the Midwestern United States. Methods: A retrospective review of 20 patient charts was conducted. Data collected included daily Braden Scale scores and subscale scores, along with pressure ulcer preventive intervention implementation for at-risk (cumulative Braden Scale scores ≤ 18) and not-at-risk (cumulative Braden Scale scores > 18) days. Data were collected both before and after pressure ulcer occurrence. The occurrence of preventive interventions was compared between at-risk and not-at-risk patient days. Results: Nineteen percent of not-at-risk patient days were found to have lower subscale scores, indicating a need for focused preventive interventions. The day before an HAPU occurred, the mean Braden Scale score was 13.7 ± 2.8 (mean ± SD) for those who were provided an intervention and 18.5 ± 2.3 for those not provided an intervention (t = 3.89, P = .001). Sixty-three percent of at-risk patients received some intervention the day before an HAPU occurred, while 20% of not-at-risk patients received some intervention. Conclusions: Routine use of a pressure ulcer risk assessment tool is considered necessary for a comprehensive pressure ulcer prevention program. Planning preventive care according to the subscale scores of the Braden Scale may be more effective for prevention of HAPUs in some cases.
abstract_id: PUBMED:34394449 Assessing pressure injury risk using a single mobility scale in hospitalised patients: a comparative study using case-control design. Background: Pressure injury is known to cause not only debilitating physical effects, but also substantial psychological and financial burdens. A variety of pressure injury risk assessment tools are in use worldwide, which include a number of factors. Evidence now suggests that assessment of a single factor, mobility, may be a viable alternative for assessing pressure injury risk. Aims: The aim of this study was to ascertain whether using the Braden mobility subscale alone is comparable to the full Braden scale for predicting the development of pressure injury. Methods: This study, a retrospective case-control design, was conducted in a large tertiary acute care hospital in Singapore. Medical records of 100 patients with hospital-acquired pressure injury were matched with 100 medical records of patients who had no pressure injury at a 1:1 ratio. Results: Patients who were assessed using the Braden mobility subscale as having 'very limited mobility' or worse were 5.23 (95% confidence interval (CI) 2.66-10.20) times more likely to develop pressure injury compared with those assessed as having 'slightly limited' mobility or 'no limitation'. Conversely, patients assessed using the Braden scale as having 'low risk' or higher were 3.35 (95% CI 1.77-6.33) times more likely to develop pressure injury compared with those assessed as 'no risk'. Using full model logistic regression analysis, the Braden mobility subscale was the only factor that was a significant predictor of pressure injury and it remained significant when analysed for the most parsimonious model using backward logistic regression. Conclusions: These findings provide the empirical evidence that using the Braden mobility subscale alone as an assessment tool for predicting pressure injury development is comparable to using the full Braden scale. Use of this single factor would simplify pressure injury risk assessment and support its use within busy clinical settings. abstract_id: PUBMED:30734477 Using the Braden subscales to assess risk of pressure injuries in adult patients: A retrospective case-control study. The aim of this study was to compare the pressure injury risk predictability between the individual Braden subscales and the total Braden scale in adult inpatients in Singapore. A retrospective 1:1 case-control design was used from a sample of 199 patient medical records. Clinical data were collected from a local university hospital's medical records database. The results showed that, among the six subscales, the activity subscale was the most sensitive and specific in predicting pressure injury (PI). However, the overall results showed that the Braden scale remained the most predictive of PI development in comparison with the individual subscales. The study also found that, among the Singaporean patients, the Braden cut-off score for PI risk was 17 compared with the current cut-off score of 18. Therefore, it may be relevant for local tertiary hospitals to review their respective Braden cut-off scores as the study results indicate an over-prediction of PI risk, which leads to unnecessary utilisation of resources. The hospital may also consider developing a PI prevention bundle comprising commonly used preventive interventions when at least one Braden subscale reflects a suboptimal score. 
abstract_id: PUBMED:28096013 Braden scale (ALB) for assessing pressure ulcer risk in hospital patients: A validity and reliability study. Purpose: The inter-rater reliability of the Braden Scale is suboptimal. We developed the modified Braden(ALB) scale by defining the nutrition subscale based on serum albumin, then assessed its validity and reliability in hospital patients. Methods: We designed a retrospective study for validity analysis, and a prospective study for reliability analysis. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to evaluate the predictive validity. The intra-class correlation coefficient (ICC) was used to investigate the inter-rater reliability. Results: Two thousand five hundred twenty-five patients were included for validity analysis, and 76 patients (3.0%) developed a pressure ulcer. A positive correlation was found between serum albumin and the nutrition score in the Braden scale (Spearman's coefficient 0.2203, P<0.0001). The AUCs for the Braden scale and Braden(ALB) scale predicting pressure ulcer risk were 0.813 (95% CI 0.797-0.828; P<0.0001), and 0.859 (95% CI 0.845-0.872; P<0.0001), respectively. The Braden(ALB) scale was even more valid than the Braden scale (z=1.860, P=0.0628). In different age subgroups, the Braden(ALB) scale also seems more valid than the original Braden scale, but no statistically significant differences were found (P>0.05). The inter-rater reliability study showed the ICC-value for nutrition increased 45.9%, and increased 4.3% for total score. Conclusion: The Braden(ALB) scale has similar validity compared with the original Braden scale in hospital patients. However, the inter-rater reliability was significantly increased. abstract_id: PUBMED:33614937 Evaluation of the Braden scale in predicting surgical outcomes in older patients undergoing major head and neck surgery.
abstract_id: PUBMED:15385871 Validation of the mobility subscale of the Braden Scale for predicting pressure sore risk. Background: The Braden Scale for Predicting Pressure Sore Risk has been tested extensively for reliability and validity, but the validity of each subscale has not been evaluated. Because subscale scores are intended to guide patient care decisions, validity is an important issue. Objective: : To establish the convergent construct validity of the mobility subscale of the Braden Scale. Methods: The study evaluated 16 members at a veterans' home (4 members representing each score on the mobility subscale). Movement, as recorded by a Motionlogger Actigraph, a wristwatch-sized accelerometer and microprocessor that measures physical movement (activity), was measured continuously. Each person wore an Actigraph on the nondominant ankle for 72 hours. Results: The mean activity for each of the four subscale score groups was plotted, producing a histogram in which higher scores were associated with greater activity (F[3, 15] = 31.69;p &lt;.001, one-way analysis of variance), as expected. Pair wise multiple comparisons between groups showed that only the subgroup with a score of 4 was significantly different in mean activity (p &lt;.001) from the other three score groups. Conclusions: Convergent construct validity for the Braden mobility subscale was supported. A larger sample and establishment of a threshold to eliminate minor, ineffective movements from Motionlogger Actigraph measures may distinguish between significant and nonsignificant movement. abstract_id: PUBMED:29078875 Validity of the Braden Scale in grading pressure ulcers in trauma and burn patients. Background: Pressure ulcers are a costly hospital-acquired condition in terms of clinical outcome and expense. The Braden Scale was developed in 1987 as a risk scoring method for pressure ulcers and uses six different risk factors: sensory perception, moisture, activity, mobility, nutrition, and friction and shear. A score of ≤18 is considered high risk. To date, research on the utility of the Braden Scale has focused on general medicine and nontrauma/burn surgery patients. We hypothesize that the Braden Scale does not accurately discriminate who will get a pressure ulcer among trauma and burn patients. Methods: We collected data from medical records regarding documented Braden scores and presence of pressure ulcers regardless of staging. Patients with ulcers present on admission were excluded from analysis. For each patient, the lowest Braden score documented before the occurrence of the pressure ulcer was determined. A logistic regression was used to estimate odds ratios and associated 95% confidence intervals for the association between pressure ulcer likelihood and lowest Braden Scale measurement. To determine the discriminatory ability of the Braden Scale on pressure ulcer risk, four measures of performance (i.e., sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio) were calculated for four nonmutually exclusive groups: a Braden Scale measurement ≤18, ≤14, ≤12, and ≤9. Results: From 2011 through 2014, a total of 2660 patients were admitted to the trauma/burn intensive care unit. Of these patients, 63 (2.3%) subsequently developed a pressure ulcer. A Braden Scale of ≤18 as the threshold for being at-risk of pressure ulcer had a sensitivity of 100% and specificity of 6%, whereas a Braden Scale of ≤9 had a sensitivity of 28.6% and a specificity of 90%. 
For all Braden Scale measurements, the positive likelihood ratio never reached the value of 10 that suggests high likelihood of an ulcer. Conclusions: The Braden scale has mediocre discriminatory ability among the trauma/burn population. In addition, the low positive likelihood ratio suggests that the Braden scale may not be a useful clinical tool as it may result in unnecessary expenditure of time and personnel resources in preventing pressure ulcer formation. Answer: The Braden Scale is a widely used tool for assessing pressure ulcer risk, but there is a growing body of evidence suggesting that relying solely on the cumulative score may lead to missed opportunities for pressure ulcer prevention. A case study highlighted in one of the abstracts suggests that a patient deemed at low risk for pressure ulcer development based on the cumulative Braden Scale score may have benefited from interventions based on the subscale scores of sensory perception, activity, and mobility (PUBMED:24378697). This indicates that interventions based on subscales might be effective for preventing pressure ulcers when compared to a protocol based exclusively on the cumulative score. Further research supports the accuracy of Braden Scale's Mobility and Activity subscale scores as indicators of actual repositioning movements, which are crucial for pressure injury (PrI) prevention care planning for older adults (PUBMED:36421654). Additionally, the Braden Scale Nutrition Subscale Ratings have been shown to approximate subsequent estimated dietary intake and can provide insight into meal intake patterns for those at either moderate or high pressure ulcer risk, supporting the scale's use as a preliminary screening method (PUBMED:27417802). A retrospective chart review also found that 19% of not-at-risk patient days had lower subscale scores, indicating a need for focused preventive interventions, and that planning preventive care according to the subscale scores may be more effective for the prevention of hospital-acquired pressure ulcers (HAPUs) in some cases (PUBMED:25377103). Moreover, a study comparing the Braden mobility subscale alone to the full Braden scale for predicting the development of pressure injury found that the mobility subscale alone was a significant predictor of pressure injury, suggesting that using this single factor could simplify risk assessment in clinical settings (PUBMED:34394449). However, another study found that while the activity subscale was the most sensitive and specific in predicting pressure injury, the overall Braden scale remained the most predictive of pressure injury development compared to individual subscales (PUBMED:33614937). Additionally, a modified Braden Scale that defined the nutrition subscale based on serum albumin showed increased validity and significantly improved inter-rater reliability (PUBMED:28096013). In summary, while the cumulative score of the Braden Scale is a valuable tool for assessing pressure ulcer risk, there is evidence to suggest that individual subscale scores may provide additional, valuable insights that could lead to more targeted and effective preventive interventions. Therefore, healthcare providers may need to consider both the cumulative score and the subscale scores to optimize pressure ulcer prevention strategies.
Instruction: Is a doctor needed in the adult ENT pre-admission clinic? Abstracts: abstract_id: PUBMED:15550187 Is a doctor needed in the adult ENT pre-admission clinic? Objectives: Pre-admission clinics are traditionally run jointly by nurses and doctors. Within an adult ENT pre-admission clinic, we wished to assess what doctors added to nurses' pre-clerking, to determine whether doctors were actually needed in the clinic. Methods: Prospective study, looking at how often doctors, seeing patients after ward-based nurses, changed or added to clerking or tests as organized by nurses. Results: Out of 184 patients, doctors changed or added to nurses' clerking or planned investigations in 47 patients (26 per cent), making 64 different changes. The commonest reasons for changes were ordering blood tests (22 changes), chest X-rays (eight), cancelling due to hypertension (seven), altering drug history (five) and requesting electrocardiograms (five changes). Conclusion: Most changes made by doctors could be eliminated by designing a pre-admission clinic protocol that could easily be used by nurses. We recommend that all ENT departments consider implementing nurse-led pre-admission clinics. abstract_id: PUBMED:38370987 Evaluating the Appropriateness of ENT Emergency Clinic Referrals to Enhance the Quality of Healthcare Provision in the National Health Service (NHS). Background Ear, Nose, and Throat (ENT) services in the National Health Service (NHS) face escalating pressure, exacerbated by the COVID-19 pandemic, resulting in prolonged waiting times and increased referrals. Understanding the factors driving pressure on ENT services is crucial for enhancing patient care and resource allocation. Methods A retrospective single-centre cohort study was conducted at Queen's Medical Centre, Nottingham, UK, over five weeks. A total of 156 referrals to the ENT Emergency Clinic (E-Clinic) were analyzed, assessing the appropriateness of referrals and healthcare professionals' involvement in reviewing cases. Results The analysis revealed 28 distinct case categories, with certain conditions being predominant in specific reviews (e.g., otitis externa, nasal fractures, epistaxis). Notably, 21.8% of cases were deemed unsuitable or inappropriate for E-Clinic assessment. Strategic restructuring was suggested, distributing cases among healthcare professionals based on expertise and complexity. Discussion The findings underscore the need for a refined referral process and appropriate allocation of cases, emphasising the importance of nurse-led reviews for certain conditions and the necessity for senior review in complex cases. Improving the primary-secondary care interface and educating healthcare professionals on appropriate referrals are crucial for refining the system. Conclusion Optimising the quality of referrals and allocation of cases within ENT E-Clinics can alleviate workload pressures and enhance patient care. Strategic distribution of cases based on expertise and complexity, alongside refined referral processes, can significantly improve clinic efficiency and patient outcomes in the NHS. abstract_id: PUBMED:27453846 The value of an ENT specialist outreach service in a Family Medicine Unit for the urban poor in India. Objectives: To assess the function of an otolaryngology (ENT) specialist outreach service in a Family Medicine (FM) Unit for the urban poor attached to a Tertiary Teaching Hospital in India. 
Materials And Methods: The study investigated the pattern of ENT diseases in patients who came to the FM Unit and the proportion of these patients who were referred to the ENT specialist clinic at the unit. The study also analyzed the ENT problems that were managed by the ENT specialist at the unit and the conditions, which needed referral to the Tertiary Hospital. Data was collected by chart review. Setting: Weekly ENT specialist outreach service in an FM Unit for the urban poor in India attached to a Tertiary Teaching Hospital. Results: Among the outpatients who attended the unit in 12 months, 12.89% had ENT-related problems, of which 23.9% were referred to the visiting ENT specialist, 88.30% of these patients were managed in the FM Unit with basic ENT facilities. Conclusion: This study demonstrated that majority of the patients with ENT-related problems who presented to an FM Unit could be managed by the FM specialists. Of those patients who required the expertise of a specialist in ENT, the majority could be managed in the FM Unit, with basic ENT examination and treatment facilities. Triage and management by the family physician and the visiting ENT surgeon in the FM Unit is a prudent use of resources and will improve the quality of care people receive for their ENT problems. abstract_id: PUBMED:37921245 Outcomes of a first point of contact speech language therapy clinic for patients requiring vocal cord check pre and post thyroid/parathyroid surgery. Introduction: Speech Language Therapy First Point of Contact Clinic (SLT-FPOCC) models can assist assessment of low-risk patient populations referred to ear, nose and throat (ENT) services. To further improve ENT waitlist management and compliance with best-practice care, consideration of other low-risk populations that could be safely managed through this service model is needed. The aims of this paper are to evaluate the clinical and service outcomes of completing vocal cord check (VCC) assessments for patients' pre and post thyroid/parathyroid surgery within an SLT-FPOCC model and examine consumer perceptions. Methods & Procedures: The service followed existing SLT-FPOCC procedures, with ENT triaging referrals, then SLT completing pre- and postoperative VCC assessment (interview, perceptual assessment, flexible nasendoscopy), with assessment data later reviewed by ENT to diagnose laryngeal pathology. Clinical and service outcomes were collected prospectively. Patients completed an anonymous post-service satisfaction survey. Results: Of the first 100 patients referred for preoperative VCCs, SLT assessment identified 42 with dysphonia and 30 reporting dysphagia, while ENT confirmed 9 with significant preoperative anatomical findings. Eighty-three underwent surgery, with 63 (95 nerves at surgical risk) returning for a postoperative VCC. Postoperative VCC identified three temporary neuropraxias (3.2%) and three unilateral vocal fold paresis (3.2%). Patients were highly satisfied with the service. All 163 pre-/postoperative VCCs were completed with no adverse events. Conclusion & Implications: The current data support SLT-FPOCC service expansion to include pre and post thyroid/parathyroid surgery VCC checks, with positive consumer perception. The model supports delivery of best practice management (i.e., pre- and postoperative VCC) for patients receiving surgery for thyroid/parathyroid dysfunction, and associated efficiencies for ENT services. 
What This Paper Adds: What is already known Assessment of laryngeal function via flexible nasoendoscopy is recommended best practice for patients pre and post thyroid/parathyroid surgery, as recurrent laryngeal nerve injury is a low incidence (<10%), yet well-recognised risk of these surgeries. Traditionally, general surgeons refer presurgical patients to ear, nose and throat (ENT) for vocal cord check (VCC) assessment. However, with access to specialist outpatient services under increasing pressure, there is growing support for utilisation of other health professionals, such as speech-language therapists working in first point of contact (FPOCC) models, to assist with the administration of pre- and postsurgical assessments of such low-risk populations. What this study adds This work expands on the emerging body of evidence for speech language therapy (SLT) led FPOCC models within ENT outpatient services, providing clinical and service outcomes to support the safety of a new model designed to administer VCCs for patients pre and post thyroid/parathyroid surgery. Adopting a similar model to a prior published SLT-led FPOCC model, the trained SLT completes the pre- and postsurgical VCC including flexible nasoendoscopy and videostroboscopy, with images and clinical information then presented to ENT for diagnosis and management planning. This study also provides the first data on consumer perceptions of this type of service model. Clinical implications of this study Data on 100 consecutive presurgical patients revealed positive service findings, supporting the safety of this model. Nature and incidence of clinical findings pre and post surgery were consistent with previously published studies using traditional models of care (i.e., ENT completing the flexible nasendoscopy). Consumer perception was positive. This model enables delivery of pre- and postsurgical assessments for patients receiving thyroid/parathyroid surgery, consistent with best practice care, and reduces burden on ENT services. In total 163 ENT appointments were avoided with this model, with positive implications for ENT waitlist management. abstract_id: PUBMED:30285744 Cost-effectiveness analysis of doctor-pharmacist collaborative prescribing for venous thromboembolism in high risk surgical patients. Background: Current evidence to support cost effectiveness of doctor-pharmacist collaborative prescribing is limited. Our aim was to evaluate inpatient prescribing of venous thromboembolism (VTE) prophylaxis by a pharmacist in an elective surgery pre-admission clinic against usual care, to measure any benefits in cost to the healthcare system and quality adjusted life years (QALYs) of patients. Method: A decision tree model was developed to assess cost effectiveness of pharmacist prescribing compared with usual care for VTE prophylaxis in high risk surgical patients. Data from the literature was used to inform decision-tree probabilities, utility, and cost outcomes. In the intervention arm, a pharmacist prescribed patient's regular medications, documented a VTE risk assessment and prescribed VTE prophylaxis. In the usual care arm, resident medical officers were responsible for prescribing regular medications, and for risk assessment and prescribing of VTE prophylaxis. The base scenario assessed the cost effectiveness of a pre-existing pre-admission clinic pharmacy service that takes on a collaborative prescribing role.
The alternative scenario assessed the benefits of introducing a pre-admission clinic pharmacy service where previously there had not been one. Probabilistic sensitivity analysis was conducted to explore uncertainty in the model. Results: In both the base-case scenario and the alternative scenario pharmacist prescribing resulted in an increase in the proportion of patients adequately treated and a decrease in the incidence of VTE resulting in cost savings and improvement in quality of life. The cost savings were $31 (95% CI: -$97, $160) per patient in the base scenario and $12 (95% CI: -$131, $155) per patient in the alternative scenario. In both scenarios the pharmacist-doctor prescribing resulted in an increase in QALYs of 0.02 (95% CI: -0.01, 0.005) per patient. The probability of being cost effective at a willingness to pay of $40,000 was 95% in the base scenario and 94% in the alternative scenario. Conclusion: Delegation of the prescribing of VTE prophylaxis for high risk surgical patients to a pharmacist prescriber in PAC, as part of a designated scope of practice, would result in fewer cases of VTE and associated lower costs to the healthcare system and increased QALYs gained by patients. Trial Registration: Pre admission clinic study registered with ANZCTR-ACTR Number ACTRN12609000426280. abstract_id: PUBMED:9654879 Pre-admission clinic in an orthopaedic department: evaluation over a 6-month period. A pre-admission clinic for patients undergoing elective orthopaedic surgery has been used at the Royal Surrey County Hospital, Guildford, for the past 3 years. This report audits the activities of the clinic over a period of 6 months. Data regarding the patients who were invited to the pre-admission clinic during the study period were analysed. In all, 232 patients were asked to attend the clinic and a total of 221 (95.2%) attended. Of these patients, 10 had their operations cancelled and three had their operations postponed in the clinic due to various medical and social reasons. Another 28 operations were cancelled or postponed at a later stage. All of the postponed procedures were eventually performed within 3 months. Of the 232 patients, 180 (77.5%) underwent their operation on the arranged day without any complications. The pre-admission clinic in our orthopaedic department helps us to prevent a significant number of operation cancellations on the day of admission. It also facilitates an extensive pre-operative assessment of the patients and reduces the ward-based workload of the junior medical staff. More extensive use of the pre-admission clinic is recommended. abstract_id: PUBMED:11564299 A paediatric otolaryngology pre-admission assessment clinic audited. Pre-admission clinics are becoming increasingly popular for surgical specialties with a quick turnover as they aid waiting list management and reduce non-attender rates for surgery. As paediatric patients have a high rate of non-attendance, we performed a retrospective audit of otolaryngology paediatric pre-admission assessment clinic notes for June to October 1998 (n = 363). The attendance rate for the clinic was 97 per cent. Of the children who attended the clinic, 90 per cent had their operation as planned, complications occurred in 2.9 per cent. The operation date was delayed in 20 patients, in 11 patients no cause for the delay was given in the case notes. As a result of this audit, the Senior House Officer sees the patient on the day of admission rather than in the pre-admission clinic, which is staffed by nurses.
abstract_id: PUBMED:27566184 Who accompanies patients to the chronic pain clinic? Background: Patients may be accompanied to the pain clinic consultation and these accompanying persons are relevant in the communication process. Aims: We sought to characterize if patients were accompanied and by whom to the pain clinic. We also wished to determine the accompanying persons influence on the doctor-patient interaction. This has not been studied previously in this clinical setting. Methods: Local ethics committee approval followed by written informed consent was obtained. Patients attending the pain clinic for the first time and review patients were included (n = 219). Results: Twenty-one percent of patients (n = 46) were accompanied. Adult accompanied by spouse 19 (41.3 %) and adult child accompanied by parent 18 (39.1 %) were the most common dyads. The accompanying person's role was most frequently described by doctors as an advocate for the patient 30/46 (65.2 %) [for an adult accompanied by spouse (63.1 %)]. The influence of the main accompanying person on the patient doctor encounter was described as positive [adult accompanied by spouse (78.9 %)] [(adult accompanied by parent (94.4 %)]. Conclusions: Patients are accompanied to the pain clinic with a typically positive influence on doctor patient relationship and communication. Awareness of these issues is important in good communication in the pain clinic. abstract_id: PUBMED:18429870 Nurse-led adult ENT pre-operative assessment. Pre-operative assessment in some ENT departments is now nurse-led rather than doctor-led. Appropriately trained nursing staff can perform medical aspects of pre-operative assessment to the same or a better standard than a doctor. Almost all patients are satisfied with a nurse-led pre-operative assessment, and in some cases they prefer it. abstract_id: PUBMED:17470305 Primary assessment of the vertiginous patient at a pre-ENT balance clinic. Introduction: Due to problems with long waiting times for assessment of vertiginous patients (more than 24 weeks), we changed practice and instituted a pre-ENT balance clinic assessment; we then audited the results. In particular, we looked at the subgroup with benign positional paroxysmal vertigo. Methods: One hundred and fifteen patients were seen at the pre-ENT balance clinic from October 2003 to September 2004. Those diagnosed with benign positional paroxysmal vertigo received particle repositioning therapy at the same clinic and did not subsequently need ENT assessment. Results: By the end of the audit period, waiting times were reduced to three weeks, and more than one-quarter of vertiginous patients (i.e. those diagnosed with benign positional paroxysmal vertigo) did not need to be reviewed at an ENT clinic. Conclusion: We believe this to be the first study to present prospective data showing that patients with benign positional paroxysmal vertigo may be safely diagnosed and effectively managed at a pre-ENT balance clinic. Answer: Based on the provided abstracts, it appears that a doctor may not always be necessary in the adult ENT pre-admission clinic. A study assessing the role of doctors in an adult ENT pre-admission clinic found that most changes made by doctors to nurses' clerking or planned investigations could be eliminated by designing a pre-admission clinic protocol that could easily be used by nurses. The study recommended that ENT departments consider implementing nurse-led pre-admission clinics (PUBMED:15550187). 
Additionally, a nurse-led pre-operative assessment in some ENT departments has been found to be satisfactory to patients, with some patients even preferring it to doctor-led assessments (PUBMED:18429870). Furthermore, the concept of task shifting to appropriately trained non-physician healthcare professionals is supported by other studies. For instance, a study on a speech language therapy first point of contact clinic (SLT-FPOCC) for patients requiring vocal cord checks pre and post thyroid/parathyroid surgery showed that the service could be safely managed by speech language therapists, with positive consumer perception and avoidance of ENT appointments (PUBMED:37921245). Similarly, a study on the value of an ENT specialist outreach service in a Family Medicine Unit in India demonstrated that the majority of patients with ENT-related problems could be managed by family medicine specialists, with only a subset requiring the expertise of a visiting ENT specialist (PUBMED:27453846). In the context of ENT emergency clinic referrals, it was found that a significant proportion of cases were deemed unsuitable or inappropriate for E-Clinic assessment, suggesting that a refined referral process and appropriate allocation of cases could improve clinic efficiency and patient outcomes, potentially reducing the need for a doctor in every case (PUBMED:38370987). Overall, these studies suggest that with appropriate protocols, training, and case selection, nurse-led or other healthcare professional-led services can effectively manage certain aspects of ENT care, potentially reducing the need for a doctor to be present in every situation within the adult ENT pre-admission clinic.
Instruction: Can the EuroSCORE predict the early and mid-term mortality after off-pump coronary artery bypass grafting? Abstracts: abstract_id: PUBMED:17532408 Can the EuroSCORE predict the early and mid-term mortality after off-pump coronary artery bypass grafting? Background: This study evaluated the role of the European System for Cardiac Operative Risk Evaluation (EuroSCORE) in the prediction of early-term and mid-term mortality in patients undergoing isolated off-pump coronary artery bypass grafting (OPCAB). Methods: From January 2002 to August 2006, 757 consecutive patients underwent isolated OPCAB. The patients' operative risks were calculated according to the standard and logistic EuroSCORE models. The cohort was classified into four subgroups according to both EuroSCORE scales. To evaluate the predictability, the expected mortality was compared with the observed mortality. The receiver operating characteristic curves were plotted and calibration was assessed. Mean follow-up was 32.8 ± 13.9 months. Results: Ten (1.3%) in-hospital deaths occurred. The predicted total numbers of deaths by the EuroSCORE models were 34.2 (4.5%) for the standard EuroSCORE and 37.8 (5.0%) for the logistic EuroSCORE. The expected mortality rates were significantly higher than the observed mortality rates in all subgroups, except one. The area under curve (AUC) in in-hospital mortality was 0.72 for the standard EuroSCORE and 0.71 for the logistic EuroSCORE, but the tests of calibration for both EuroSCORE models were significant. Mid-term mortality was 3.6%. The AUC curve in mid-term mortality was 0.71 for the standard or logistic EuroSCORE. The calibration in both EuroSCORE models for mid-term mortality was nonsignificant, indicating good calibration. Conclusions: Both EuroSCORE models overestimated the in-hospital mortality; however, both models showed good predictability for mid-term mortality. The EuroSCORE could be helpful in planning resource allocation and tailoring follow-up for patients undergoing isolated OPCAB. abstract_id: PUBMED:35548406 A Meta-Analysis of Early, Mid-term and Long-Term Mortality of On-Pump vs. Off-Pump in Redo Coronary Artery Bypass Surgery. We aimed to compare the early, mid-term, and long-term mortality between on-pump vs. off-pump redo coronary artery bypass grafting (CABG). We conducted a systematic search for studies comparing clinical outcomes of patients who underwent on-pump vs. off-pump redo CABG. We pooled the relevant studies quantitatively to compare the early (perioperative period, whether in hospital or within 30 days after discharge), mid-term (≥1 year and <5 years), and long-term (≥5 years) mortality of on-pump vs. off-pump redo CABG. A random-effect model was applied when there was high heterogeneity (I² > 50%) between studies. Otherwise, a fixed-effect model was utilized. After systematic literature searching, 22 studies incorporating 5,197 individuals (3,215 in the on-pump group and 1,982 in the off-pump group) were identified. A pooled analysis demonstrated that compared with off-pump redo CABG, on-pump redo CABG was associated with higher early mortality (OR 2.11, 95%CI: 1.54-2.89, P < 0.00001). However, no significant difference was noted in mid-term mortality (OR 1.12, 95%CI: 0.57-2.22, P = 0.74) and long-term mortality (OR 1.12, 95%CI: 0.41-3.02, P = 0.83) between the two groups. In addition, the complete revascularization rate was higher in the on-pump group than the off-pump group (OR 2.61, 95%CI: 1.22-5.60, P = 0.01).
In conclusion, the off-pump technique is a safe and efficient alternative to the on-pump technique, with early survival advantage and similar long-term mortality to the on-pump technique in the setting of redo CABG, especially in high-risk patients. Systematic Review Registration: https://www.crd.york.ac.uk/PROSPERO/, identifier: CRD42021244721. abstract_id: PUBMED:17876379 Early mortality from off-pump and on-pump coronary bypass surgery in Canada: a comparison of the STS and the EuroSCORE risk prediction algorithms. Objective: Early mortality from off-pump and on-pump coronary artery bypass graft (CABG) surgery was assessed and compared with two widely used risk algorithms for CABG: The Society of Thoracic Surgeons (STS) and the European System for Cardiac Operative Risk Evaluation (EuroSCORE). Method: From March 12, 2001, to December 31, 2002, 1657 consecutive patients were treated with off-pump CABG and 1693 consecutive patients were treated with on-pump CABG. The predicted risk of mortality scores for the STS and EuroSCORE models were calculated. The predictive accuracy for early mortality was assessed by comparing the observed and expected mortalities for equal-sized quantiles of risk using the Hosmer-Lemeshow goodness-of-fit test. The discriminatory power of the models was evaluated by calculating the area under the receiver operating characteristic (ROC) curves. Results: The observed postoperative mortality was 1.8% (95% CI 1.3% to 2.4%) for off-pump CABG and 1.5% (95% CI 1.1% to 2.1%) for on-pump CABG. For both on-pump and off-pump CABG surgery, the Hosmer-Lemeshow goodness-of-fit test indicated good accuracy. The area under the ROC curve was 0.81 (95% CI 0.73 to 0.90) for the STS and 0.79 (95% CI 0.71 to 0.88) for EuroSCORE in off-pump CABG (P=0.567). The area under the ROC curve was 0.82 (95% CI 0.73 to 0.91) for STS and 0.81 (95% CI 0.71 to 0.90) for EuroSCORE in on-pump CABG (P=0.616). The STS-predicted risk of stroke, prolonged ventilation and renal failure were similar to the observed data, with relatively good discriminatory powers for both off-pump and on-pump CABG. Conclusion: Both the STS and EuroSCORE risk algorithms are good predictors of early mortality from off-pump or on-pump CABG surgery. However, the generalizability of these results in the Canadian context would require a broader sampling of Canadian centres, including ones that provide both on-pump and off-pump CABG. abstract_id: PUBMED:26275518 Risk Stratification in Off-Pump Coronary Artery Bypass (OPCAB) Surgery—Role of EuroSCORE II. Objectives: To evaluate the EuroSCORE II for risk stratification in patients undergoing off-pump coronary artery bypass (OPCAB) surgery. Design: A retrospective observational study. Setting: Two tertiary care hospitals. Participants: Participants were 1,211 patients undergoing OPCAB surgery. Interventions: No interventions were implemented. Measurements And Main Results: The EuroSCORE II estimated the operative risk for each patient. The calibration of the scoring system was assessed using the Hosmer Lemeshow test, and the discriminative capacity was estimated with area under receiver operating characteristic curves. The incidence, patient characteristics, causes of intraoperative conversion to on-pump coronary artery bypass (ONCAB), and outcome were studied. The all-cause in-hospital mortality was 2.39%. Predicted mortality with the EuroSCORE II was 2.03±1.63. Using the Hosmer Lemeshow test, a C statistic of 8.066 (p = 0.472) was obtained, indicating satisfactory model fit. 
The calculated area under the receiver operating characteristic curve was 0.706 (p = 0.0002), indicating good discriminatory power. Emergency intraoperative conversion to ONCAB occurred in 6.53% of patients. The mortality in the ONCAB group was significantly higher compared with patients who underwent successful OPCAB surgery (15.18% v 1.5%, p < 0.0001). On multiple regression analysis with conversion to ONCAB as the endpoint, associated factors were patients with a higher EuroSCORE II (odds ratio = 1.13, confidence interval = 1.03-1.27) and more-than-trivial mitral regurgitation (odds ratio = 1.84, confidence interval = 1.07-3.06). Net reclassification improvement of 0.714 (p < 0.0001) was obtained when on-pump conversion was added to the EuroSCORE II. Conclusions: The EuroSCORE II has satisfactory calibration and discrimination power to predict mortality after OPCAB surgery. Intraoperative conversion to ONCAB is a major complication of OPCAB surgery. A higher EuroSCORE II also predicts higher probability of conversion to ONCAB. abstract_id: PUBMED:37346442 Early and midterm outcomes after off pump coronary artery bypass surgery. Purpose: There has been debate whether off pump coronary artery bypass surgery (OPCAB) has results comparable to conventional on pump bypass surgery. This has led to the low uptake of OPCAB in the West. In India, OPCAB is the default mode of coronary revascularization. However, there is scarce data on mid-term outcomes of OPCAB in our patients. This study aims to evaluate both short and mid-term mortality and analyze factors associated with mortality. Methods: This is a single center study of all consecutive patients undergoing isolated OPCAB from October 2014 to December 2019. Patient data was collected from hospital records and follow-up was from the hospital electronic medical records and telephone interviews. Mortality and factors contributing to survival were analyzed. Results: Operative mortality was 2.3%. Mid-term mortality was 5.5%. Preoperative renal dysfunction, post-operative renal failure, use of the intra-aortic balloon pump (IABP), re-exploration for bleeding, postoperative stroke, ventilation > 24 h, and postoperative atrial fibrillation were associated with operative mortality. Factors associated with mid-term mortality were age > 62 years, postoperative renal failure, IABP usage, ventilation time > 24 h, and postoperative atrial fibrillation. The mean survival time was 2343.55 ± 15.27 days and 6-year survival was 88.7%. Conclusion: OPCAB can safely be performed with satisfactory short and mid-term outcomes. Further corroborative studies from different regions of the country or a multi-center study will help to establish the suitability of the technique in Indian patients. abstract_id: PUBMED:23990048 Performance of EuroSCORE II compared to EuroSCORE I in predicting operative and mid-term mortality of patients from a single center after combined coronary artery bypass grafting and aortic valve replacement. Objective: The performance comparison of the recently introduced European System for Cardiac Operative Risk Evaluation II in predicting operative as well as mid-term mortality, with its previous version in patients after combined aortic valve replacement and coronary artery bypass grafting surgery. Methods: This retrospective analysis included 216 patients operated on at one institution from 01/1999 to 12/2005.
Accuracy and calibration of EuroSCORE I and II were assessed by plotting the areas under the receiver operator curves and comparing observed and predicted mortalities. Results: EuroSCORE II showed, regarding early mortality, a slightly higher discriminatory accuracy with an area under the receiver operator curve of 0.77, while additive and logistic EuroSCORE I areas were 0.749, 0.75, respectively. The highest specificity and sensitivity level was approached for EuroSCORE II at a predicted mortality of 4.4 %. Receiver operator curves concerning mid-term mortality revealed areas for additive, logistic EuroSCORE and EuroSCORE II of 0.745, 0.739 and 0.718 with the highest accuracy levels at predicted mortalities of 6.5, 6.48 and 3.88 %, respectively. Mean predicted mortalities by logistic EuroSCORE and EuroSCORE II were 8.35 and 3.99 %, respectively, while overall observed operative mortality was 6.3 %. In "high-risk" patients (EuroSCORE > 13), EuroSCORE II underestimated early and mid-term outcomes. Conclusions: Regarding operative mortality, EuroSCORE II showed in this study a slightly higher discriminatory accuracy than EuroSCORE I. There were no significant differences in the calibration of the two model versions in "low-" and "moderate-risk" patients regarding early as well as mid-term mortality. Analyses in larger patient populations will contribute to further model improvement. abstract_id: PUBMED:20103507 The role of EuroSCORE in patients undergoing off-pump coronary artery bypass. Introduction: European System for Cardiac Operative Risk Evaluation (EuroSCORE) has been used to predict the postoperative mortality rate for patients undergoing open-heart surgery. The contributions of EuroSCORE in off-pump coronary artery bypass grafting (CABG) has not yet clearly elucidated. Methods: Consecutive patients of isolated off-pump CABG performed from 2000 when we start performing 'routine' off-pump procedures were stratified using the additive EuroSCORE. Incidence of postoperative mortality, morbidity, and recovery were assessed, and compared to an historical cohort of on-pump procedures performed between 1991 until 1998 when CABG had been routinely performed under on-pump. Results: There were 1318 patients in the off-pump and 1162 patients in the on-pump group. EuroSCORE of the off-pump group was significantly higher than that of the on-pump group. In both the on- and off-pump groups, mortality, total incidence of major complications, heart failure, and renal failure, and three parameters of recovery time were well correlated with EuroSCORE; however, the discriminatory power of the EuroSCORE model was always better in the on-pump group than in the off-pump group. Stroke was correlated with EuroSCORE only in the on-pump group. Pneumonia, mediastinitis, postoperative myocardial infarction, or mediastinitis was not correlated with EuroSCORE in either group. In the off-pump group, postoperative major complication was reduced and postoperative recovery was shortened significantly, compared to those in the on-pump group. Conclusion: In off-pump CABG, EuroSCORE can, but not as good as in on-pump CABG, predict mortality, certain major postoperative complications, and postoperative recovery. This suggests off-pump technique appears to modify the risk stratification of the patients undergoing CABG.
abstract_id: PUBMED:34824547 Predictive Ability of European Heart Surgery Risk Assessment System II (EuroSCORE II) and the Society of Thoracic Surgeons (STS) Score for in-Hospital and Medium-Term Mortality of Patients Undergoing Coronary Artery Bypass Grafting. Objective: To evaluate the powers of European Heart Surgery Risk Assessment System II (EuroSCORE II) and the Society of Thoracic Surgeons (STS) score in predicting in-hospital and medium-term mortality of patients undergoing coronary artery bypass grafting (CABG). Methods: Totally 1628 Chinese patients were included between January 2000 and January 2018. Their perioperative clinical data were collected and the patients were closely followed up. According to the length of follow-up time, the total cohort was divided into 1-year, 2-year, 3-year, 4-year and 5-year groups. The in-hospital and medium-term risk prediction of EuroSCORE II and STS score were comparatively assessed by calibration, discrimination, decision curve analysis (DCA), net reclassification index (NRI), integrated discrimination improvement (IDI) and Bland-Altman analysis. Results: About 36 (2.21%) patients died during hospitalization. Both EuroSCORE II and STS score performed extremely well in predicting in-hospital mortality (area under curve = 0.900 and 0.879, respectively). However, calibration and discrimination analyses showed gradual decrease when these two risk evaluation systems were used to predict mortality during the follow-up period. At the same time, the predictive ability of EuroSCORE II was better than STS score. DCA curves showed that the performances of the two evaluation systems were roughly equal between the threshold probability of 0% to 20%. The percentage of correct reclassification of EuroSCORE II was 21.64% higher than that of STS score in predicting 2-year postoperative mortality. The IDI index showed that the predictive capabilities of these two systems were roughly equivalent. Bland-Altman analysis showed no significant difference between the values of the two systems. Conclusion: EuroSCORE II and STS score have excellent predictive powers in predicting in-hospital mortality of patients undergoing CABG. In particular, EuroSCORE II is superior in calibration and discrimination. The prediction efficiency of the two risk evaluation systems is still acceptable for two-year postoperative mortality, but decreases year by year. abstract_id: PUBMED:35873132 Performance of the EuroSCORE II Model in Predicting Short-Term Mortality of General Cardiac Surgery: A Single-Center Study in Taiwan. Background: The latest European System for Cardiac Operative Risk Evaluation (EuroSCORE) II is a well-accepted risk evaluation system for mortality in cardiac surgery in Europe. Objectives: To determine the performance of this new model in Taiwanese patients. Methods: Between January 2012 and December 2014, 657 patients underwent cardiac surgery at our institution. The EuroSCORE II scores of all patients were determined preoperatively. The short-term surgical outcomes of 30-day and in-hospital mortality were evaluated to assess the performance of the EuroSCORE II. Results: Of the 657 patients [192 women (29.22%); age 63.5 ± 12.68 years], the 30-day mortality rate was 5.48%, and the in-hospital mortality rate was 9.28%. The discrimination power of this new model was good in all populations, regardless of 30-day mortality or in-hospital mortality. 
Good accuracy was also noted in different procedures related to coronary artery bypass grafting, and good calibration was noted for cardiac procedures (p value > 0.05). When predicting surgical death within 30 days, the EuroSCORE II overestimated the risk (observed to expected: 0.79), but in-hospital mortality was underestimated (observed to expected: 1.33). The predictive ability [area under the curve (AUC) of the receiver operating characteristic (ROC) curve] and calibration of the EuroSCORE II for 30-day mortality (0.792) and in-hospital mortality (0.825) suggested that in-hospital mortality is a better endpoint for the EuroSCORE II. Conclusions: The new EuroSCORE II model performed well in predicting short-term outcomes among patients undergoing general cardiac surgeries. For short-term outcomes, in-hospital mortality was better than 30-day mortality as an indicator of surgical results, suggesting that it may be a better endpoint for the EuroSCORE II. abstract_id: PUBMED:34300198 A Risk Score for Predicting Long-Term Mortality Following Off-Pump Coronary Artery Bypass Grafting. Background: Off-pump coronary artery bypass grafting (OPCAB) comprises 15-30% of all bypass grafting surgeries. The currently available perioperative scores such as Euroscore and STS score do not specifically predict long-term mortality after off-pump procedures. The neutrophil-to-lymphocyte ratio (NLR) is one of the new, easily accessible markers of inflammation with proven predictive value in cardiovascular diseases. We aimed to develop the first risk score for long-term mortality after OPCAB and to determine if the perioperative value of NLR predicts long-term mortality in OPCAB patients. Methods: In total, 440 consecutive patients with multivessel stable coronary artery disease undergoing OPCAB were recruited. Differential leukocyte counts were obtained by a routine hematology analyzer. Data regarding mortality during a median follow-up time of 5.3 years were obtained from the Polish National Health Service database. An independent population of 242 patients served as a validation cohort. Results: All-cause mortality was influenced by different clinical risk factors. In multivariate regression analysis, chronic obstructive pulmonary disease, stroke history, post-operative NLR and LVEF were independent predictors of mortality. Combining all independent predictors predicted long-term all-cause mortality with 68.5% sensitivity and 71.5% specificity (AUC = 0.704, p < 0.001). After weighing these variables according to their estimates in a multivariate regression model, we developed a score to predict mortality in patients undergoing OPCAB (PREDICT-OPCAB Score, ranging from 0 to 10). Patients with a high score were at higher risk of mortality within the median 5.3 years of follow-up (score 0-3: 8.3%; 4-6: 27.0%; 7-10: 40.0%; p < 0.001 for score 0-3 vs. 4-6 and 7-10). This association was confirmed in the validation cohort. Conclusions: We developed and validated the first simplified risk score to predict mortality following OPCAB based on easily accessible clinical factors. This risk score can be used when obtaining a patient's informed consent and as an aid in determining treatment. Answer: Yes, the EuroSCORE can predict early and mid-term mortality after off-pump coronary artery bypass grafting (OPCAB).
Studies have shown that both the standard and logistic EuroSCORE models overestimated in-hospital mortality but demonstrated good predictability for mid-term mortality in patients undergoing isolated OPCAB (PUBMED:17532408). The EuroSCORE II has also been found to have satisfactory calibration and discrimination power to predict mortality after OPCAB surgery (PUBMED:26275518). Furthermore, the EuroSCORE II was shown to perform well in predicting short-term outcomes among patients undergoing general cardiac surgeries, including OPCAB (PUBMED:35873132). However, it is important to note that while the EuroSCORE can be helpful in planning resource allocation and tailoring follow-up for patients, it may not always accurately predict mortality in all subgroups. For example, in high-risk patients undergoing combined aortic valve replacement and coronary artery bypass grafting surgery, EuroSCORE II underestimated early and mid-term outcomes (PUBMED:23990048). Additionally, the EuroSCORE's discriminatory power was found to be better in on-pump CABG than in off-pump CABG, suggesting that the off-pump technique may modify the risk stratification of patients undergoing CABG (PUBMED:20103507). In summary, the EuroSCORE and EuroSCORE II are useful tools for predicting early and mid-term mortality in patients undergoing OPCAB, but their performance may vary depending on patient subgroups and specific surgical contexts.
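To make the discrimination and calibration terminology used in this answer concrete, the sketch below shows how a probability-style risk score such as the logistic EuroSCORE is typically checked against observed outcomes: the area under the ROC curve for discrimination, and an observed-to-expected (O/E) mortality ratio with a simple risk-band table for calibration. This is a minimal illustration only, not code or data from any of the cited studies; the patient values are synthetic placeholders, and NumPy and scikit-learn are assumed to be available.

```python
# Minimal sketch: discrimination (ROC AUC) and calibration (observed vs. expected)
# for a probability-style risk score such as the logistic EuroSCORE.
# All numbers below are synthetic placeholders, not data from the cited studies.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Predicted probability of in-hospital death for each patient (logistic EuroSCORE style).
predicted_risk = rng.beta(2, 30, size=500)            # skewed toward low predicted risk
# Observed outcome: 1 = died in hospital, 0 = survived. The 0.7 factor deliberately
# makes the score overestimate risk, as reported for the OPCAB cohorts above.
observed_death = rng.binomial(1, predicted_risk * 0.7)

# Discrimination: how well the score ranks deaths above survivors.
auc = roc_auc_score(observed_death, predicted_risk)

# Calibration: observed-to-expected ratio; values below 1 indicate overestimated risk.
oe_ratio = observed_death.sum() / predicted_risk.sum()
print(f"AUC = {auc:.2f}, O/E ratio = {oe_ratio:.2f}")

# Simple risk-band comparison, in the spirit of grouping patients into score strata.
for lo, hi in [(0.0, 0.02), (0.02, 0.05), (0.05, 0.10), (0.10, 1.0)]:
    band = (predicted_risk >= lo) & (predicted_risk < hi)
    if band.any():
        print(f"risk {lo:.0%}-{hi:.0%}: n={band.sum():3d}, "
              f"expected={predicted_risk[band].mean():.1%}, "
              f"observed={observed_death[band].mean():.1%}")
```

With these synthetic inputs the O/E ratio falls below 1, mirroring the overestimation of in-hospital mortality described above; a Hosmer-Lemeshow test or a calibration plot could be layered on the same grouped data.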
Instruction: Endoscopic Third Ventriculostomy Success Score (ETVSS) predicting success in a series of 50 pediatric patients. Are the outcomes of our patients predictable? Abstracts: abstract_id: PUBMED:22706984 Endoscopic Third Ventriculostomy Success Score (ETVSS) predicting success in a series of 50 pediatric patients. Are the outcomes of our patients predictable? Purpose: In our series of endoscopic third ventriculostomy (ETV), we sought to establish the relationship between the preoperative prediction using the Endoscopic Third Ventriculostomy Success Score (ETVSS) and the postsurgical success rate. Materials And Methods: This descriptive analytical study comprised 50 pediatric patients who underwent 58 ETV procedures between 2003 and 2011. Data regarding clinical, surgical, and radiological findings were obtained from a continuously updated database. For each patient, we calculated the ETVSS, based on the patient's age, hydrocephalus etiology, and presence of a previous shunt. We considered success to be an established or improved clinical state and at least one of the following radiological criteria: (a) reduction in ventricular size or stable ventricles with disappearance of periventricular edema and increased subarachnoid space over cerebral convexities, (b) flow artifact in sagittal T2FSE MR, or (c) bidirectional flow signal in 2D-CPC MR. Statistical significance was set at p < 0.05. Six months was the minimum postoperative follow-up required. Results: The ETV was successful in 29 patients (58 %). Patients aged over 1 year achieved the best results (p < 0.019). For those who underwent successful ETV, the mean ETVSS was 71.03 (95 % CI, 66.23-75.84). In those for whom the ETV was not successful, the mean ETVSS was 60 (95 % CI, 53.09-66.90); (p < 0.007). Conclusions: The success of ETV in our series could have been predicted by ETVSS. Predictability could help establish stricter surgical selection criteria, thereby obtaining higher success rates, as well as preparing the patients and their families for expected outcomes. abstract_id: PUBMED:26207604 Predicting success of endoscopic third ventriculostomy: validation of the ETV Success Score in a mixed population of adult and pediatric patients. Object: Endoscopic third ventriculostomy (ETV) has become the first line of treatment in obstructive hydrocephalus. The Toronto group (Kulkarni et al.) developed the ETV Success Score (ETVSS) to predict the clinical response following ETV based on age, previous shunt, and cause of hydrocephalus in a pediatric population. However, the use of the ETVSS has not been validated for a population comprising adults. The objective of this study was to validate the ETVSS in a "closed-skull" population, including patients 2 years of age and older. Methods: In this retrospective observational study, medical charts of all consecutive cases of ETV performed in two university hospitals were reviewed. The primary outcome, the success of ETV, was defined as the absence of reoperation or death attributable to hydrocephalus at 6 months. The ETVSS was calculated for all patients. Discriminative properties along with calibration of the ETVSS were established for the study population. The secondary outcome is the reoperation-free survival. Results: This study included 168 primary ETVs. The mean age was 40 years (range 3-85 years). ETV was successful at 6 months in 126 patients (75%) compared with a mean ETVSS of 82.4%.
The area under the receiver operating characteristic curve was 0.61, revealing insufficient discrimination from the ETVSS in this population. In contrast, calibration of the ETVSS was excellent (calibration slope = 1.01), although the expected low numbers were obtained for scores < 70. Decision curve analyses demonstrate that ETVSS is marginally beneficial in clinical decision-making, a reduction of 4 and 2 avoidable ETVs per 100 cases if the threshold used on the ETVSS is set at 70 and 60, respectively. However, the use of the ETVSS showed inferior net benefit when compared with the strategy of not recommending ETV at all as a surgical option for thresholds set at 80 and 90. In this cohort, neither age nor previous shunt were significantly associated with unsuccessful ETV. However, better outcomes were achieved in patients with aqueductal stenosis, tectal compressions, and other tumor-associated hydrocephalus than in cases secondary to myelomeningocele, infection, or hemorrhage (p = 0.03). Conclusions: The ETVSS did not show adequate discrimination but demonstrated excellent calibration in this population of patients 2 years and older. According to decision-curve analyses, the ETVSS is marginally useful in clinical scenarios in which 60% or 70% success rates are the thresholds for preferring ETV to CSF shunt. Previous history of CSF shunt and age were not associated with worse outcomes, whereas posthemorrhagic and postinfectious causes of the hydrocephalus were significantly associated with reduced success rates following ETV. abstract_id: PUBMED:34560294 Transependymal Edema as a Predictor of Endoscopic Third Ventriculostomy Success in Pediatric Hydrocephalus. Background: The Endoscopic Third Ventriculostomy Success Score (ETVSS) is based on the clinical features of hydrocephalus except for radiological findings. A previous study suggested that transependymal edema (TEE) as a radiological finding may be a reliable predictor of endoscopic third ventriculostomy (ETV) success in patients of all ages. We aimed to investigate whether TEE on preoperative magnetic resonance imaging can predict ETV success in pediatric patients. Methods: Medical and radiological records of all pediatric patients with an initial ETV in our hospital between 2013 and 2019 were retrospectively reviewed. Results: This study included 32 patients with hydrocephalus. The median age at surgery was 10.0 years (interquartile range: 5.6-12.9 years). There were 20 patients in the high ETVSS (90-80) group and 12 patients in the moderate ETVSS (70-50) group. The median follow-up period was 29.0 months (interquartile range: 12.9-46.2 months). The ETV success rate at the final follow-up was 81%. Preoperative brain magnetic resonance imaging revealed TEE in 20 patients and third ventricle floor ballooning in 25 patients, of whom 19 (95%) and 22 (88%), respectively, achieved successful ETV. Patients with TEE had a significantly better outcome than patients without TEE (95% vs. 58%, P = 0.018). Multivariate analysis demonstrated that the presence of TEE (odds ratio 13.6, 95% confidence interval 1.3-137.5, P = 0.027) is a significant predictor of ETV success. Conclusions: In our cohort with a high or moderate ETVSS, the ETV success rate in patients with TEE was significantly higher than in patients without TEE, suggesting that TEE may be a useful predictor of ETV success in pediatric hydrocephalus.
abstract_id: PUBMED:31565904 Predicting endoscopic third ventriculostomy success in adult hydrocephalus: preliminary assessment of a modified ETV success score for adults (ETVSS-A) in a series of 47 patients. Background: Endoscopic third ventriculostomy is an established treatment for non-communicating hydrocephalus. In carefully selected patients, it can be adopted for the management of communicating variant; however controversy exists in regards to the definition of the appropriate candidates. Predictive score of Endoscopic Third Ventriculostomy Success (ETVSS) has been reported for pediatric and mixed populations only. Our purpose was to define an ETV success score for adult population (ETVSS-A), measuring the strength of correlation between preoperative score retrospectively evaluated and the success rates achieved in a class of adult patients. Methods: A retrospective analysis of 47 cases which received ETV procedure at our Institution between 2015 and 2018 was run. Demographic data, clinical history, preoperative and postoperative signs were reviewed and ETVSS-A was calculated. Thereafter ETVSS-A results were compared with the actual success rates. Results: Twenty-nine patients (61.7%) presented unchanged or improved clinical status with a mean ETVSS-A of 54.5%; 18 patients (38.3%) worsened with mean ETVSS-A of 37.7%. We found that age, type of hydrocephalus and symptoms of admission are each apart important factors in predicting ETV success: older patients and those with non-obstructive hydrocephalus had the lowest predicted ETV success. In patients in whom ETV was actually successful, the preoperative ETVSS-A was significantly higher as compared to those patients in whom we observed a poor surgical outcome. Conclusions: From the results of this series, though small and retrospectively analyzed, it seems that ETVSS-A can be considered as a useful instrument to help neurosurgeon in predicting the ETV success and though define a more accurate surgical strategy in cases of hydrocephalus. Wider series and prospective studies are attended to validate these preliminary results. abstract_id: PUBMED:30497212 The long-term outcomes of endoscopic third ventriculostomy in pediatric hydrocephalus, with an emphasis on future intellectual development and shunt dependency. OBJECTIVE The goal of this study was to clarify the long-term outcome of endoscopic third ventriculostomy (ETV) in pediatric hydrocephalus in light of the ETV Success Score (ETVSS), shunt dependency, and intellectual development. METHODS The authors retrospectively analyzed pediatric patients with hydrocephalus who underwent ETV between 2002 and 2012 and who were followed for longer than 5 years as a single-center cohort. The data of the patients' pre- and postoperative status were collected. The relationships between ETVSS and the full-scale IQ as well as shunt dependency were analyzed. The usefulness of ETVSS for repeat ETV and the change of radiological parameters of ventricle size before and after ETV were also analyzed. The success of ETV was defined as no requirement for further CSF diversion procedures. RESULTS Fifty ETVs were performed in 40 patients. The average ETVSS was 61 and the success rate at 6 months was 64%. The mean follow-up was 9.9 years (5.2-15.3 years), and the long-term success rate of ETV was 50%. The Kaplan-Meier survival curve continued to show a statistically significant difference among patients with a low, moderate, and high ETVSS, even after 6 months (p = 0.002).
After 15 months from the initial ETV, no patients required additional CSF diversion surgery. There was no statistical significance between ETVSS and the long-term full-scale IQ or shunt dependency (p = 0.34 and 0.12, respectively). The radiological improvement in ventricle size was not associated with better future educational outcome. CONCLUSIONS The ETVSS was correlated with the long-term success rate. After 15 months from the initial ETV, no patients required an additional CSF diversion procedure. The ETVSS was not considered to be correlated with long-term intellectual status. abstract_id: PUBMED:28708018 Endoscopic third ventriculostomy and repeat endoscopic third ventriculostomy in pediatric patients: the Dutch experience. OBJECTIVE After endoscopic third ventriculostomy (ETV), some patients develop recurrent symptoms of hydrocephalus. The optimal treatment for these patients is not clear: repeat ETV (re-ETV) or CSF shunting. The goals of the study were to assess the effectiveness of re-ETV relative to initial ETV in pediatric patients and validate the ETV success score (ETVSS) for re-ETV. METHODS Retrospective data of 624 ETV and 93 re-ETV procedures were collected from 6 neurosurgical centers in the Netherlands (1998-2015). Multivariable Cox proportional hazards modeling was used to provide an adjusted estimate of the hazard ratio for re-ETV failure relative to ETV failure. The correlation coefficient between ETVSS and the chance of re-ETV success was calculated using Kendall's tau coefficient. Model discrimination was quantified using the c-statistic. The effects of intraoperative findings and management on re-ETV success were also analyzed. RESULTS The hazard ratio for re-ETV failure relative to ETV failure was 1.23 (95% CI 0.90-1.69; p = 0.20). At 6 months, the success rates for both ETV and re-ETV were 68%. ETVSS was significantly related to the chances of re-ETV success (τ = 0.37; 95% bias corrected and accelerated CI 0.21-0.52; p < 0.001). The c-statistic was 0.74 (95% CI 0.64-0.85). The presence of prepontine arachnoid membranes and use of an external ventricular drain (EVD) were negatively associated with treatment success, with ORs of 4.0 (95% CI 1.5-10.5) and 9.7 (95% CI 3.4-27.8), respectively. CONCLUSIONS Re-ETV seems to be as safe and effective as initial ETV. ETVSS adequately predicts the chance of successful re-ETV. The presence of prepontine arachnoid membranes and the use of EVD negatively influence the chance of success. abstract_id: PUBMED:31691874 Prediction of endoscopic third ventriculostomy (ETV) success with preoperative third ventricle floor bowing (TVFB): a supplement to ETV success score. Preoperative judgement of which children is likely to benefit from endoscopic third ventriculostomy (ETV) is still the most difficult challenge. This study aimed to compare the efficiency of third ventricular floor bowing (TVFB) and ETV success score (ETVSS) in selecting ETV candidates and achieve a better preoperative patient selection method for ETV based on our institutional experience. Children (≤ 16 years old) with newly diagnosed hydrocephalus treated with ETV between January 2013 and June 2018 were included in this prospective study. Patients with TVFB will receive ETV procedure in the pediatric subgroup of our department. ETVSS was calculated in every patient. The ETVSS predicted ETV success rate and the actual ETV success rate in our institution were compared and further analyzed. One hundred twenty-nine children with TVFB were enrolled in our study.
The mean age at ETV was 5.84 ± 5.17 years (range, 0.04-16). Brain tumors, aqueductal stenosis, and inflammatory are the most common hydrocephalus etiologies. The most common complication was noninfectious fever (3.1%). During the average follow-up of 19.5 ± 14.95 months, twenty-five patients had depicted ETV failure. The actual ETV success rate (81%) in our study was higher than the success rate (69%) predicted by ETVSS. TVFB is a pragmatic, efficient, and simple model to predict the ETV outcome. We suggest that for hydrocephalic patients with preoperative third ventricular floor bowing, ETV should be the first-treatment choice regardless of the ETV success score. And for patients without such sign, ETVSS should be applied to select ETV candidates. abstract_id: PUBMED:35532636 Preoperative Third Ventricle Floor Bowing is Associated with Increased Surgical Success Rate in Patients Undergoing Endoscopic Third Ventriculostomy - A Systematic Review and Meta-analysis. Background: Endoscopic third ventriculostomy (ETV) is a procedure that involves devising an opening in the third ventricle floor, allowing cerebrospinal fluid to flow into the prepontine cistern and the subarachnoid space. Third ventricular floor bowing (TVFB) serves as an indicator of intraventricular obstruction in hydrocephalus and existence of pressure gradient across third ventricular floor, which is the prerequisite of a successful ETV. Objective: In this systematic review and meta-analysis, we aimed to synthesize the latest evidence on the TVFB as a marker for surgical success in patients undergoing ETV. Material And Methods: We performed a comprehensive search on topics that assesses the association of TVFB with the surgical success in patients undergoing ETV from several electronic databases. Results: There was a total of 568 subjects from six studies. TVFB was associated with 85% (81-88%) ETV success. TVFB was associated with OR 4.13 [2.59, 6.60], P < 0.001; I²: 6% for ETV success. Subgroup analysis on pediatric patients showed 86% (82-91%) success rate. In terms of value for ETV success compared to ETV Success Score (ETVSS), a high ETVSS does not significantly differ (P = 0.31) from TVFB and TVFB was associated with OR 3.14 [1.72, 5.73], P < 0.001; I²: 69% compared to intermediate/moderate ETVSS. Funnel plot analysis showed an asymmetrical funnel plot due to the presence of an outlier. Upon sensitivity analysis by removing the outlier, the OR was 3.62 [2.22, 5.89], P < 0.001; I²: 0% for successful surgery in TVFB. Conclusions: TVFB was associated with an increased rate of successful surgery in adults and children undergoing ETV. abstract_id: PUBMED:35751962 Prediction of 6 months endoscopic third ventriculostomy success rate in patients with hydrocephalus using a multi-layer perceptron network. Objective: Discrimination between patients most likely to benefit from endoscopic third ventriculostomy (ETV) and those at higher risk of failure is challenging. Compared to other standard models, we have tried to develop a prognostic multi-layer perceptron model based on potentially high-impact new variables for predicting the ETV success score (ETVSS). Methods: Clinical and radiological data of 128 patients have been collected, and ETV outcomes were evaluated. The success of ETV was defined as remission of symptoms and not requiring VPS for six months after surgery. Several clinical and radiological features have been used to construct the model.
Then the Binary Gravitational Search algorithm was applied to extract the best set of features. Finally, two models were created based on these features, multi-layer perceptron, and logistic regression. Results: Eight variables have been selected (age, callosal angle, bifrontal angle, bicaudate index, subdural hygroma, temporal horn width, third ventricle width, frontal horn width). The neural network model was constructed upon the selected features. The result was AUC:0.913 and accuracy:0.859. Then the BGSA algorithm removed half of the features, and the remaining (Age, Temporal horn width, Bifrontal angle, Frontal horn width) were applied to construct models. The ANN could reach an accuracy of 0.84, AUC:0.858 and Positive Predictive Value (PPV): 0.92, which was higher than the logistic regression model (accuracy:0.80, AUC: 0.819, PPV: 0.89). Conclusion: The research findings have shown that the MLP model is more effective than the classic logistic regression tools in predicting ETV success rate. In this model, two newly added features, the width of the lateral ventricle's temporal horn and the lateral ventricle's frontal horn, yield a relatively high inter-observer reliability. abstract_id: PUBMED:28686122 Failure of ETV in patients with the highest ETV success scores. OBJECTIVE Endoscopic third ventriculostomy (ETV) is a surgical alternative to placing a CSF shunt in certain patients with hydrocephalus. The ETV Success Score (ETVSS) is a reliable, simple method to estimate the success of the procedure by 6 months of postoperative follow-up. The highest score is 90, estimating a 90% chance of the ETV effectively treating hydrocephalus without requiring a shunt. Treatment with ETV fails in certain patients, despite their being the theoretically best candidates for the procedure. In this study the authors attempted to identify factors that further predicted success in patients with the highest ETVSSs. METHODS A retrospective review was performed of all patients treated with ETV at 3 institutions. Demographic, radiological, and clinical data were recorded. All patients by definition were older than 1 year, had obstructive hydrocephalus, and did not have a prior shunt. Failure of ETV was defined as the need for a shunt by 1 year. The ETV was considered a success if the patient did not require another surgery (either shunt placement or a repeat endoscopic procedure) by 1 year. A statistical analysis was performed to identify factors associated with success or failure. RESULTS Fifty-nine patients met the entry criteria for the study. Eleven patients (18.6%) required further surgery by 1 year. All of these patients received a shunt. The presenting symptom of lethargy statistically correlated with success (p = 0.0126, odds ratio [OR] = 0.072). The preoperative radiological finding of transependymal flow (p = 0.0375, OR 0.158) correlated with success. A postoperative larger maximum width of the third ventricle correlated with failure (p = 0.0265). CONCLUSIONS The preoperative findings of lethargy and transependymal flow statistically correlated with success. This suggests that the best candidates for ETV are those with a relatively acute elevation of intracranial pressure. Cases without these findings may represent the failures in this highly selected group. Answer: The Endoscopic Third Ventriculostomy Success Score (ETVSS) has been used to predict the success of endoscopic third ventriculostomy (ETV) in pediatric patients. 
In a series of 50 pediatric patients who underwent ETV, the ETVSS was calculated based on the patient's age, hydrocephalus etiology, and presence of a previous shunt. The success of ETV was defined by clinical improvement and radiological criteria, including reduction in ventricular size or stable ventricles with disappearance of periventricular edema and increased subarachnoid space over cerebral convexities, flow artifact in sagittal T2FSE MR, or bidirectional flow signal in 2D-CPC MR. The study found that ETV was successful in 29 patients (58%), and the mean ETVSS was significantly higher in those with successful outcomes compared to those with unsuccessful outcomes (71.03 vs. 60; p < 0.007) (PUBMED:22706984). The ETVSS has been validated in a mixed population of adult and pediatric patients, showing that it is marginally beneficial in clinical decision-making, with a reduction of avoidable ETVs per 100 cases if the threshold used on the ETVSS is set at 60 or 70. However, the ETVSS showed inferior net benefit compared with the strategy of not recommending ETV at all for thresholds set at 80 and 90. In this cohort, neither age nor previous shunt were significantly associated with unsuccessful ETV, but posthemorrhagic and postinfectious causes of hydrocephalus were significantly associated with reduced success rates (PUBMED:26207604). Additional studies have suggested that radiological findings such as transependymal edema (TEE) and third ventricle floor bowing (TVFB) may be reliable predictors of ETV success in pediatric patients. In one study, patients with TEE had a significantly better outcome than those without TEE (95% vs. 58%, P = 0.018), and TEE was a significant predictor of ETV success (PUBMED:34560294). Another study suggested that TVFB is a pragmatic, efficient, and simple model to predict ETV outcome, with an actual ETV success rate (81%) higher than the rate predicted by ETVSS (69%) (PUBMED:31691874 Continuing from where I stopped, a systematic review and meta-analysis found that preoperative third ventricle floor bowing (TVFB) was associated with an increased rate of successful surgery in both adults and children undergoing ETV. The success rate associated with TVFB was 85% (81-88%), and TVFB was associated with an odds ratio of 4.13 for ETV success. This analysis suggested that TVFB could be a useful supplement to ETVSS in predicting ETV success (PUBMED:35532636). In another study, a multi-layer perceptron network was used to predict the 6-month ETV success rate in patients with hydrocephalus. The model included several clinical and radiological features and demonstrated higher accuracy and positive predictive value than the classic logistic regression tools, suggesting that this model could be more effective in predicting ETV success (PUBMED:35751962). However, it is important to note that even patients with the highest ETVSS can experience failure of ETV. A study examining patients with the highest ETVSS found that 18.6% required further surgery by 1 year. Factors such as the presence of lethargy and transependymal flow preoperatively were statistically correlated with success, suggesting that the best candidates for ETV might be those with a relatively acute elevation of intracranial pressure (PUBMED:28686122). In conclusion, while the ETVSS can be a useful tool for predicting the success of ETV in pediatric patients, its predictive value may be enhanced by considering additional radiological findings such as TEE and TVFB, as well as other clinical factors. 
These findings suggest that the outcomes of pediatric patients undergoing ETV can be predicted to a certain extent, but individual patient factors must be carefully considered to optimize the prediction of ETV success.
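To make the scoring scheme referred to above concrete, here is a minimal Python sketch of an ETVSS calculation from the three inputs named in the answer (age, hydrocephalus etiology, previous shunt). The point values follow the commonly cited Kulkarni scheme but are reproduced from memory for illustration only; the function name and etiology labels are our own and should be checked against the original ETVSS publication before any use.

# Illustrative ETVSS sketch; point values assumed from the commonly cited scheme.
def etvss(age_years, etiology, previous_shunt):
    """Return an ETV Success Score (0-90), conventionally read as an
    approximate percentage chance that ETV succeeds by 6 months."""
    if age_years < 1 / 12:
        age_score = 0
    elif age_years < 0.5:
        age_score = 10
    elif age_years < 1:
        age_score = 30
    elif age_years < 10:
        age_score = 40
    else:
        age_score = 50

    etiology_scores = {
        "post-infectious": 0,
        "myelomeningocele": 20,
        "intraventricular hemorrhage": 20,
        "non-tectal brain tumor": 20,
        "aqueductal stenosis": 30,
        "tectal tumor": 30,
        "other": 30,
    }
    etiology_score = etiology_scores[etiology]

    shunt_score = 0 if previous_shunt else 10
    return age_score + etiology_score + shunt_score


# Example: a 7-year-old with aqueductal stenosis and no previous shunt.
print(etvss(7, "aqueductal stenosis", previous_shunt=False))  # 40 + 30 + 10 = 80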
Instruction: Do physicians underrecognize obesity? Abstracts: abstract_id: PUBMED:24945167 Do physicians underrecognize obesity? Objectives: A physician's advice is among the strongest predictors of efforts toward weight management made by obese patients, yet only a minority receives such advice. One contributor could be the physician's failure to recognize true obesity. The objectives of this study were to assess physicians' ability to recognize obesity and to identify factors associated with recognition and documentation of obesity. Methods: Internal medicine residents and attending physicians at three academic urban primary care clinics and their adult patients participated in a study using recognition and documentation of patient obesity as the main measures. Results: A total of 52 physicians completed weight assessments for 400 patients. The mean patient age was 51 years, 56% were women, 77% were Hispanic, and 67% had one or more obesity-related comorbidity. There were 192 (48%) patients, of whom 66% were correctly identified by physicians as being obese, 86% of those with a body mass index (BMI) ≥ 35, but only 49% of those with a BMI of 30 to 34.9 (P &lt; 0.0001). Fewer obese Hispanic patients were identified than were non-Hispanic patients (62% vs 76%; P = 0.03). No physician characteristics were significantly associated with recognition of obesity. Physicians documented obesity as a problem for 51% of patients. Attending physicians documented obesity more frequently than did residents (64% vs 43%, odds ratio 2.5, 95% confidence interval 1.3-4.6) and normal-weight physicians documented obesity more frequently than overweight physicians (58% vs 41%, odds ratio 2.0, 95% confidence interval 1.0-4.0). Documentation was more common for patients with a BMI ≥ 35 and for non-Hispanics. Documentation was not more common for patients with obesity-related comorbidities. Conclusions: Physicians have difficulty recognizing obesity unless patients' BMI is ≥ 35. Training physicians to recognize true obesity may increase rates of documentation, a first step toward treatment. abstract_id: PUBMED:29479464 Evidence of a gap in understanding obesity among physicians. Background: Experience suggests that some physicians view obesity as a purely lifestyle condition rather than a chronic metabolic disease. Physicians may not be aware of the role of biological factors in causing weight regain after an initial weight loss. Methods: A questionnaire was administered at continuing medical education conferences, both primary care and obesity-specific. The questionnaire included items about biological and behavioral factors that predispose to weight regain and general items about treatment of obesity. The sample was separated into primary care physicians (PCPs) and physicians preparing for the obesity medicine (OMs) exam. Results: Among all respondents, behavioral factors were given higher importance ratings, relative to biological factors in causing weight regain. Respondents rated behaviour modification as more effective, relative to medications or surgery to treat obesity. OMs gave higher importance ratings to biological factors, relative to PCPs. OMs also gave higher effectiveness ratings for medications and surgery, relative to PCPs. However, even OMs gave higher effectiveness ratings for behaviour modification, relative to medications or surgery. Respondents who reported a belief in the role of behavioral factors rated lifestyle modification as more effective. 
Respondents who reported a belief in both behavioral and biological factors rated medications as more effective. Conclusions: Physicians rate biological factors as less important, relative to behavioral factors in causing weight regain. Physicians rate medications and surgery as less effective, relative to lifestyle modification alone. Belief in the importance of behavioral factors correlated with a higher effectiveness rating for lifestyle modification. A better understanding of the biological basis for weight regain may help to increase comfort with the use of biological treatments for obesity. abstract_id: PUBMED:31707395 Examining Weight Bias among Practicing Canadian Family Physicians. Objectives: The aim of this study was to examine the attitudes of practicing Canadian family physicians about individuals with obesity, their healthcare treatment, and perceptions of obesity treatment in the public healthcare system. Method: A national sample of Canadian practicing family physicians (n = 400) completed the survey. Participants completed measures of explicit weight bias, attitudes towards treating patients with obesity, and perceptions that people with obesity increase demand on the public healthcare system. Results: Responses consistent with weight bias were not observed overall but were demonstrated in a sizeable minority of respondents. Many physicians also reported feeling frustrated with patients with obesity and agreed that people with obesity increase demand on the public healthcare system. Male physicians had more negative attitudes than females. More negative attitudes towards treating patients with obesity were associated with greater perceptions of them as a public health demand. Conclusion: Results suggest that negative attitudes towards patients with obesity exist among some family physicians in Canada. It remains to be determined if physicians develop weight bias partly because they blame individuals for their obesity and its increased demand on the Canadian public healthcare system. More research is needed to better understand causes and consequences of weight bias among health professionals and make efforts towards its reduction in healthcare. abstract_id: PUBMED:31405649 Assessment of physicians' knowledge to combat obesity in Bangladesh. Objective: Physicians need to play a proactive role to combat obesity and its associated comorbidities. The present survey was conducted to assess the awareness, knowledge, practice and attitude of the physicians in Bangladesh in the prevention and management of obesity. Methods: Three hundred physicians were randomly selected from a medical university, a government medical college and a private medical college in Bangladesh to be included in this survey. All of them voluntarily participated in the survey upon the assurance of anonymity. All the selected physicians were provided with a questionnaire consisting of nine questions for assessing their awareness, knowledge, practice and attitude regarding obesity. Results: Out of 300 participants, about 77% claimed that they know their own BMI and BMI cut-off points for overweight and obesity. But 38% physicians were unable to write the cut-off points correctly. Near about 50% physicians claimed that they know the BMI cut-off points for Asian population. However, only 7% were able to correctly write the BMI cut-off points for Asian population. About 47% physicians agreed that they do not calculate BMI or evaluate other measures of body fatness during clinical practice. 
However, 99% of the physicians considered that measuring BMI during consultation or clinical practice is important. Conclusions: It may be concluded that Bangladeshi physicians' have positive attitude for managing obesity but their practice is grossly inadequate. Most importantly, knowledge and awareness of the physicians about diagnosis of obesity is very poor. abstract_id: PUBMED:11815326 Personal and professional nutrition-related practices of US female physicians. Background: The extent to which female physicians personally and clinically adhere to dietary recommendations is unknown and has implications for patients. Objectives: We aimed to identify US female physicians' personal and professional nutrition- and weight-related habits and to identify which, if any, of their personal habits predicted their clinical practices. Design: Our sample included the 4501 respondents to the Women Physicians' Health Study, a large, cross-sectional, questionnaire-based study of the health behaviors and counseling practices of US female physicians. Results: Forty-three percent of physicians performed nutrition counseling, and 50% performed weight counseling with patients at least yearly. Forty-six percent thought that discussing nutrition was highly relevant to their practices, 47% thought the same about discussing weight, and 21% stated that they had received extensive related training. Primary care physicians, obstetricians-gynecologists, pediatricians, vegetarians, and those with a personal history of obesity were more likely to provide nutrition and weight counseling to patients. Female physicians report regularly performing more nutrition and weight counseling than they do most other types of prevention-related counseling. Female physicians report relatively healthy diet-related habits, and these personal habits are related to their likelihood to counsel their patients about nutrition and weight. Conclusions: Nutrition and weight-related issues are important to female physicians in both their personal and professional lives, and these 2 spheres influence each other. abstract_id: PUBMED:20522620 Physicians' perspectives on referring obese adolescents to pediatric multidisciplinary weight management programs. Objective: To identify factors that might influence physicians' referrals of obese adolescents to pediatric multidisciplinary weight management (PMWM) programs. Design/methods: Survey of a national sample of 375 pediatricians (PDs) and 375 family physicians (FPs) explored program availability, referral history, desired services, and when in the course of treatment physicians would refer. Differences were examined via chi(2) tests. Results: Response rate was 67%. More PDs than FPs reported having a PMWM program available (46% vs 10%, P &lt; .01). More PDs (PD 83% vs FP 53%, P &lt; .01) and female physicians (88% vs 65%, P &lt; .01) reported having made a referral. Most physicians wanted coordinated diet, activity, and behavioral therapy (79%). Almost all physicians indicated they would refer when unsure of what else to do, or if requested by the patient/parent. Conclusions: PMWM program referrals appear limited by availability. These data also suggest physicians may be reticent to refer. Further work should examine whether this affects patient outcomes. abstract_id: PUBMED:27155958 Teaching Physicians Motivational Interviewing for Discussing Weight With Overweight Adolescents. 
Purpose: We tested whether an online intervention combined with a patient feedback report improved physicians' use of motivational interviewing (MI) techniques when discussing weight with overweight and obese adolescents. Methods: We randomized 46 pediatricians and family physicians and audio recorded 527 patient encounters. Half of the physicians received an individually tailored, online intervention. Then, all physicians received a summary report detailing patient's weight-related behaviors. We coded MI techniques and used multilevel linear mixed-effects models to examine arm differences. We assessed patients' motivation to change and perceived empathy after encounter. Results: We found arm differences in the Intervention Phase and the Summary Report Phase: Empathy (p &lt; .001), MI Spirit (p &lt; .001), open questions (p = .02), and MI consistent behaviors (p = .04). Across all three phases (Baseline, Intervention, and Summary Report), when physicians had higher Empathy scores, patients were more motivated to change diet (p = .03) and physical activity (p = .03). In addition, patients rated physicians as more empathic when physicians used more MI consistent techniques (p = .02). Conclusions: An individually tailored, online intervention coupled with a Summary Report improved physicians' use of MI, which improved the patient experience. abstract_id: PUBMED:16601775 Risk factors for cardiovascular diseases in physicians. The aim of the study was to determine the prevalence of risk factors for cardiovascular diseases among physicians at a teaching hospital. In total, 203 men and 167 women were included in the study. The participants filled in a questionnaire; their height, weight, blood pressure, serum cholesterol and glucose levels were added. 19.2 % males and 13.8 % females were smokers, hypertension was diagnosed in 10 % of males and in 6.6 % of females, 52.2 % males and 17.4 % females were overweight, 37 % males and 43.1 % females had hypercholesterolemia. The above findings suggest that Czech physicians have more favourable values of all the studied cardiovascular diseases risk factors than the general Czech population. However, Czech physicians smoke more than those in other countries and their level of cardiovascular diseases risk factors is unsatisfactory and calls for further intensive prevention. Preliminary outcomes of the study repeated after two years show no positive trends as well as physicians' low willingness to actively participate in lowering cardiovascular diseases risk factors. abstract_id: PUBMED:20401742 To cut or not to cut: physicians' perspectives on referring adolescents for bariatric surgery. Background: As the prevalence and severity of obesity among adolescents has increased, so has the number seeking bariatric surgery. Little is known about the opinions and referral behaviors of primary care physicians regarding bariatric surgery among adolescents. Therefore, the objective of this study was to assess primary care physicians' opinions regarding referral of obese adolescents for bariatric surgery. Methods: In spring of 2007, a two-page survey was fielded to a national random sample of physicians (375 pediatricians and 375 family physicians). The survey explored physicians' opinions about: (1) whether they would ever refer an adolescent for bariatric surgery, (2) the minimum age at which bariatric surgery should be considered, and (3) prerequisites to bariatric surgery. Chi-square tests were used to examine associations in responses. 
Results: The response rate was 67%. Nearly one-half of physicians (48%) would not ever refer an obese adolescent for bariatric surgery. The most frequently endorsed minimum age at which physicians would make a referral for bariatric surgery was 18 years (46%). Almost all respondents endorsed the need for participation in a monitored weight management program prior to bariatric surgery (99%). However, the recommended duration of treatment varied from 3 months to over 5 years, with almost half recommending 12 months of monitored treatment prior to surgery. Conclusion: Some severely obese adolescents may desire and potentially benefit from bariatric surgery, but referral for the procedure may depend heavily on the attitudes of their primary care physicians. abstract_id: PUBMED:15494881 Physicians' attitude toward identification and management of childhood obesity in Israel. Obesity is a serious health problem, and is becoming increasingly common in affluent societies. In 1998, an Expert Committee published guidelines regarding obesity evaluation and treatment. The purpose of this study was to assess the attitude of primary care physicians in Israel toward diagnosis and treatment of childhood obesity, as related to the recommended guidelines. Primary physicians caring for children and adolescents were asked to complete an anonymous questionnaire including personal and professional details, methods of diagnosis, documentation and treatment of childhood obesity, and familiarity with and implementation of the Expert Committee recommendations. One hundred forty-four physicians, treating approximately 100,000 children monthly, completed the questionnaire. Ninety-four percent were considered to have diagnosed obesity properly. Furthermore, only 19% reported weighing all children examined, while 99% of the physicians suggested some treatment for obesity. The most frequent recommendations for managing obesity were referral to a dietitian (92%), physical exercise (85%), and group treatment (27%). The majority of physicians (78%) were not familiar with the new Expert Committee recommendations regarding obesity treatment. This study suggests that the majority of primary physicians diagnose obesity properly and recommend accepted modalities to manage obesity. A comprehensive program to prevent and treat obesity is recommended to improve the health status of the population. Answer: Yes, physicians do underrecognize obesity. A study involving internal medicine residents and attending physicians at three academic urban primary care clinics found that only 66% of patients with obesity were correctly identified by physicians as being obese. This recognition was even lower for patients with a BMI of 30 to 34.9, with only 49% being correctly identified as obese. Additionally, fewer obese Hispanic patients were identified compared to non-Hispanic patients (62% vs 76%). The study also noted that physicians documented obesity as a problem for only 51% of patients, with attending physicians documenting obesity more frequently than residents (64% vs 43%) and normal-weight physicians documenting obesity more frequently than overweight physicians (58% vs 41%) (PUBMED:24945167). Another study highlighted a gap in understanding obesity among physicians, suggesting that some view obesity as a lifestyle condition rather than a chronic metabolic disease. 
Physicians in the study rated behavioral factors as more important than biological factors in causing weight regain and rated behavior modification as more effective than medications or surgery for treating obesity. Even among physicians preparing for the obesity medicine exam, behavior modification was rated higher than medications or surgery, although they did give higher importance ratings to biological factors compared to primary care physicians (PUBMED:29479464). Furthermore, a survey of Canadian family physicians revealed that while explicit weight bias was not observed overall, a sizeable minority of respondents demonstrated negative attitudes towards patients with obesity. Many physicians also reported feeling frustrated with patients with obesity and agreed that people with obesity increase demand on the public healthcare system (PUBMED:31707395). In Bangladesh, a survey assessing physicians' awareness, knowledge, practice, and attitude in the prevention and management of obesity found that while most physicians considered measuring BMI during consultation important, their actual practice was inadequate, and their knowledge about diagnosing obesity was very poor (PUBMED:31405649). These findings collectively indicate that there is a significant issue with the underrecognition of obesity by physicians, which may impact the advice and treatment provided to patients with obesity.
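Since the recognition figures above hinge on BMI strata (identification was far better at BMI ≥ 35 than in the 30-34.9 range), a short sketch of the underlying arithmetic may be useful. The cut-offs below are the standard WHO adult categories; the lower Asian-specific cut-offs mentioned in the Bangladeshi survey differ and are deliberately not encoded here.

def bmi(weight_kg, height_m):
    """Body mass index = weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def who_class(bmi_value):
    """Standard WHO adult categories; Asian-specific cut-offs are lower."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "normal weight"
    if bmi_value < 30:
        return "overweight"
    if bmi_value < 35:
        return "obesity class I"   # the 30-34.9 range physicians most often missed
    if bmi_value < 40:
        return "obesity class II"
    return "obesity class III"

# Example: 102 kg at 1.75 m gives a BMI of about 33.3, i.e. class I obesity.
value = bmi(102, 1.75)
print(round(value, 1), who_class(value))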
Instruction: Presence of lymphocyte aggregates in the synovium of patients with early arthritis in relationship to diagnosis and outcome: is it a constant feature over time? Abstracts: abstract_id: PUBMED:21173012 Presence of lymphocyte aggregates in the synovium of patients with early arthritis in relationship to diagnosis and outcome: is it a constant feature over time? Objectives: To evaluate the presence of lymphocyte aggregates in synovial tissue of patients with early arthritis in relationship to clinical outcome and to determine whether this is a stable feature over time. Methods: Arthroscopic synovial biopsy samples were collected in a prospective cohort of disease-modifying antirheumatic drug-naïve patients with early arthritis (&lt;1 year's disease duration) at baseline (n=93) and, if rheumatoid arthritis was suspected, after 6 months of follow-up (n=17). After 2 years of follow-up, definitive diagnosis and clinical outcome were assessed. Size of synovial lymphocyte aggregates was graded (score 1-3). Lymphoid neogenesis (LN) was defined by the presence of grade ≥2 aggregates and subclassified based on the presence of follicular dendritic cells (FDCs). Results: LN was present in 36% of all patients and FDCs in 15% of patients with LN. Presence of lymphocyte aggregates differed over time. LN was associated with the degree of synovial inflammation. There was no relationship between the presence of lymphocyte aggregates at baseline and definitive diagnosis or clinical outcome after follow-up. Conclusions: Presence of lymphocyte aggregates is a dynamic phenomenon related to the degree of synovitis and can be detected in different forms of early arthritis. This feature does not appear to be related to clinical outcome. abstract_id: PUBMED:16859518 Synovial fluid leukocyte apoptosis is inhibited in patients with very early rheumatoid arthritis. Synovial leukocyte apoptosis is inhibited in established rheumatoid arthritis (RA). In contrast, high levels of leukocyte apoptosis are seen in self-limiting crystal arthritis. The phase in the development of RA at which the inhibition of leukocyte apoptosis is first apparent, and the relationship between leukocyte apoptosis in early RA and other early arthritides, has not been defined. We measured synovial fluid leukocyte apoptosis in very early arthritis and related this to clinical outcome. Synovial fluid was obtained at presentation from 81 patients with synovitis of &lt; or = 3 months duration. The percentages of apoptotic neutrophils and lymphocytes were assessed on cytospin preparations. Patients were assigned to diagnostic groups after 18 months follow-up. The relationship between leukocyte apoptosis and patient outcome was assessed. Patients with early RA had significantly lower levels of neutrophil apoptosis than patients who developed non-RA persistent arthritis and those with a resolving disease course. Similarly, lymphocyte apoptosis was absent in patients with early RA whereas it was seen in patients with other early arthritides. The inhibition of synovial fluid leukocyte apoptosis in the earliest clinically apparent phase of RA distinguishes this from other early arthritides. The mechanisms for this inhibition may relate to the high levels of anti-apoptotic cytokines found in the early rheumatoid joint (e.g. IL-2, IL-4, IL-15 GMCSF, GCSF). It is likely that this process contributes to an accumulation of leukocytes in the early rheumatoid lesion and is involved in the development of the microenvironment required for persistent RA. 
abstract_id: PUBMED:23006144 Expression of IL-20 in synovium and lesional skin of patients with psoriatic arthritis: differential response to alefacept treatment. Introduction: Psoriatic arthritis (PsA) is an inflammatory joint disease associated with psoriasis. Alefacept (a lymphocyte function-associated antigen (LFA)-3 Ig fusion protein that binds to CD2 and functions as an antagonist to T-cell activation) has been shown to result in improvement in psoriasis but has limited effectiveness in PsA. Interleukin-20 (IL-20) is a key proinflammatory cytokine involved in the pathogenesis of psoriasis. The effects of alefacept treatment on IL-20 expression in the synovium of patients with psoriasis and PsA are currently unknown. Methods: Eleven patients with active PsA and chronic plaque psoriasis were treated with alefacept (7.5 mg per week for 12 weeks) in an open-label study. Skin biopsies were taken before and after 1 and 6 weeks, whereas synovial biopsies were obtained before and 4 and 12 weeks after treatment. Synovial biopsies from patients with rheumatoid arthritis (RA) (n = 10) were used as disease controls. Immunohistochemical analysis was performed to detect IL-20 expression, and stained synovial tissue sections were evaluated with digital image analysis. Double staining was performed with IL-20 and CD68 (macrophages), and conversely with CD55 (fibroblast-like synoviocytes, FLSs) to determine the phenotype of IL-20-positive cells in PsA synovium. IL-20 expression in skin sections (n = 6) was analyzed semiquantitatively. Results: IL-20 was abundantly expressed in both PsA and RA synovial tissues. In inflamed PsA synovium, CD68+ macrophages and CD55+ FLSs coexpressed IL-20, and its expression correlated with the numbers of FLSs. IL-20 expression in lesional skin of PsA patients decreased significantly (P = 0.04) 6 weeks after treatment and correlated positively with the Psoriasis Area and Severity Index (PASI). IL-20 expression in PsA synovium was not affected by alefacept. Conclusions: Conceivably, the relatively limited effectiveness of alefacept in PsA patients (compared with anti-tumor necrosis factor (TNF) therapy) might be explained in part by persistent FLS-derived IL-20 expression. abstract_id: PUBMED:24489612 Targeting the synovial angiogenesis as a novel treatment approach to osteoarthritis. Synovitis is a key feature in osteoarthritis and is associated with symptom severity. Synovial membrane inflammation is secondary to cartilage degradation which occurs in the early stage and is located adjacent to cartilage damage. This inflammation is characterized by the invasion and activation of macrophages and lymphocytes, the release in the joint cavity of large amounts of pro-inflammatory and procatabolic mediators, and by a local increase of synovial membrane vascularity. This latter process plays an important role in the chronicity of the inflammatory reaction by facilitating the invasion of the synovium by immune cells. Therefore, synovial membrane angiogenesis represents a key target for the treatment of osteoarthritis. This paper is a narrative review of the literature referenced in PubMed during the past 5 years. It addresses in particular three questions. What are the mechanisms involved in synovium blood vessels invasion? Are current medications effective in controlling blood vessels formation and invasion? What are the perspectives of research in this area? abstract_id: PUBMED:22849522 Light and electron microscopic features of synovium in patients with psoriatic arthritis. 
Introduction: Few ultrastructural studies have been reported in psoriatic arthritis (PsA). The authors report a series of synovial biopsies with emphasis on patients with early disease to look for distinctive light (LM) and electron microscopic (EM) features of possible importance. Methods: The authors examined synovial biopsies obtained primarily by needle biopsy from 13 PsA patients using LM and/or EM. Sections from 12 patients were evaluated by LM for vascularity, synovial lining thickness, fibrin deposition, and inflammation via a semi-quantitative scale. Nine EM specimens were descriptively analyzed. Clinical, synovial fluid (SF), and radiographic characteristics were recorded. Results: Patients were mostly male, with mean disease duration before biopsy of 2.19 ± 2.60 years; 7 patients had arthritis for less than 1 year. All patients had peripheral arthritis, 2 had axial involvement. SFs disclosed predominance of polymorphonuclear leukocytes. LM demonstrated proliferation of synovial lining cells, lymphocyte and plasma cell infiltration, as well as dramatic clusters of small vessels in the superficial synovium. EMs showed more detailed vascular changes, including small, subendothelial, electron-dense deposits and scattered microparticles in vessel lumens and walls. Conclusions: Prominent vascularity is confirmed as an important feature of some PsA. Vascular changes and other features, including the first EM demonstration of microparticles in PsA (identified as potent factors in other inflammatory joint diseases), are potential targets for therapy. abstract_id: PUBMED:10408797 Clonal characteristics of T cell infiltrates in skin and synovium of patients with psoriatic arthritis. Psoriasis is a chronic inflammatory skin disease that is often complicated by an inflammatory arthritis. Considerable evidence implicates cellular immune responses in psoriatic skin lesions, but the pathogenesis of the associated arthritis has not been elucidated. We analyzed T cell antigen receptor beta chain variable (TCRbetaV) gene repertoires among peripheral blood lymphocytes, skin and synovium of nine patients with psoriatic arthritis. RNase protection assays were used to quantitate the expression levels of 25 TCRbetaV genes, and CDR3 region sequencing was used to further characterize selected expansions. All patients exhibited significant TCRbetaV biases in the peripheral blood and moreover, all had expansions common to both skin and synovium. CDR3 sequencing demonstrated these expansions frequently consisted of oligo- or monoclonal populations. Although no ubiquitous CDR3 nucleotide sequences were identified, two patients shared identical sequences and several highly homologous amino acid motifs were present in skin and synovium among and between individual patients. Findings of common TCRbetaV expansions in diverse inflammatory sites, among multiple afflicted individuals, suggest that these T cell proliferations are driven by engagements with a limited set of conventional antigens. These findings demonstrate an important role for cognate T cell responses in the pathogenesis of psoriatic arthritis, and further suggest the inciting antigen(s) is identical or homologous between afflicted skin and synovium. abstract_id: PUBMED:31461658 Molecular Portraits of Early Rheumatoid Arthritis Identify Clinical and Treatment Response Phenotypes. There is a current imperative to unravel the hierarchy of molecular pathways that drive the transition of early to established disease in rheumatoid arthritis (RA). 
Herein, we report a comprehensive RNA sequencing analysis of the molecular pathways that drive early RA progression in the disease tissue (synovium), comparing matched peripheral blood RNA-seq in a large cohort of early treatment-naive patients, namely, the Pathobiology of Early Arthritis Cohort (PEAC). We developed a data exploration website (https://peac.hpc.qmul.ac.uk/) to dissect gene signatures across synovial and blood compartments, integrated with deep phenotypic profiling. We identified transcriptional subgroups in synovium linked to three distinct pathotypes: fibroblastic pauci-immune pathotype, macrophage-rich diffuse-myeloid pathotype, and a lympho-myeloid pathotype characterized by infiltration of lymphocytes and myeloid cells. This is suggestive of divergent pathogenic pathways or activation disease states. Pro-myeloid inflammatory synovial gene signatures correlated with clinical response to initial drug therapy, whereas plasma cell genes identified a poor prognosis subgroup with progressive structural damage. abstract_id: PUBMED:2118831 Studies on the homing of Mycobacterium-sensitized T lymphocytes to the synovium during passive adjuvant arthritis. The migration of intravenously administered adjuvant sensitized T lymphocytes to the knee synovium of recipient rats undergoing passive adjuvant arthritis has been followed. Using fluorescein isothiocyanate (FITC)-labeled adjuvant-sensitized T cells and anticollagen IgG, the present studies demonstrate the presence of fluorescent cells in the inflamed knee synovium of recipient rats undergoing passive arthritis. Proliferation studies indicate that synovial cells from these rats respond to Mycobacterium tuberculosis (MT). Since cross-reactivity between Mycobacterial antigens and cartilage proteoglycans has been previously demonstrated, it is suggested that adjuvant-sensitized T cells that are injected into naive rats migrate to the synovium, proliferate in response to cartilage proteoglycan, and initiate passive arthritis. abstract_id: PUBMED:32825448 Inflammatory Cytokine-Producing Cells and Inflammation Markers in the Synovium of Osteoarthritis Patients Evidenced in Human Herpesvirus 7 Infection. A direct association between joint inflammation and the progression of osteoarthritis (OA) has been proposed, and synovitis is considered a powerful driver of the disease. Among infections implicated in the development of joint disease, human herpesvirus 7 (HHV-7) infection remains poorly characterized. Therefore, we assessed synovitis in OA patients; determined the occurrence and distribution of the HHV-7 antigen within the synovial membrane of OA-affected subjects; and correlated plasma levels of the pro-inflammatory cytokines tumor necrosis factor (TNF), interleukin-6 (IL-6), and TNF expressed locally within lesioned synovial tissues with HHV-7 observations, suggesting differences in persistent latent and active infection. Synovial HHV-7, CD4, CD68, and TNF antigens were detected immunohistochemically. The plasma levels of TNF and IL-6 were measured by an enzyme-linked immunosorbent assay. Our findings confirm the presence of persistent HHV-7 infection in 81.5% and reactivation in 20.5% of patients. In 35.2% of patients, virus-specific DNA was extracted from synovial membrane tissue samples. We evidenced the absence of histopathologically detectable synovitis and low-grade changes in the majority of OA patients enrolled in the study, in both HHV-7 PCR+ and HHV-7 PCR‒ groups. 
The number of synovial CD4-positive cells in the HHV-7 polymerase chain reaction (PCR)+ group was significantly higher than that in the HHV-7 PCR‒ group. CD4- and CD68-positive cells were differently distributed in both HHV-7 PCR+ and HHV-7 PCR‒ groups, as well as in latent and active HHV-7 infection. The number of TNF+ and HHV-7+ lymphocytes, as well as HHV-7+ vascular endothelial cells, was strongly correlated. Vascular endothelial cells, especially in the case of infection reactivation, appeared vulnerable. The balance between virus latency and reactivation is a long-term relationship between the host and infectious agent, and the immune system appears to be involved in displaying overreaction when a shift in the established equilibrium develops. abstract_id: PUBMED:11940115 Temporal changes in the distribution of thoracic duct lymphoblasts to synovium and other tissues of rats with adjuvant-induced arthritis. The distribution of lymphoblasts(lymphocytes in cell cycle) obtained from the central lymph of donor rats and transferred adoptively to syngeneic recipients has been shown previously to be influenced by the presence of arthritis in either donor or recipient rats. The intent of the present study was to examine patterns of distribution of lymphoblasts in the early period after transfer, when extravasation of donor lymphoblasts was expected to occur. Thoracic duct lymphoblasts labelled in vitro with [125I]-iododeoxyuridine were detected in recipient rats by external radiometry and autoradiography. Irrespective of donor status, fewer donor lymphoblasts accumulated in the feet of normal recipients when compared to arthritic recipients at 15 min, 2 h and 24 h after cell transfer.When recipients of similar disease status were compared, the percentages of injected lymphoblasts from normal and arthritic donors recovered in the feet were similar at 15 min and 2 h after transfer. The proportions of lymphoblasts recovered in the feetat 24 h after injection declined in normal recipients and arthritic recipients of cells from normal donor rats. Importantly,this decline did not occur when both the donor and the recipient were arthritic. In the hindpaws, donor lymphoblasts were located predominantly in the bone marrow, except in transfers between arthriticrats, when at 24 h they were predominantly in the synovium. At 15 min, lymphoblasts were detected within the lumen of vessels within synovium, whereas by 2 h extravasation of these cells was evident. In conclusion, lymphoblasts accumulate more readily in hindfeet that are inflamed. In the early hours after injection, lymphoblasts from normal and arthritic donors are recruited equally, but these early levels are only maintained for 24 hin the combination of arthritic donor and arthritic recipient. Adramatic change in the proportion of lymphoblasts located in synoviumat this later time suggests that a dynamic process of relocation,retention and/or local cell division maintains the numbers of arthritic donor cells in the latter combination. Answer: The presence of lymphocyte aggregates in the synovium of patients with early arthritis is not a constant feature over time. A study evaluating synovial tissue from patients with early arthritis found that the presence of lymphocyte aggregates differed over time and was associated with the degree of synovial inflammation. 
However, there was no relationship between the presence of lymphocyte aggregates at baseline and definitive diagnosis or clinical outcome after follow-up, suggesting that this feature does not predict the progression or outcome of the disease (PUBMED:21173012).
Instruction: Should I stay or should I go? Abstracts: abstract_id: PUBMED:29354045 Dopaminergic Therapy Increases Go Timeouts in the Go/No-Go Task in Patients with Parkinson's Disease. Parkinson's disease (PD) is characterized by resting tremor, rigidity and bradykinesia. Dopaminergic medications such as L-dopa treat these motor symptoms, but can have complex effects on cognition. Impulse control is an essential cognitive function. Impulsivity is multifaceted in nature. Motor impulsivity involves the inability to withhold pre-potent, automatic, erroneous responses. In contrast, cognitive impulsivity refers to improper risk-reward assessment guiding behavior. Informed by our previous research, we anticipated that dopaminergic therapy would decrease motor impulsivity though it is well known to enhance cognitive impulsivity. We employed the Go/No-go paradigm to assess motor impulsivity in PD. Patients with PD were tested using a Go/No-go task on and off their normal dopaminergic medication. Participants completed cognitive, mood, and physiological measures. PD patients on medication had a significantly higher proportion of Go trial Timeouts (i.e., trials in which Go responses were not completed prior to a deadline of 750 ms) compared to off medication (p = 0.01). No significant ON-OFF differences were found for Go trial or No-go trial response times (RTs), or for number of No-go errors. We interpret that dopaminergic therapy induces a more conservative response set, reflected in Go trial Timeouts in PD patients. In this way, dopaminergic therapy decreased motor impulsivity in PD patients. This is in contrast to the widely recognized effects of dopaminergic therapy on cognitive impulsivity leading in some patients to impulse control disorders. Understanding the nuanced effects of dopaminergic treatment in PD on cognitive functions such as impulse control will clarify therapeutic decisions. abstract_id: PUBMED:37689007 Attentional priming in Go No-Go search tasks. Go/No-Go responses in visual search yield different estimates of the operation of visual attention than more standard present versus absent tasks. Such minor methodological tweaks have a surprisingly large effect on measures that have, for the last half-century or so, formed the backbone of prominent theories of visual attention. Secondly, priming effects in visual search have a dominating influence on visual search, accounting for effects that have been attributed to top-down guidance in standard theories. Priming effects in visual search have, however, never been investigated for searches involving Go/No-Go present/absent decisions. Here, Go/No-Go tasks were used to assess visual search for an odd-one-out face, defined either by color or facial expression. The Go/No-Go responses for the color-based task were very fast for both present and absent trials and notably, they resulted in negative slopes of RT and set size. Interestingly "Go" responses were even faster for the target absent case. The "Go" responses were, on the other hand, much slower for expression and became higher with increased set-size, particularly for the target-absent response. Priming effects were considerable for the feature search, but for expression, the target absent priming was strong, but did not occur for target present trials, arguing that repetition priming for this search mainly reflects priming of context rather than target features. 
Overall, the results reinforce the point that Go/No-Go tasks are highly informative for theoretical accounts of visual attention and are shown here to cast a new light on attentional priming. abstract_id: PUBMED:29404378 Modeling Individual Differences in the Go/No-go Task with a Diffusion Model. The go/no-go task is one in which there are two choices, but the subject responds only to one of them, waiting out a time-out for the other choice. The task has a long history in psychology and modern applications in the clinical/neuropsychological domain. In this article we fit a diffusion model to both experimental and simulated data. The model is the same as the two-choice model and assumes that there are two decision boundaries and termination at one of them produces a response and at the other, the subject waits out the trial. In prior modeling, both two-choice and go/no-go data were fit simultaneously and only group data were fit. Here the model is fit to just go/no-go data for individual subjects. This allows analyses of individual differences which is important for clinical applications. First, we fit the standard two-choice model to two-choice data and fit the go/no-go model to RTs from one of the choices and accuracy from the two-choice data. Parameter values were similar between the models and had high correlations. The go/no-go model was also fit to data from a go/no-go version of the task with the same subjects as the two-choice task. A simulation study with ranges of parameter values that are obtained in practice showed similar parameter recovery between the two-choice and go/no-go models. Results show that a diffusion model with an implicit (no response) boundary can be fit to data with almost the same accuracy as fitting the two-choice model to two-choice data. abstract_id: PUBMED:32992713 Effect of Age in Auditory Go/No-Go Tasks: A Magnetoencephalographic Study. Response inhibition is frequently examined using visual go/no-go tasks. Recently, the auditory go/no-go paradigm has been also applied to several clinical and aging populations. However, age-related changes in the neural underpinnings of auditory go/no-go tasks are yet to be elucidated. We used magnetoencephalography combined with distributed source imaging methods to examine age-associated changes in neural responses to auditory no-go stimuli. Additionally, we compared the performance of high- and low-performing older adults to explore differences in cortical activation. Behavioral performance in terms of response inhibition was similar in younger and older adult groups. Relative to the younger adults, the older adults exhibited reduced cortical activation in the superior and middle temporal gyrus. However, we did not find any significant differences in cortical activation between the high- and low-performing older adults. Our results therefore support the hypothesis that inhibition is reduced during aging. The variation in cognitive performance among older adults confirms the need for further study on the underlying mechanisms of inhibition. abstract_id: PUBMED:26955650 BOLD data representing activation and connectivity for rare no-go versus frequent go cues. The neural circuitry underlying response control is often studied using go/no-go tasks, in which participants are required to respond as fast as possible to go cues and withhold from responding to no-go stimuli. 
In the current task, response control was studied using a fully counterbalanced design in which blocks with a low frequency of no-go cues (75% go, 25% no-go) were alternated with blocks with a low frequency of go cues (25% go, 75% no-go); see also "Segregating attention from response control when performing a motor inhibition task: Segregating attention from response control" [1]. We applied a whole brain corrected, paired t-test to the data assessing for regions differentially activated by low frequency no-go cues relative to high frequency go cues. In addition, we conducted a generalized psychophysiological interaction analysis on the data using a right inferior frontal gyrus seed region. This region was identified through the BOLD response t-test and was chosen because right inferior gyrus is highly implicated in response inhibition. abstract_id: PUBMED:26869060 Perioperative Predictors of Length of Stay After Total Hip Arthroplasty. Background: Few studies had examined whether specific patient variables or performance on functional testing can predict length of stay (LOS) after total hip arthroplasty (THA). Such tools would enable providers to minimize prolonged LOS by planning appropriate discharge dispositions preoperatively. Methods: We prospectively recruited 120 patients undergoing a THA through an anterior (n = 40), posterior (n = 40), or lateral (n = 40) approach. Patients performed a timed up-and-go (TUG) test preoperatively to determine if it was predictive of hospital LOS after THA. Other variables of interest included patient age, body mass index, age-adjusted Charlson Comorbidity Index, mean procedure time, and time spent in the postanesthetic care unit. A logistic regression analysis was performed to determine which variables predicted LOS greater than 48 hours, which is our institution's target time to discharge. Results: The TUG test was predictive of LOS beyond 48 hours. For every 5-second interval increase in TUG time, patients were twice as likely to stay in hospital beyond 48 hours (odds ratio [OR] = 2.02, 95% confidence interval [CI] = 1.02-4.01, P = .043). Patient age (OR = 0.97, 95% CI = 0.90-1.05, P = .46), body mass index (OR = 1.01, 95% CI = 0.86-1.18, P = .90), Charlson Comorbidity Index (OR = 1.29, 95% CI = 0.68-2.44, P = .44), mean procedure time (OR = 1.05, 95% CI = 0.97-1.14, P = .27), and mean time in the postanesthetic care unit (OR = 1.00, 95% CI = 0.99-1.00, P = .94) were not predictive of increased LOS. Conclusion: The TUG test was predictive of hospital LOS after THA. It is a simple functional test that can be used to assist with discharge planning preoperatively to minimize extended hospital stays. abstract_id: PUBMED:33863323 Predicting short stay total hip arthroplasty by use of the timed up and go-test. Background: One of the most important steps before implementing short stay total hip arthroplasty (THA) is establishing patient criteria. Most existing criteria are mainly based on medical condition, but as physical functioning is associated with outcome after THA, we aim to evaluate the added value of a measure of physical functioning to predict short-stay THA. Methods: We used retrospective data of 1559 patients who underwent an anterior THA procedure. Logistic regression analyses were performed to study the predictive value of preoperative variables among which preoperative physical functioning by use of the Timed Up and Go test (TUG) for short stay THA (&lt; 36 h). 
The receiver operating characteristic (ROC) curve and Youden Index were used to define a cutoff point for TUG associated with short stay THA. Results: TUG was significantly associated with LOS (OR 0.84, 95%CI 0.82-0.87) as analyzed by univariate regression analysis. In multivariate regression, a model with the TUG had a better performance with an AUC of 0.77 (95%CI 0.74-0.79) and a R2 of 0.27 compared to the basic model (AUC 0.75, 95%CI 0.73-0.77, R2 0.24). Patients with a preoperative TUG less than 9.7 s had an OR of 4.01 (95%CI 3.19-5.05) of being discharged within 36 h. Conclusions: Performance based physical functioning, measured by the TUG, is associated with short stay THA. This knowledge will help in the decision-making process for the planning and expectations in short stay THA protocols with the advantage that the TUG is a simple and fast instrument to be carried out. abstract_id: PUBMED:28940554 Go/no-go procedure with compound stimuli with children with autism. The go/no-go with compound stimuli is an alternative to matching-to-sample to produce conditional and emergent relations in adults. The aim of this study was to evaluate the effectiveness of this procedure with two children diagnosed with autism. We trained and tested participants to respond to conditional relations among arbitrary stimuli using the go/no-go procedure. Both learned all the trained conditional relations without developing response bias or responding to no-go trials. Participants demonstrated performance consistent with symmetry, but not equivalence. abstract_id: PUBMED:34514183 Factors Affecting the Length of Convalescent Hospital Stay Following Total Hip and Knee Arthroplasty. Objectives: : An important role of convalescent rehabilitation wards is the short-term improvement of mobility and activities of daily living (ADL). We aimed to identify predictors associated with the length of stay (LOS) in a convalescent hospital after total hip and knee arthroplasty. Methods: : This study included 308 patients hospitalized in a convalescent ward following total hip or total knee arthroplasty. The following factors were examined: age, sex, orthopedic comorbidities, motor component of the functional independence measure (M-FIM), M-FIM gain, pain, 10-m walk test, timed up and go (TUG) test, functional ambulation category (FAC), cognitive function, and nutritional status. LOS was categorized as shorter (≤40 days) or longer (&gt;40 days), based on the national average LOS in a convalescent ward, and was statistically analyzed with predictor variables. Results: : In our hospital, the average LOS was 36.9 ± 21.4 days, and the average M-FIM at admission to the convalescent ward and the M-FIM gain were 71.1 ± 7.0 and 16.3 ± 6.9, respectively. In univariate analysis, there was a significant correlation between LOS and M-FIM at admission and M-FIM gain, pain, TUG time, and FAC. Logistic multivariate analysis identified M-FIM at admission (odds ratio [OR] 0.93, 95% confidence interval [CI] 0.88-0.98) and TUG time (OR 1.10, 95% CI 1.03-1.18) as independent predictors of LOS. Conclusions: : The M-FIM and TUG test can be used to accurately estimate LOS and to plan rehabilitation treatment in a convalescent rehabilitation ward after lower-limb arthroplasty. Furthermore, preoperative or early postoperative intervention may lead to better quality motor activity and shorter LOS during the convalescent period. abstract_id: PUBMED:28357193 What Is Trained During Food Go/No-Go Training? 
A Review Focusing on Mechanisms and a Research Agenda. Purpose Of Review: During food go/no-go training, people consistently withhold responses toward no-go food items. We discuss how food go/no-go training may change people's behavior toward no-go food items by comparing three accounts: (a) the training strengthens 'top-down' inhibitory control over food-related responses, (b) the training creates automatic 'bottom-up' associations between no-go food items and stopping responses, and (c) the training leads to devaluation of no-go food items. Recent Findings: Go/no-go training can reduce intake of food and choices for food and facilitate short-term weight loss. It appears unlikely that food go/no-go training strengthens top-down inhibitory control. There is some evidence suggesting the training could create automatic stop associations. There is strong evidence suggesting go/no-go training reduces evaluations of no-go food items. Summary: Food go/no-go training can change behavior toward food and evaluation of food items. To advance knowledge, more research is needed on the underlying mechanisms of the training, the role of attention during go/no-go training, and on when effects generalize to untrained food items. Answer: The question "Should I stay or should I go?" can be metaphorically related to the decision-making processes studied in Go/No-Go tasks, which are cognitive tasks used to assess impulse control and response inhibition. In these tasks, participants must decide whether to respond (Go) or not respond (No-Go) based on specific stimuli. In the context of Parkinson's disease (PD), dopaminergic therapy has been shown to increase Go trial Timeouts in the Go/No-Go task, suggesting that patients on medication may adopt a more conservative response set, which decreases motor impulsivity (PUBMED:29354045). This contrasts with the known enhancement of cognitive impulsivity by dopaminergic therapy in some patients, leading to impulse control disorders. Attentional priming in Go/No-Go tasks has been found to have a significant influence on visual search, with priming effects accounting for effects previously attributed to top-down guidance in standard theories of visual attention (PUBMED:37689007). The Go/No-Go tasks have also been used to model individual differences in impulse control, with a diffusion model being applied to both experimental and simulated data, showing that it can be fit to individual subject data, which is important for clinical applications (PUBMED:29404378). In aging populations, auditory Go/No-Go tasks have revealed age-related changes in neural responses, with older adults exhibiting reduced cortical activation compared to younger adults, supporting the hypothesis that inhibition is reduced during aging (PUBMED:32992713). In the clinical domain, the Go/No-Go task has been used to study the neural circuitry underlying response control, with research showing differential activation in brain regions during response inhibition (PUBMED:26955650). In summary, whether one should "stay" (inhibit a response) or "go" (respond) in a Go/No-Go task depends on a complex interplay of cognitive processes, including impulse control, attentional priming, and the effects of aging or medication on neural and behavioral responses. These tasks provide valuable insights into the mechanisms of decision-making and response inhibition in both healthy individuals and clinical populations.
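The diffusion-model account summarized above (PUBMED:29404378) can be illustrated with a toy simulation: evidence accumulates noisily toward either an upper boundary, which produces an overt go response and a reaction time, or a lower, implicit boundary, on which the subject simply waits out the trial. The parameter values below are invented for illustration and this is not the fitting procedure used in the cited study.

import random

def simulate_trial(drift, boundary=1.0, start=0.5, noise=1.0,
                   dt=0.001, non_decision=0.3, max_time=2.0):
    """Simulate one diffusion trial with an explicit upper ('go') boundary
    and an implicit lower boundary that yields no overt response."""
    x, t = start, 0.0
    while t < max_time:
        x += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        t += dt
        if x >= boundary:
            return "go", t + non_decision   # overt response with a reaction time
        if x <= 0.0:
            return "no-go", None            # implicit boundary: wait out the trial
    return "timeout", None

random.seed(0)
go_trials = [simulate_trial(drift=1.5) for _ in range(200)]     # go stimuli
nogo_trials = [simulate_trial(drift=-1.5) for _ in range(200)]  # no-go stimuli
go_rts = [rt for resp, rt in go_trials if resp == "go"]
print("mean go RT:", sum(go_rts) / len(go_rts))
print("false alarms on no-go trials:", sum(resp == "go" for resp, _ in nogo_trials))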
Instruction: Can the prognosis of individual patients with glioblastoma be predicted using an online calculator? Abstracts: abstract_id: PUBMED:23543729 Can the prognosis of individual patients with glioblastoma be predicted using an online calculator? Background: In an exploratory subanalysis of the European Organisation for Research and Treatment of Cancer and National Cancer Institute of Canada (EORTC/NCIC) trial data, Gorlia et al. identified a variety of factors that were predictive of overall survival, including therapy administered, age, extent of surgery, mini-mental score, administration of corticosteroids, World Health Organization (WHO) performance status, and O-methylguanine-DNA methyltransferase (MGMT) promoter methylation status. Gorlia et al. developed 3 nomograms, each intended to predict the survival times of patients with newly diagnosed glioblastoma on the basis of individual-specific combinations of prognostic factors. These are available online as a "GBM Calculator" and are intended for use in patient counseling. This study is an external validation of this calculator. Method: One hundred eighty-seven patients from 2 UK neurosurgical units who had histologically confirmed glioblastoma (WHO grade IV) had their information at diagnosis entered into the GBM calculator. A record was made of the actual and predicted median survival time for each patient. Statistical analysis was performed to assess the accuracy, precision, correlation, and discrimination of the calculator. Results: The calculator gives both inaccurate and imprecise predictions. Only 23% of predictions were within 25% of the actual survival, and the percentage bias is 140% in our series. The coefficient of variance is 76%, where a smaller percentage would indicate greater precision. There is only a weak positive correlation between the predicted and actual survival among patients (R(2) of 0.07). Discrimination is inadequate as measured by a C-index of 0.62. Conclusions: The authors would not recommend the use of this tool in patient counseling. If departments were considering its use, we would advise that a similar validating exercise be undertaken. abstract_id: PUBMED:36686121 Estimation of Survival in Patients with Glioblastoma Using an Online Calculator at a Tertiary-Level Hospital in Mexico. Background The mean survival duration of patients with glioblastoma after diagnosis is 15 months (14-21 months), while progression-free survival is 10 months (+/- one month). Although there are well-defined overall survival statistics for glioblastoma, individual survival prediction remains a challenge. Therefore, there is a need to validate an accessible and cost-effective prognostic tool to provide valuable data for decision-making. This study aims to calculate the mean survival of patients with glioblastoma at a tertiary-level hospital in Mexico using the online glioblastoma survival calculator developed by researchers at Harvard Medical School &amp; Brigham and Women's Hospital and compare it with the actual mean survival. Methodology We conducted a retrospective observational study of patients who received a histopathological diagnosis of glioblastoma from the National Institute of Neurology and Neurosurgery "Manuel Velasco Suárez" between 2015 and 2021. We included 50 patients aged 20-83 years, with a tumor size of 15-79 mm, and who had died 30 days after surgery. Patient survival was estimated using the online calculator developed at Harvard Medical School &amp; Brigham and Women's Hospital. 
The estimated mean survival was then compared with the actual mean survival of the patient. A two-tailed equivalence test for paired samples was performed to conduct this comparison. A value of p < 0.05 was considered significant. Results The mean age of the sample was 55.5 years (confidence interval (CI) 95%, 52.61-58.71). The mean tumor size in our sample was 49.12 mm (±14.9 mm). We identified a difference between the mean estimated survival and the mean actual survival of -1.37 months (CI 95%; range of -3.7 to +0.9). After setting the inferior (IL) and superior limits (SL) at -3.8 and +3.8 months, respectively, we found that the difference between the mean estimated survival and the actual mean survival is within the equivalence interval (IL: p = 0.0453; SL: p = 0.0002). Conclusions The actual survival of patients diagnosed with glioblastoma at the National Institute of Neurology and Neurosurgery was equivalent to the estimated survival calculated by the online prediction calculator developed at Harvard Medical School & Brigham and Women's Hospital. This study validates a practical, cost-effective, and accessible tool for predicting patient survival, contributing to significant support for medical and personal decision-making for glioblastoma management. abstract_id: PUBMED:24111707 The validity of EORTC GBM prognostic calculator on survival of GBM patients in the West of Scotland. Objective: It is now accepted that the addition of temozolomide to radiotherapy in the treatment of patients with newly diagnosed glioblastoma multiforme (GBM) significantly improves survival. In 2008, a subanalysis of the original study data was performed, and an online "GBM Calculator" was made available on the European Organisation for Research and Treatment of Cancer (EORTC) website allowing users to estimate patients' survival outcomes. We tested this calculator against actual local survival data to validate its use in our patients. Materials And Methods: Prospectively collected clinical data were analysed on 105 consecutive patients receiving concurrent chemoradiotherapy following surgical treatment of GBM between December 2004 and February 2009. Using the EORTC online calculator, survival outcomes were generated for these patients and compared with their actual survival. Results: The median overall survival for the entire cohort was 15.3 months (range 2.8-50.5 months), with 1-year and 2-year overall survival of 65.7% and 19%, respectively. This is in comparison to the median overall predictive survival of 21.3 months, with 1-year and 2-year survival of 95% and 39.5%, respectively. Case by case analysis also showed that the survival was overestimated in nearly 80% of patients. Subgroup analyses showed similar overestimation of patients' survival, except calculator Model 3 which utilised MGMT status. Conclusion: Use of the EORTC GBM prognostic calculator would have overestimated the survival of the majority of our patients with GBM. Uncertainty exists as to the cause of overestimation in the cohort although local socioeconomic factors might play a role. The different calculator models yielded different outcomes and the "best" predictor of survival for the cohort under study utilised the tumour MGMT status. We would strongly encourage similar local studies of validity testing prior to employing the online prognostic calculator for other population groups.
abstract_id: PUBMED:31586211 An Online Calculator for the Prediction of Survival in Glioblastoma Patients Using Classical Statistics and Machine Learning. Background: Although survival statistics in patients with glioblastoma multiforme (GBM) are well-defined at the group level, predicting individual patient survival remains challenging because of significant variation within strata. Objective: To compare statistical and machine learning algorithms in their ability to predict survival in GBM patients and deploy the best performing model as an online survival calculator. Methods: Patients undergoing an operation for a histopathologically confirmed GBM were extracted from the Surveillance Epidemiology and End Results (SEER) database (2005-2015) and split into a training and hold-out test set in an 80/20 ratio. Fifteen statistical and machine learning algorithms were trained based on 13 demographic, socioeconomic, clinical, and radiographic features to predict overall survival, 1-yr survival status, and compute personalized survival curves. Results: In total, 20,821 patients met our inclusion criteria. The accelerated failure time model demonstrated superior performance in terms of discrimination (concordance index = 0.70), calibration, interpretability, predictive applicability, and computational efficiency compared to Cox proportional hazards regression and other machine learning algorithms. This model was deployed through a free, publicly available software interface (https://cnoc-bwh.shinyapps.io/gbmsurvivalpredictor/). Conclusion: The development and deployment of survival prediction tools require a multimodal assessment rather than a single metric comparison. This study provides a framework for the development of prediction tools in cancer patients, as well as an online survival calculator for patients with GBM. Future efforts should improve the interpretability, predictive applicability, and computational efficiency of existing machine learning algorithms, increase the granularity of population-based registries, and externally validate the proposed prediction tool. abstract_id: PUBMED:33769170 Glioblastoma: assessment of the readability and reliability of online information. Introduction: Glioblastoma Multiforme (GBM) represents one of the most common and most aggressive forms of brain tumours with a poor prognosis. There is often uncertainty around diagnosis and prognosis amongst patients diagnosed with cancer. Most patients rely on the internet to access health-related information. The aim of this study was to assess the readability and reliability of online information on GBM. Methods: The terms 'Glioblastoma' and 'GBM' were used to search Google and the first 50 websites identified were screened. The quality of each website was assessed using the DISCERN instrument, the Journal of the American Medical Association (JAMA) benchmark criteria and the Health on the Net Foundation code certification (HON-code). The readability was assessed using the Flesch Reading Ease Score (FRE), the Flesch-Kincaid grade level (FKGL) and the Gunning Fog Index (GFI). The relevant patient information provided by 4 international patient information websites was also assessed. Results: Following screening, 31 websites met the inclusion criteria with only four websites displaying the HON-code (12.9%). The median DISCERN score was 43 (range: 17-70) corresponding to 'fair' quality, and the median JAMA benchmark criteria score was 1.
Display of the HON-code certificate or the publication date was associated with higher quality websites. The median FRE score (34.4) corresponded to 'difficult to read'. The median GFI score (15.9) and FKGL score (13.3) corresponded to a 'college' level of education reading ability. The Cancer Australia online information was the most readable website while Cancer Research UK had the highest quality information. Conclusion: The readability and reliability of online information relating to GBM are inadequate. Health professionals need to provide or guide patients to information that is both readable and reliable. abstract_id: PUBMED:16454331 Radiobiological approaches to the individual prognosis of radiotherapy efficacy in tumour diseases The results of several years of investigations into developing a method for the individual prognosis of tumour sensitivity to radiotherapy are presented. The initial level of proliferative activity of different tumour types in individual patients--carcinoma of the oropharyngeal zone, stomach, oesophagus, rectum, and glioblastoma--has been studied. It was shown that for several tumours a high initial level of proliferative activity is an indication of good prognosis. For all tumours studied, a significant decrease in proliferative activity at the beginning of radiation treatment is a good prognostic factor for tumour regression (a decrease in volume of 70-100%) or strong damage to tumour tissue (grade III-IV pathomorphosis). The literature data of recent years are discussed, and it is proposed that multiparameter analysis is needed for the determination of prognostic factors. abstract_id: PUBMED:33344248 A Nomogram Predicts Individual Prognosis in Patients With Newly Diagnosed Glioblastoma by Integrating the Extent of Resection of Non-Enhancing Tumors. Background: The extent of resection of non-contrast enhancing tumors (EOR-NCEs) has been shown to be associated with prognosis in patients with newly diagnosed glioblastoma (nGBM). This study aimed to develop and independently validate a nomogram integrated with EOR-NCE to assess individual prognosis. Methods: Data for this nomogram were based on 301 patients hospitalized for nGBM from October 2011 to April 2019 at the Beijing Tiantan Hospital, Capital Medical University. These patients were randomly divided into derivation (n=181) and validation (n=120) cohorts at a ratio of 6:4. To evaluate predictive accuracy, discriminative ability, and clinical net benefit, concordance index (C-index), receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA) were calculated for the extent of resection of contrast enhancing tumor (EOR-CE) and EOR-NCE nomograms. Comparison between these two models was performed as well. Results: The Cox proportional hazards model was used to establish nomograms for this study. Older age at diagnosis, Karnofsky performance status (KPS) < 70, unmethylated O6-methylguanine-DNA methyltransferase (MGMT) status, wild-type isocitrate dehydrogenase enzyme (IDH), and lower EOR-CE and EOR-NCE were independent factors associated with shorter survival. The EOR-NCE nomogram had a higher C-index than the EOR-CE nomogram. Its calibration curve for the probability of survival exhibited good agreement between the predicted and actual probabilities. The EOR-NCE nomogram showed superior net benefits and improved performance over the EOR-CE nomogram with respect to DCA and ROC for survival probability.
These results were also confirmed in the validation cohort. Conclusions: An EOR-NCE nomogram assessing individualized survival probabilities (12-, 18-, and 24-month) for patients with nGBM could be useful in providing patients and their relatives with health care consultations on optimizing therapeutic approaches and prognosis. abstract_id: PUBMED:35538763 Expression of Glutathione Peroxidases and Its Effect on Clinical Prognosis in Glioma Patients Objective To investigate the relationship between the expression of glutathione peroxidase (GPX) genes and the clinical prognosis in glioma patients, and to construct and evaluate a model for predicting the prognosis of glioma. Methods The clinical information and GPX expression of 663 patients, including 153 patients with glioblastoma (GBM) and 510 patients with low-grade glioma (LGG), were obtained from The Cancer Genome Atlas (TCGA) database. The relationship between GPX expression and patient survival was analyzed. The key GPX affecting the prognosis of glioma was screened out by single- and multi-factor Cox's proportional-hazards regression models and validated by least absolute shrinkage and selection operator (Lasso) regression. Finally, we constructed the model for predicting the prognosis of glioma with the screening results and then used the concordance index and calibration curve, respectively, to evaluate the discrimination and calibration of the model. Results Compared with those in the control group, the expression levels of GPX1, GPX3, GPX4, GPX7, and GPX8 were up-regulated in glioma patients (all P < 0.001). Moreover, the expression levels of other GPX except GPX3 were higher in GBM patients than in LGG patients (all P < 0.001). The Kaplan-Meier curves showed that the progression-free survival of GBM with high expression of GPX1 (P=0.013) and GPX4 (P=0.040), as well as the overall survival, disease-specific survival, and progression-free survival of LGG with high expression of GPX1, GPX7, and GPX8, was shortened (all P < 0.001). GPX7 and GPX8 were screened out as the key factors affecting the prognosis of LGG. The results were further used to construct a nomogram model, which suggested GPX7 was the most important variable. The concordance index of the model was 0.843 (95%CI=0.809-0.853), and the calibration curve showed that the predicted and actual results had good consistency. Conclusion GPX7 is an independent risk factor affecting the prognosis of LGG, and the nomogram model constructed with it can be used to predict the survival rate of LGG. abstract_id: PUBMED:37274309 High Expression of Triggering Receptor Expressed on Myeloid Cells 1 Predicts Poor Prognosis in Glioblastoma. Background: Glioblastoma (GBM) is a highly malignant tumor with poor prognosis, and new treatment strategies are urgently needed. Currently, the role of triggering receptor expressed on myeloid cells 1 (TREM-1) in tumors has been studied, but the role of TREM-1 in GBM remains unclear. Methods: Immunohistochemical staining for TREM-1 was performed in 91 patients diagnosed with GBM. Clinicopathological characteristics and survival times were recorded. TREM-1 expression and its effect on prognosis were analyzed using online Gene Expression Profiling Interactive Analysis (GEPIA), The Cancer Genome Atlas (TCGA), and Chinese Glioma Genome Atlas (CGGA) databases. The expression profile of the TCGA-GBM cohort was used to perform functional enrichment analysis. The CIBERSORT method and Tumor Immune Estimation Resource (TIMER) database were used to estimate the tumor-infiltrating immune cells (TIICs).
The ESTIMATE algorithm was used to estimate the immune-stromal scores. Finally, the relationships of TREM-1 with TIICs, immune-stromal score, and immune checkpoint genes (ICGs) were analyzed. Results: The expression of TREM-1 was upregulated in GBM, and high TREM-1 expression predicted a poor prognosis. TREM-1, surgical resection, postoperative radiotherapy, and temozolomide (TMZ) chemotherapy were associated with the survival time of patients with GBM, but only surgical resection and TREM-1 expression were independent prognostic factors. GBM with high TREM-1 expression exhibited increased neutrophil and macrophage infiltration. TREM-1 was positively associated with the immune-stromal score and multiple ICGs, most of which were involved in immunosuppressive responses. Conclusion: The present study revealed that high expression of TREM-1 in GBM is an independent poor prognosis factor and that TREM-1 is associated with the immunosuppressive microenvironment. Thus, blocking TREM-1 may be a strategy for enhancing the GBM immune response. abstract_id: PUBMED:38322416 MAPK-activated protein kinase 2 is associated with poor prognosis of glioma patients and immune inhibition in glioma. Introduction: An effective therapeutic method to noticeably improve the prognosis of glioma patients has not been developed thus far. MAPK-activated protein kinase 2 (MAPKAPK2) is a serine/threonine kinase, which is involved in tumorigenesis, tumor growth, metastasis, and the inflammatory process. The clinical significance and molecular function of MAPKAPK2 in glioma remain unclear. Methods: MAPKAPK2 expression in human glioma tissues was detected by immunohistochemistry and analyzed from the transcriptome sequencing data in TCGA and CGGA. A prognostic nomogram was constructed to predict the survival risk of individual patients. GO and KEGG enrichment analyses were performed to analyze the functions and pathways in which MAPKAPK2 is involved. Single-cell RNA sequencing data was used to analyze the cell types in which MAPKAPK2 was enriched. Flow cytometry was used for cell cycle and apoptosis detection. Cell proliferation and migration ability were analyzed by CCK8 and cell migration assays, respectively. Correlation analyses were performed to analyze the relationship of MAPKAPK2 with immune infiltration, immune regulators, chemokines, and chemokine receptors. Results: MAPKAPK2 was not only aberrantly upregulated in glioma tissues but also correlated with poor clinical characteristics. Moreover, MAPKAPK2 was prevalent in isocitrate dehydrogenase (IDH) wild-type and 1p/19q non-codeletion glioma cohorts and predicted poor prognosis of glioma patients. MAPKAPK2 may be involved in cell proliferation, cell migration, DNA damage repair, and immune regulation in glioma. MAPKAPK2 was enriched in microglia/macrophages and malignant tumor cells. Further investigation into cellular function revealed that inhibiting MAPKAPK2 suppressed the proliferation and migration of glioblastoma multiforme (GBM) cells in vitro. The inhibition of MAPKAPK2 significantly induced G1 cell cycle arrest and apoptosis of GBM cells. Consistent with the enriched function of MAPKAPK2 in immune regulation, MAPKAPK2 was correlated with immune cell infiltration in glioma tissues. Mechanistically, a series of immune regulators, immunomodulatory chemokines, and chemokine receptors were positively correlated with MAPKAPK2 expression.
Discussion: Our findings provide evidence of the clinical relevance of MAPKAPK2 in prognosis evaluation of glioma patients and highlight the underlying significance of MAPKAPK2 in glioma therapy. Answer: The prognosis of individual patients with glioblastoma can be predicted using online calculators, but the accuracy and reliability of these tools vary and should be interpreted with caution. One study conducted an external validation of the "GBM Calculator" developed from the EORTC/NCIC trial data and found it to be both inaccurate and imprecise, with only 23% of predictions within 25% of the actual survival and a coefficient of variance of 76%. The authors concluded that they would not recommend the use of this tool in patient counseling (PUBMED:23543729). In contrast, another study from a tertiary-level hospital in Mexico reported that the actual survival of patients diagnosed with glioblastoma was equivalent to the estimated survival calculated by an online prediction calculator developed at Harvard Medical School & Brigham and Women's Hospital. This study validated the calculator as a practical, cost-effective, and accessible tool for predicting patient survival (PUBMED:36686121). However, another study from the West of Scotland found that the EORTC GBM prognostic calculator overestimated survival in nearly 80% of patients. The authors suggested that local socioeconomic factors might play a role in this discrepancy and recommended local studies of validity testing before employing the online prognostic calculator for other population groups (PUBMED:24111707). A study that compared statistical and machine learning algorithms in predicting survival in GBM patients developed an online survival calculator using the accelerated failure time model, which demonstrated superior performance. This model was made available through a free, publicly accessible software interface (PUBMED:31586211). In summary, while online calculators can provide estimates for the prognosis of individual patients with glioblastoma, their predictions can vary significantly. Some calculators may offer useful predictions in certain contexts, as shown by the study in Mexico, while others may not be as reliable, as indicated by the study in the UK. It is essential to validate these tools in the specific patient population and healthcare setting in which they will be used, and to consider them as one of many factors in the decision-making process for glioblastoma management.
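Editorial note: the equivalence finding cited above (PUBMED:36686121) rests on a paired two one-sided tests (TOST) procedure with equivalence limits of ±3.8 months. The sketch below is a minimal illustration of how such a paired TOST might be computed; it is not the authors' code, and the survival values and variable names are hypothetical, with only the ±3.8-month limits taken from the abstract.

# Minimal sketch of a paired TOST (two one-sided tests) equivalence check,
# comparing predicted vs. actual survival in months (hypothetical data).
import numpy as np
from scipy import stats

predicted = np.array([14.2, 16.5, 12.0, 18.3, 15.1])   # hypothetical months
actual = np.array([15.0, 18.9, 12.8, 17.5, 17.2])       # hypothetical months
diff = predicted - actual                                # paired differences
low, high = -3.8, 3.8                                    # equivalence limits from the abstract

# Equivalence is claimed only if BOTH one-sided tests reject:
#   H0a: mean(diff) <= low   vs  H1a: mean(diff) > low
#   H0b: mean(diff) >= high  vs  H1b: mean(diff) < high
p_lower = stats.ttest_1samp(diff, low, alternative="greater").pvalue
p_upper = stats.ttest_1samp(diff, high, alternative="less").pvalue
print(f"mean difference = {diff.mean():.2f} months")
print(f"TOST p-values: lower={p_lower:.4f}, upper={p_upper:.4f}")
print("equivalent within +/-3.8 months" if max(p_lower, p_upper) < 0.05 else "equivalence not shown")

Under this logic, a mean difference of -1.37 months with both one-sided p-values below 0.05, as reported in the abstract, supports equivalence of predicted and observed survival.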
Instruction: Are effects from a brief multiple behavior intervention for college students sustained over time? Abstracts: abstract_id: PUBMED:20026170 Are effects from a brief multiple behavior intervention for college students sustained over time? Objective: This study examined whether 3-month outcomes of a brief image-based multiple behavior intervention on health habits and health-related quality of life of college students were sustained at 12-month follow-up without further intervention. Methods: A randomized control trial was conducted with 303 undergraduates attending a public university in the southeastern US. Participants were randomized to receive either a brief intervention or usual care control, with baseline, 3-month, and 12-month data collected during fall of 2007. Results: A significant omnibus MANOVA interaction effect was found for health-related quality of life, p=0.01, with univariate interaction effects showing fewer days of poor spiritual health, social health, and restricted recent activity, p's < 0.05, for those receiving the brief intervention. Significant group by time interaction effects were found for driving after drinking, p=0.04, and moderate exercise, p=0.04, in favor of the brief intervention. Effect sizes typically increased over time and were small except for moderate size effects for social health-related quality of life. Conclusion: This study found that 3-month outcomes from a brief image-based multiple behavior intervention for college students were partially sustained at 12-month follow-up. abstract_id: PUBMED:25585901 A randomized trial of a brief intervention for obesity in college students. What Is Already Known About This Subject: • Brief motivational interventions have been found to be efficacious for obesity in older adult populations. • Brief motivational interventions including delivery of personalized feedback have been found to be efficacious for reducing college student drinking. What This Study Adds: • First study to test the efficacy of a one-session, brief motivational intervention for obesity among college students. • One session brief motivational interventions may have an impact on the reduction of calorie-dense foods and beverages. • A brief, one-session motivational interview with personalized feedback may not be an intensive enough intervention for obesity treatment among college students. Summary: Young adults are at an increased risk for weight gain as they begin college and this has implications for the onset of future health consequences. Brief motivational interventions (BMIs) have been found to be effective with college students for reducing risky health behaviours such as alcohol consumption, but have not been developed and tested with a primary goal of reducing obesity. BMIs have been developed and tested for the treatment of obesity and weight-related health behaviours (WRHB) in other populations, such as adults and adolescents, with promising results. The purpose of the following study was to develop and test the efficacy of a BMI for weight loss among overweight and obese college students. Seventy undergraduate students (85.7% female, 57.1% African-American) completed an assessment about WRHBs and then were randomized to either receive a single 60-min BMI plus a booster phone call or to assessment only. At 3 months post-intervention, effect sizes within the intervention group were twice as large as within the assessment-only group on reductions in high-calorie foods and beverages.
However, there were no statistically significant differences between groups on body mass index or WRHBs. The one-session nature of the intervention might not have been enough to produce significant change in weight. abstract_id: PUBMED:35348433 Pilot of a telehealth brief alcohol intervention for college students at a Hispanic Serving Institution. Objective While college can be a period of exploration, it is also marked by risky alcohol use. Brief alcohol screening and intervention for college students (BASICS) has yet to be used in the telehealth platform among minority students. This study assesses the short-term outcomes of a pilot telehealth brief alcohol abuse intervention for students attending a Hispanic Serving Institution (HSI). Participants: One hundred and fifty-two students attending a large public university participated. Methods Students participated in a BASICS-adapted telehealth brief intervention with a certified alcohol counselor. Baseline and 30-day follow-up surveys were completed electronically. Results There were significant changes in drinking behaviors at 30-day follow-up after participating in the telehealth pilot among high-risk drinkers. Conclusion Telehealth interventions are accessible and convenient for students at an HSI, and brief alcohol interventions adapted from BASICS utilizing telehealth can significantly impact alcohol use behaviors. abstract_id: PUBMED:18800217 Efficacy of a brief image-based multiple-behavior intervention for college students. Background: Epidemiologic data indicate most adolescents and adults experience multiple, simultaneous risk behaviors. Purpose: The purpose of this study is to examine the efficacy of a brief image-based multiple-behavior intervention (MBI) for college students. Methods: A total of 303 college students were randomly assigned to: (1) a brief MBI or (2) a standard care control, with a 3-month postintervention follow-up. Results: Omnibus treatment by time multivariate analysis of variance interactions were significant for three of six behavior groupings, with improvements for college students receiving the brief MBI on alcohol consumption behaviors, F(6, 261) = 2.73, p = 0.01, marijuana-use behaviors, F(4, 278) = 3.18, p = 0.01, and health-related quality of life, F(5, 277) = 2.80, p = 0.02, but not cigarette use, exercise, and nutrition behaviors. Participants receiving the brief MBI also got more sleep, F(1, 281) = 9.49, p = 0.00, than those in the standard care control. Conclusions: A brief image-based multiple-behavior intervention may be useful in influencing a number of critical health habits and health-related quality-of-life indicators of college students. abstract_id: PUBMED:27070727 Does a Brief Motivational Intervention Reduce Frequency of Pregaming in Mandated Students? Background: Pregaming, also known as frontloading or predrinking, is a common but risky drinking behavior among college students. However, little is known about the way in which a brief motivational intervention (BMI) addressing general alcohol use and consequences may impact pregaming frequency. Objectives: This study examined whether mandated students reduced frequency of pregaming following a BMI when pregaming was spontaneously discussed and whether gender moderated these effects. Methods: Participants (n = 269, 32% female) were mandated college students who had received a campus-based alcohol citation and continued to exhibit risky alcohol use six weeks after receiving a brief advice session.
Participants were randomized to a brief motivational intervention (BMI, n = 145) or assessment only (AO, n = 124) and completed follow-up assessments at 3, 6, and 9 months postintervention. Hierarchical Linear Modeling (HLM) was used to examine both between-person (Level 2) effects (i.e., condition) and within-person (Level 1) effects (i.e., time) on pregaming frequency. Analyses examining discussions of pregaming within the BMI were conducted using a subsample of the BMI sessions which had been transcribed (n = 121). Results: Participants in the BMI group did not significantly reduce the frequency of pregaming compared to those in the AO group, even when pregaming was explicitly discussed during the BMI. Moreover, the BMI was equally ineffective at reducing pregaming frequency for both males and females. Conclusion/Importance: Pregaming frequency appears to be resistant to conventional intervention efforts, but recent research suggests several innovative strategies for addressing pregaming in the college student population. abstract_id: PUBMED:34958980 Randomized clinical trial of a brief, scalable intervention for mental health sequelae in college students during the COVID-19 pandemic. This randomized clinical trial aimed to determine feasibility, acceptability, and initial efficacy of brief Dialectical Behavior Therapy (DBT) skills videos in reducing psychological distress among college students during the COVID-19 pandemic. Over six weeks, 153 undergraduates at a large, public American university completed pre-assessment, intervention, and post-assessment periods. During the intervention, participants were randomized to receive animated DBT skills videos for 14 successive days (n = 99) or continue assessment (n = 54). All participants received 4x daily ecological momentary assessments on affect, self-efficacy of managing emotions, and unbearableness of emotions. The study was feasible and the intervention was acceptable, as demonstrated by moderate to high compliance rates and video ratings. There were significant pre-post video reductions in negative affect and increases in positive affect. There was a significant time × condition interaction on unbearableness of emotions; control participants rated their emotions as more unbearable in the last four vs. first two weeks, whereas the intervention participants did not rate their emotions as any more unbearable. Main effects of condition on negative affect and self-efficacy were not significant. DBT skills videos may help college students avoid worsening mental health. This brief, highly scalable intervention could extend the reach of mental health treatment. abstract_id: PUBMED:35242504 Changes in college students' health behaviors and substance use after a brief wellness intervention during COVID-19. College students exhibit low levels of physical activity, high levels of sedentary behavior, poor dietary behaviors, sleep problems, high stress, and increased substance use. On-campus resources offering programs to improve college students' health have been limited during the pandemic. The purpose of this study was to test a brief intervention to improve multiple health behaviors among United States college students. The intervention was a single arm repeated measures study conducted over 12 weeks, utilizing the Behavior Image Model. The intervention involved three components: a survey, a 25-minute wellness specialist consult with a peer health coach, and a 15-minute goal planning session. 
Follow-up measures were completed at 2, 6, and 12 weeks post session to assess changes in wellness behaviors. Linear mixed effects models for repeated measures were used to analyze the association between intervention implementation and within-subject changes in physical activity, sedentary behavior, diet, general health, emotional wellness, and substance use. A total of 121 participants enrolled in the study and 90 (74.4%) completed the health coach session (71% female). At first follow-up, statistically significant increases were observed in vigorous physical activity days/week (coef. = 0.5, 95%CI: 0.2, 0.9), moderate physical activity days/week (coef. = 0.7, 95%CI: 0.2, 1.1), general health (coef. = 4.8, 95%CI: 2.1, 7.5), and emotional wellness (coef. = 8.6, 95%CI: 5.8, 11.3). Statistically significant decreases in cannabis use (coef. = -2.3, 95%CI: -4.1, -0.5) and alcohol consumption (coef. = -2.5, 95%CI: -3.7, -1.3) were observed. Many of these changes were sustained at second and third follow-up. This brief wellness intervention shows promise to positively influence multiple health behaviors in college students. abstract_id: PUBMED:36532967 Does abstaining from alcohol in high school moderate intervention effects for college students? Implications for tiered intervention strategies. Brief motivational intervention (BMI) and personalized feedback intervention (PFI) are individual-focused brief alcohol intervention approaches that have been proven efficacious for reducing alcohol use among college students and young adults. Although the efficacy of these two intervention approaches has been well established, little is known about the factors that may modify their effects on alcohol outcomes. In particular, high school drinking may be a risk factor for continued and heightened use of alcohol in college, and thus may influence the outcomes of BMI and PFI. The purpose of this study was to investigate whether high school drinking was associated with different intervention outcomes among students who received PFI compared to those who received BMI. We conducted moderation analyses examining 348 mandated students (60.1% male; 73.3% White; and 61.5% first-year students) who were randomly assigned to either a BMI or a PFI and whose alcohol consumption was assessed at 4-month and 15-month follow-ups. Results from marginalized zero-inflated Poisson models showed that high school drinking moderated the effects of PFI and BMI at the 4-month follow-up but not at the 15-month follow-up. Specifically, students who reported no drinking in their senior year of high school consumed a 49% higher mean number of drinks after receiving BMI than PFI at the 4-month follow-up. The results suggest that alcohol consumption in high school may be informative when screening and allocating students to appropriate alcohol interventions to meet their different needs. abstract_id: PUBMED:25073447 Response of heavy-drinking voluntary and mandated college students to a peer-led brief motivational intervention addressing alcohol use. Little is known about the way in which mandated and heavy-drinking voluntary students comparatively respond to peer-led brief motivational interventions (BMIs) and the mediators and moderators of intervention effects. Research suggests that mandated students may be more defensive due to their involvement in treatment against their will and this defensiveness, in turn, may relate to treatment outcome.
Furthermore, it is not clear how mandated and heavy-drinking voluntary students perceived satisfaction with peer-led BMIs relates to treatment outcomes. Using data from two separate randomized controlled trials, heavy drinking college students (heavy-drinking voluntary, n = 156; mandated, n = 82) completed a peer-led brief motivational intervention (BMI). Both mandated and heavy-drinking volunteer students significantly reduced drinking behaviors at 3-month follow-up, reported high levels of post-intervention session satisfaction, yet no effects for mediation or moderation were found. Findings offer continued support for using peer counselors to deliver BMIs; however, results regarding the mechanisms of change were in contrast to previous findings. Implications for treatment and future areas of research are discussed. abstract_id: PUBMED:30604290 Comparative Effectiveness of Brief Alcohol Interventions for College Students: Results from a Network Meta-Analysis. Late adolescence is a time of increased drinking, and alcohol plays a predominant role in college social experiences. Colleges seeking to prevent students' hazardous drinking may elect to implement brief alcohol interventions (BAIs). However, numerous manualized BAIs exist, so an important question remains regarding the comparative effectiveness of these different types of BAIs for college students. This study uses network meta-analyses (NMA) to compare seven manualized BAIs for reducing problematic alcohol use among college students. We systematically searched multiple sources for literature, and we screened studies and extracted data in duplicate. For the quantitative synthesis, we employed a random-effects frequentist NMA to determine the effectiveness of different BAIs compared to controls and estimated the relative effectiveness ranking of each BAI. A systematic literature search resulted in 52 included studies: On average, 58% of participants were male, 75% were binge drinkers, and 20% were fraternity/sorority-affiliated students. Consistency models demonstrated that BASICS was consistently effective in reducing students' problematic alcohol use (ES range: g = - 0.23, 95%CI [- 0.36, - 0.16] to g = - 0.36, 95% CI [- 0.55, - 0.18]), but AlcoholEDU (g = - 0.13, 95%CI [- 0.22, - 0.04]), e-CHUG (g = - 0.35, 95%CI [- 0.45, - 0.05]), and THRIVE (g = - 0.47, 95%CI [- 0.60, - 0.33]) were also effective for some outcomes. Intervention rankings indicated that BASICS, THRIVE, and AlcoholEDU hold the most promise for future trials. Several BAIs appear effective for college students. BASICS was the most effective but is resource intensive and may be better suited for higher risk students; THRIVE and e-CHUG are less resource intensive and show promise for universal prevention efforts. Answer: The study examined in abstract PUBMED:20026170 found that the 3-month outcomes from a brief image-based multiple behavior intervention for college students were partially sustained at 12-month follow-up without further intervention. Specifically, there were sustained improvements in health-related quality of life, with fewer days of poor spiritual health, social health, and restricted recent activity. Additionally, there were significant group by time interaction effects for driving after drinking and moderate exercise in favor of the brief intervention, with effect sizes typically increasing over time. The effect sizes were small except for moderate size effects for social health-related quality of life. 
This suggests that the effects from a brief multiple behavior intervention can be sustained over time, at least for certain health habits and aspects of health-related quality of life.
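Editorial note: the sustained-effect conclusions above hinge on group-by-time interaction terms of the kind reported in PUBMED:20026170 and on the repeated-measures linear mixed models described in PUBMED:35242504. The sketch below shows one generic way such an interaction can be fit with a random intercept per participant; the column names, simulated data, and effect sizes are assumptions for illustration only, not the models actually used in those studies.

# Minimal sketch of a group x time linear mixed-effects model for repeated
# measures (random intercept per participant). All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pid in range(60):                      # 60 hypothetical participants
    group = pid % 2                        # 0 = control, 1 = brief intervention
    baseline = rng.normal(5.0, 1.0)        # e.g., days of poor health at baseline
    for month in (0, 3, 12):               # baseline, 3-month, 12-month follow-up
        outcome = baseline - 0.05 * month * group + rng.normal(0.0, 0.5)
        rows.append({"id": pid, "group": group, "month": month, "outcome": outcome})
df = pd.DataFrame(rows)

model = smf.mixedlm("outcome ~ group * month", df, groups=df["id"])
result = model.fit()
print(result.summary())                    # the group:month term carries the sustained-effect test

A group-by-time (here group:month) coefficient that remains significant at the 12-month assessment is the statistical counterpart of the "partially sustained" finding summarized in the answer above.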
Instruction: Does tissue ischemia actually contribute to leak after sleeve gastrectomy? Abstracts: abstract_id: PUBMED:24374891 Does tissue ischemia actually contribute to leak after sleeve gastrectomy? An experimental study. Background: Staple line leak, although rare, is among the most common postoperative complications after sleeve gastrectomy (SG) and usually occurs in the gastroesophageal (GE) junction. Increased intragastric pressure, regional ischemia, and technical failure of stapling devices have been reported as the main risk factors of postoperative leak. The aim of this study was to evaluate the impact of ischemia and intraluminal pressure in leak appearance. Methods: Landrace swine (n = 12) were subjected to SG and subsequently total gastrectomy. Lactic acid, glycerol, and pyruvate were measured by microdialysis in the GE junction and pylorus before and nine times after the operation, and the lactate/pyruvate (L/P) ratio was calculated as well. Moreover, ex vivo air was insufflated inside the tubularized stomach until a rupture of the staple line occurred. Maximum air pressure reached and location of rupture were recorded. Results: Increases in lactic acid and the L/P ratio were demonstrated in GE junction measurements; however, when the measurements between the GE junction and pylorus were compared, no statistically significant differences were found, with the exception of a slightly increased lactate concentration in the pylorus in the midst of the measurements. The maximum air pressure recorded varied from 3 to 75 mmHg (mean 24.5 mmHg) and the majority of ruptures (n = 8) occurred in the GE junction. In one of them, clip displacement was noticed. Conclusions: No evidence of increased ischemia in the GE junction compared to the pylorus was recorded. Increased intraluminal pressure and stapling malfunction may play the most important role in leak appearance. abstract_id: PUBMED:26341085 Comparison of Reinforcement Techniques Using Suture on Staple-Line in Sleeve Gastrectomy. Background: Sleeve gastrectomy has become a common procedure in recent years for the treatment of morbid obesity; however, leak from the staple line is its main challenging complication. Despite numerous studies regarding leak after sleeve gastrectomy, there is still no consensus on reinforcement of the staple line in this procedure. The purpose of this study was to compare two methods of oversewing the staple line versus no reinforcement. Methods: Resected stomachs of 30 patients undergoing laparoscopic sleeve gastrectomy were evaluated for bursting pressure immediately after extraction from the abdomen. Reinforcement technique was applied in random order to 3 segments of the staple-line on each specimen: continuous Lembert's sutures, continuous through-and-through sutures, and no reinforcement. Bursting pressure was determined by injection of methylene blue solution into the lumen of the resected stomach and recording the pressure at which leakage occurred. Location of leak, intragastric pressure, and volume at first leak were recorded. Results: Baseline characteristics of patients were similar in randomized groups for order of reinforcement technique. Mean ischemia time of specimens was 17.4 ± 10.4 min. No leaks were observed in segments reinforced with Lembert's oversewing technique. The through-and-through reinforcement segments were first to leak in 21 out of 30 cases (70%) with mean leak pressure of 570 mmHg and mean leak volume of 399 ml. Leakage occurred in 9 segments (30%) with no reinforcement with a leak pressure of 329 mmHg and volume of 380 ml.
Conclusions: In vitro, Lembert's suture reinforcement technique on the stapled human stomach is associated with a lower leakage rate in comparison to through-and-through reinforcement and a non-reinforced staple line. abstract_id: PUBMED:35567877 Leak after sleeve gastrectomy with positive intraoperative indocyanine green test: Avoidable scenario? Background: The staple line gastric leak (GL) is estimated to be the most serious complication of the sleeve gastrectomy. The use of indocyanine green (ICG) has been introduced in minimally invasive surgery to show the vascularization of the stomach in real time and its application to the gastroesophageal junction (GE) during Laparoscopic Sleeve Gastrectomy (LSG) seems very promising. Case Presentation: We present the case of a 40-year-old female who underwent laparoscopic sleeve gastrectomy. The intraoperative indocyanine green test showed a small dark area in the proximal third of the staple line reinforced with fibrin glue. Two weeks later the patient presented to the emergency department (ED) with abdominal pain, fever, vomiting, intolerance to oral intake, and evidence of a leak on abdominal computed tomography (CT). The UIN for ClinicalTrial.gov Protocol Registration and Results System is: NCT05337644 for the Organization UFoggia. Conclusions: This case report shows that the intraoperative ICG test can be helpful in determining which patients are at greater risk of a leak and, more importantly, the cause of the leak, but further tests are needed to determine if the ICG predicts leak due to ischemia. abstract_id: PUBMED:26656668 Influence of intraoperative hypotension on leaks after sleeve gastrectomy. Background: Leak after a sleeve gastrectomy (SG) is a severe complication. Risk factors, such as regional ischemia, increased intraluminal pressure, technical failure of the stapling device, and surgeon error, have been reported. Objectives: It was hypothesized that intraoperative hypotension is another risk factor for leak, similar to that reported for colorectal surgery. Setting: Tertiary teaching hospital in The Netherlands. Methods: Results of a 7-year cohort of primary SGs were reviewed in relation to multiple intraoperative blood pressure measurements. The thresholds for the mean pressure were 40 to 70 mm Hg and for the systolic pressure 70 to 100 mm Hg. Only continuous episodes of 15 and 20 minutes were included. Results: Twenty-four leaks were identified in a cohort of 1041 primary SGs. Episodes of systolic blood pressure < 100 mm Hg for 15 min (P = .027) and 20 minutes (P = .008) were significantly related to a staple line leak. An episode of mean blood pressure < 70 mm Hg for 20 min was significantly related to leak (P = .014). Episodes with lower thresholds of pressure occurred less frequently and revealed no significant differences. Other identified risk factors were smoking (P = .019), fast-track recovery program (P = .006), use of a tri-stapler (P = .004), and duration of surgery (P = .000). In a multivariate analysis, only intraoperative systolic pressure < 100 mm Hg for 20 minutes remained significant (odds ratio, 2.45; P = .012). Conclusions: Intraoperative hypotension may contribute independently to a leak after SG. abstract_id: PUBMED:27301373 Leaks after laparoscopic sleeve gastrectomy: overview of pathogenesis and risk factors. Background: Leak is the second most common cause of death after bariatric surgery. The leak rate after laparoscopic sleeve gastrectomy (LSG) ranges between 1.1 and 5.3%.
The aim of the paper is to provide an overview of the current pathogenic and promoting factors of leakage after LSG on the basis of a recent literature review and to report the evidence-based preventive measures. Methods: Risk factors and pathogenesis of leakage after LSG were examined based on an extensive review of the literature and evidence-based analysis of the most recently published studies using the Oxford Centre for Evidence-Based Medicine 2011 levels of evidence. Results: Pathogenesis of leakage after LSG can be attributed to mechanical or ischemic causes. Many factors, either technically related or patient related, can predispose to leakage after LSG. Awareness of these predisposing factors and technical tips may decrease the incidence of leakage. Conclusions: This review reports factors promoting leak and gives technical recommendations to avoid leak after LSG based on the available evidence and expert consensus, which encompass: (1) use a bougie size ≥40 Fr, EL:1, (2) begin the gastric transection 5-6 cm from the pylorus, EL:2-3, (3) use appropriate cartridge colors from antrum to fundus, EL:1, (4) reinforce the staple line with buttress material, EL:1, (5) follow a proper staple line, (6) remove the crotch staples, EL:4, (7) maintain proper traction on the stomach before firing, (8) stay at least 1 cm away from the angle of His, EL:1, (9) check the bleeding from the staple line, (10) perform an intraoperative methylene blue test, EL:4. abstract_id: PUBMED:29467117 The Effects of Bougie Diameters on Tissue Oxygen Levels After Sleeve Gastrectomy: A Randomized Experimental Trial Background: Staple-line leak is the most frightening complication of laparoscopic sleeve gastrectomy, and several predisposing factors, such as using improper staple sizes regardless of gastric wall thickness, narrower bougie diameter, and ischemia of the staple line, have been asserted. Aims: To evaluate the effects of different bougie diameters on tissue oxygen partial pressure at the esophagogastric junction after sleeve gastrectomy. Study Design: A randomized and controlled animal experiment with a 1:1:1:1 allocation ratio. Methods: Thirty-two male Wistar Albino rats were randomly divided into 4 groups of 8 each. While 12-Fr bougies were used in groups 1 and 3, 8-Fr bougies were used in groups 2 and 4. Fibrin sealant application was also carried out around the gastrectomy line after sleeve gastrectomy in groups 3 and 4. Burst pressure of gastrectomy line, tissue oxygen partial pressure and hydroxyproline levels at the esophagogastric junction were measured and compared among groups. Results: Mortality was detected in 2 out of 32 rats (6.25%); one of them was in group 2, and the cause of this mortality was gastric leak. Gastric leak was detected in 2 out of 32 rats (6.25%). There was no significant difference in terms of burst pressures, tissue oxygen partial pressure and tissue hydroxyproline levels among the 4 groups. Conclusion: The use of a narrower bougie along with fibrin sealant did not have a negative effect on tissue perfusion and wound healing. abstract_id: PUBMED:32458365 Roux-en-Y Gastro-jejunostomy for Complex Leak After the "Nissen" Variant of Sleeve Gastrectomy. Background: Recently, improvised variants of sleeve gastrectomy (SG) were reported as alternative bariatric options in patients suffering from both morbid obesity and GERD, including mainly additional anterior or posterior fundoplication over a partially sleeved stomach.
Methods: We present the case of a 29-year-old male patient with a body mass index (BMI) of 46.2 kg/m² who underwent laparoscopic SG with concomitant posterior fundoplication: Nissen-SG (N-SG). At postoperative day (POD) 4, he presented with epigastric pain, nausea, and 40 °C fever. The abdomen was tender with signs of peritonitis. Explorative laparotomy displayed a massive gastric leak with generalized peritonitis. Peritoneal lavage was performed, and the patient was transferred to our department for the management of persistent SGL. Results: Initial management comprised total parenteral nutrition and broad-spectrum intravenous antibiotics. Three weeks later, the patient underwent laparoscopic exploration. As shown in the video, at least two leaks were identified, including one, anterior, catheterized by the pigtails, and the other one, posterior, impossible to reach endoscopically (Fig. 1). A residual abscess, located between the left crus, the pancreas, and the upper edge of the spleen, was evacuated. Eventually, Roux-en-Y gastro-jejunostomy was performed. Conclusion: The adjunction of a posterior fundoplication may have contributed to the multiple and complex occurrence of SGL. Having an ill-vascularized redundant fundus may have increased ischemia of the GE junction. Moreover, it is also more difficult to perform endoscopic treatment in a plicated and sleeved stomach. abstract_id: PUBMED:31290111 Indocyanine Green Fluorescent Angiography During Laparoscopic Sleeve Gastrectomy: Preliminary Results. Introduction: Indocyanine green (ICG) fluorescent angiography has been routinely applied for various laparoscopic procedures to evaluate the tissue blood supply. A promising branch for this technology is represented by bariatric surgery, especially to estimate the risk of gastric leak after laparoscopic sleeve gastrectomy (LSG), which seems mainly related to ischemia of the stomach. Materials And Methods: 43 consecutive patients from January 2018 to March 2019 underwent LSG in our institution, with intravenous injection of 5 ml ICG after creation of the gastric tube to evaluate its blood supply. Results: In all 43 cases, there were no adverse events related to ICG. The vascular supply to the stomach was judged "satisfactory" along the stapled line in all cases. However, one patient showed signs and symptoms indicative of gastric leak on the fifth post-op day, and the diagnosis was confirmed by CT scan with Gastrografin. Conclusions: From our preliminary data, the intraoperative view of the blood supply of the stomach does not seem to represent a prognostic factor for the risk of gastric leak, suggesting a complex multifactorial etiology (intragastric hypertension? Abnormal inflammatory response?) which needs further data to be established. abstract_id: PUBMED:25320526 Gastric leaks post sleeve gastrectomy: review of its prevention and management. Gastric sleeve gastrectomy has become a frequent bariatric procedure. Its apparent simplicity hides a number of serious, sometimes fatal, complications. This is more important in the absence of an internationally adopted algorithm for the management of the leaks complicating this operation. Debate exists even regarding the definition of a leak, with several classification systems that can be used to predict the cause of the leak, and also to determine the treatment plan. Causes of leak are classified as mechanical, technical and ischemic causes.
After defining the possible causes, the authors went on to suggest a number of preventive measures to decrease the leak rate, including gentle handling of tissues, staple line reinforcement, larger bougie size, and routine peroperative use of the methylene blue test. In our review, we noticed that the most important clinical signs or symptoms in patients with gastric leaks are fever and tachycardia, which mandate the use of abdominal computed tomography, associated with an upper gastrointestinal series and/or gastroscopy if no leak was detected. After diagnosis, the management of leak depends mainly on the clinical condition of the patient and the onset time of leak. It varies between prompt surgical intervention in unstable patients and conservative management in stable ones in whom leaks present late. The management options also include endoscopic interventions with closure techniques or, more commonly, exclusion techniques with an endoprosthesis. The aim of this review was to highlight the causes and thus the prevention modalities and find a standardized algorithm to deal with gastric leaks post sleeve gastrectomy. abstract_id: PUBMED:28681283 The visualization of gastro-esophageal junction vascular supply during a laparoscopic sleeve gastrectomy. Role of a new device. Leaks arising at the level of the gastro-esophageal junction remain a major complication in patients who undergo a laparoscopic sleeve gastrectomy. Besides technical pitfalls, these fistulas are mainly attributable to tissue ischemia following the preparation of the gastric sleeve, explained by the irregular vascularization often present in the area of the gastro-esophageal junction. This video presentation shows how the utilization of a new model of bougie can help prevent a leak, allowing a better visualization of the particular vascular anatomy present at the level of the gastro-esophageal junction area. Currently, 25/334 (18 F-7 M) (7.4%) patients, who underwent a laparoscopic sleeve gastrectomy at our institution, were operated on using this new model of bougie. No leaks have been reported so far. Answer: Tissue ischemia has been considered a potential contributing factor to leaks after sleeve gastrectomy (SG), but the evidence is not conclusive. Some studies suggest that ischemia may not be the primary cause of leaks. For instance, an experimental study on Landrace swine found no evidence of increased ischemia in the gastroesophageal (GE) junction compared to the pylorus, suggesting that increased intraluminal pressure and stapling malfunction may play more significant roles in leak appearance (PUBMED:24374891). Similarly, a case report indicated that an intraoperative indocyanine green test could help identify patients at greater risk of leaks, but it was unclear if the leaks were due to ischemia, highlighting the need for further research to determine the role of ischemia in leak development (PUBMED:35567877). Other studies have focused on different aspects of the surgical procedure that could influence leak rates. For example, a study comparing reinforcement techniques on the staple-line in SG found that Lembert's suture reinforcement technique was associated with a lower leakage rate compared to through-and-through reinforcement and non-reinforced staple-line, but this study did not directly address the role of ischemia (PUBMED:26341085).
Another study suggested that intraoperative hypotension might contribute independently to leaks after SG, drawing a parallel with findings from colorectal surgery (PUBMED:26656668). A comprehensive review of the pathogenesis and risk factors for leaks after LSG identified both mechanical and ischemic causes, with several technical and patient-related factors that could predispose to leakage. The review provided technical recommendations to avoid leaks, but it did not establish ischemia as the sole or primary cause (PUBMED:27301373). Additionally, a randomized experimental trial on rats found that the use of narrower bougie diameters with fibrin sealant did not negatively affect tissue perfusion and wound healing, suggesting that ischemia might not be the main concern when using smaller bougies (PUBMED:29467117). In summary, while tissue ischemia has been considered a potential risk factor for leaks after SG, the evidence does not definitively establish it as the primary cause. Other factors, such as surgical technique, intraluminal pressure, and intraoperative hypotension, may also play significant roles in the development of leaks post-SG.
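Editorial note: the adjusted odds ratio of 2.45 for prolonged intraoperative hypotension cited above (PUBMED:26656668) comes from a multivariable logistic regression. The sketch below shows, on simulated data, how such an adjusted odds ratio might be estimated; the variable names, covariates, and effect sizes are assumptions for illustration and deliberately cover only a subset of the risk factors the study examined.

# Minimal sketch: adjusted odds ratio for staple-line leak from a
# multivariable logistic regression on simulated (hypothetical) data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "hypotension": rng.integers(0, 2, n),   # >=20 min of systolic BP < 100 mm Hg
    "smoking": rng.integers(0, 2, n),
    "op_minutes": rng.normal(75.0, 15.0, n),
})
# Simulate leak with higher risk when hypotension is present.
linpred = -4.0 + 0.9 * df["hypotension"] + 0.5 * df["smoking"] + 0.01 * df["op_minutes"]
df["leak"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-linpred))).astype(int)

fit = smf.logit("leak ~ hypotension + smoking + op_minutes", df).fit(disp=False)
odds_ratios = np.exp(fit.params)            # exponentiated coefficients = adjusted odds ratios
print(odds_ratios[["hypotension", "smoking"]])

Exponentiating the hypotension coefficient gives the adjusted odds ratio analogous to the 2.45 reported in the study, i.e., the leak odds multiplier attributable to hypotension after accounting for the other covariates in the model.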
Instruction: Does endothelin-1 play a role in the renal function of cirrhotic patients? Abstracts: abstract_id: PUBMED:9785916 Does endothelin-1 play a role in the renal function of cirrhotic patients? Methods: We investigated plasma and urinary endothelin-1 levels in eleven cirrhotic patients and ten healthy control subjects, to evaluate whether endothelin is involved in renal functional alterations of liver cirrhosis. Results: No significant difference in plasma ET-1 levels was observed between the two groups (p > 0.05) but urinary ET-1 excretion was significantly higher in cirrhotics than in controls (p < 0.001). Creatinine clearance (mean 56 ± 7.6 ml/min) showed an inverse correlation with plasma ET-1 levels (p < 0.05) in cirrhotics. Conclusions: We believe this may be caused by enhanced local ET-1 activity linked to an up-regulation of its specific receptors and/or increased renal synthesis, resulting in augmented urinary excretion. abstract_id: PUBMED:12270740 Role of endothelin-1 in a cirrhotic rat model with endotoxin induced acute renal failure. BACKGROUND/AIMS: Bacterial infections are known to trigger renal failure in patients with cirrhosis. However, the mechanisms for this process are unclear. The aim of this study was to investigate the role of endothelin-1 (ET-1) in a cirrhotic rat model with endotoxin-induced renal failure using the mixed ET-1 receptor antagonist bosentan. METHODS: Cirrhosis was induced by twice weekly intraperitoneal injections of CCl4 together with phenobarbital in drinking water. Cirrhotic and non-cirrhotic rats were either pretreated with physiological saline or bosentan prior to administration of low-dose endotoxin. Urine and blood samples were then collected within a period of 3 h for the estimation of ET-1 and NO3-/NO2- levels (nitric oxide metabolites: NOx) and renal function tests. RESULTS: Cirrhotic rats had higher ET-1 and NOx levels in comparison with non-cirrhotic rats. Endotoxin administration to cirrhotic rats led to deterioration of renal function and elevation of plasma ET-1 and NOx levels. Bosentan pretreatment prior to endotoxin administration caused an increase in the urine volume and creatinine clearance of cirrhotic rats, but had no effect on Na+ excretion. CONCLUSION: ET-1 has a significant role in endotoxin-induced renal impairment in cirrhotic rats, and ET-1 receptor antagonism provides partial protection of renal function. abstract_id: PUBMED:12206011 The protection of renal function in the ACEI treatment of renal hypertension Objective: To explore the influence of angiotensin-converting enzyme inhibitors (ACEI) on plasma endothelin (ET-1), nitric oxide (NO) and renal function in renal-hypertension patients. Methods: Sixty renal-hypertension patients (Group II) were treated with ACEI (lotensin) for 10 weeks; their blood pressure (BP), plasma ET-1, NO, and renal function (BUN, Scr and proteinuria) were measured before and after the treatment. Thirty healthy persons (Group I) acted as controls. Results: The level of plasma ET-1 was higher and plasma NO was lower in Group II than in Group I. After ACEI treatment, plasma ET-1 and proteinuria were decreased (P < 0.01) and NO increased significantly in Group II (P < 0.01), while BUN and Scr decreased in patients with abnormal renal function (Group II2) (P < 0.05, P < 0.01).
Conclusion: The study indicates that ACEI is effective for renal hypertension; it decreases plasma ET-1 and increases NO in renal hypertension patients; ACEI may play an important role in protecting renal function and delaying the progression of chronic renal failure. abstract_id: PUBMED:12746226 Effects of the neutral endopeptidase inhibitor thiorphan on cardiovascular and renal function in cirrhotic rats. 1. Cirrhosis is associated with cardiovascular and renal dysfunction including sodium retention. Many vasoactive peptides such as atrial natriuretic peptide (ANP) and endothelin-1 (ET-1) are degraded by neutral endopeptidase 24.11 (NEP). We investigated the hemodynamic and renal effects of thiorphan, a NEP inhibitor, in a rat cirrhosis model. 2. Cirrhosis was induced by chronic bile duct ligation, and controls had sham operation. Systemic and renal hemodynamics in conscious, restrained animals were determined using radiolabeled microspheres, and glomerular filtration rate (GFR) was measured by (3)H-inulin clearance. Plasma ANP and ET-1, and renal cGMP and Na(+)-K(+) ATPase activity were assayed. These variables were measured at baseline and after intravenous infusion of thiorphan (0.5 mg kg(-1) loading dose followed by 0.1 mg kg(-1) min(-1) x 30 min). 3. Thiorphan significantly decreased cardiac output, and increased systemic vascular resistance in controls, whereas in cirrhotic rats these variables were unchanged. 4. Compared to the controls, cirrhotic rats showed a decreased baseline GFR and urine sodium excretion, and the latter was significantly increased by thiorphan. 5. Thiorphan increased plasma ET-1 levels in controls, but not cirrhotic rats. ANP levels were not significantly increased in either group by thiorphan. 6. Thiorphan significantly increased cGMP concentrations and decreased Na(+)-K(+) ATPase activity of renal medulla but not cortex in cirrhotic rats; no effect was observed in the control rats. 7. We conclude that thiorphan induces natriuresis in cirrhotic rats by a direct renal medullary mechanism via cGMP and Na(+)-K(+) ATPase, without affecting systemic hemodynamics. This may potentially be useful in patients with ascites. abstract_id: PUBMED:12080656 Study on the relationship between plasma endothelin nitric oxide concentration and renal hypertension and renal function Objective: To investigate the relationship between plasma endothelin (ET) and nitric oxide (NO) levels and renal hypertension and renal function. Methods: The plasma concentration of ET-1 was detected by immunofluorescence assay. The plasma concentration of NO was detected by biochemistry assay. Results: 1. In renal disease patients, the plasma concentration of ET-1 was markedly elevated, and the plasma concentration of NO was decreased, compared with the healthy subjects (P < 0.01). 2. Plasma concentration of ET-1 was markedly increased and plasma concentration of NO was decreased in the patients with renal hypertension. 3. Plasma level of ET-1 was higher, and plasma level of NO was lower, in the patients with renal function damage than in those without renal function damage. 4. BP, BUN and Scr were positively correlated with plasma ET-1, but they were negatively correlated with plasma concentration of NO. Conclusion: Plasma ET-1 and NO may play an important role in the pathogenesis of renal hypertension; the change of their levels may be related to the progression of these renal diseases. abstract_id: PUBMED:15788437 Enhanced external counterpulsation: a new technique to augment renal function in liver cirrhosis.
Background: Advanced liver cirrhosis is characterized by cardiovascular changes, such as low arterial blood pressure, peripheral vasodilation and renal vasoconstriction. As a consequence, renal hypoperfusion, impaired diuresis and natriuresis and eventual hepatorenal syndrome may ensue. Previous studies using head-out water immersion to increase central blood volume have demonstrated the functional nature of the renal abnormalities. Enhanced external counterpulsation (EECP) is a new non-invasive cardiac assist device to augment diastolic blood pressure by electrocardiogram-triggered diastolic inflation and deflation of cuffs wrapped around the lower extremities. We investigated whether EECP would improve renal dysfunction of liver cirrhosis. Methods: Twelve healthy controls and 19 patients with liver cirrhosis were observed during 2 h of baseline followed by 2 h of EECP. The following parameters of renal and cardiovascular function were measured: renal plasma flow by para-aminohippurate clearance, glomerular filtration rate (GFR) by inulin clearance, urine flow rate, urinary excretion rates of sodium and chloride, mean arterial blood pressure (MAP), renal vascular resistance (RVR) and plasma concentrations of renin, atrial natriuretic peptide (ANP), endothelin-1, antidiuretic hormone, epinephrine and N-epinephrine. Results: EECP was well tolerated by healthy controls and cirrhotic patients alike. EECP increased MAP (cirrhotic patients: from 74+/-18 to 88+/-20 mmHg, P<0.01; controls: from 89+/-8 to 94+/-5 mmHg, P = NS) and ANP (cirrhotic patients: from 23+/-18 to 30+/-20 ng/l, P<0.05; controls: from 11+/-4 to 16+/-5 ng/l, P<0.01). The plasma renin concentration decreased (cirrhotic patients: from 98+/-98 to 58+/-57 ng/l, P<0.01; controls: from 4.6+/-1.6 to 3.4+/-1.1 ng/l, P<0.01). This was associated with improvement of the urinary flow rate (cirrhotic patients: from 3.6+/-1.8 to 4.6+/-0.7 ml/min, P<0.05; controls: from 1.8+/-1.5 to 2.8+/-1.9 ml/min, P<0.05), as well as of the sodium and chloride excretion rates in both groups. However, in contrast to healthy controls, GFR and renal plasma flow in cirrhotic patients failed to rise significantly. Renal vascular resistance fell numerically in healthy controls (68+/-5 vs 55+/-4 mmHg·min/l; P = NS). In contrast, RVR showed a significant increase by approximately 20% in cirrhosis (67+/-4 vs 80+/-8 mmHg·min/l; P<0.05). Endothelin-1 levels fell in controls (0.38+/-0.42 vs 0.31+/-0.35; P<0.05), whereas they remained statistically unchanged in cirrhotic patients. Epinephrine, N-epinephrine and vasopressin were not altered by EECP in either group. Conclusions: EECP is an effective procedure to augment renal excretory function in healthy volunteers as well as in patients with cirrhosis. In healthy volunteers, GFR and renal plasma flow increased during EECP. In contrast, these parameters remained unchanged in the patients and their renal vascular resistance increased during EECP. Therefore, EECP improves diuresis, but does not influence the vasoconstrictive dysregulation of the kidneys in liver cirrhosis. abstract_id: PUBMED:19150307 Superimposed coagulopathic conditions in cirrhosis: infection and endogenous heparinoids, renal failure, and endothelial dysfunction. In this article, the authors discuss three pathophysiologic mechanisms that influence the coagulation system in patients who have liver disease.
First, bacterial infections may play an important role in the cause of variceal bleeding in patients who have liver cirrhosis, affecting coagulation through multiple pathways. One of the pathways through which this occurs is dependent on endogenous heparinoids, on which the authors focus in this article. Secondly, the authors discuss renal failure, a condition that is frequently encountered in patients who have liver cirrhosis. Finally, they review dysfunction of the endothelial system. The role of markers of endothelial function in cirrhotic patients, such as von Willebrand factor and endothelin-1, is discussed. abstract_id: PUBMED:37899762 Enhanced external counterpulsation, focusing on its effect on kidney function, and utilization in patients with kidney diseases: a systematic review. Background: Enhanced external counterpulsation (EECP) is provided by a noninvasive device positively affecting cardiovascular function via mechanisms called diastolic augmentation and systolic unloading. The renal aspects of EECP therapy have not been extensively investigated. Objectives: To assess the effect of EECP on renal function and to determine the application in patients with kidney disease. Methods: MEDLINE, EMBASE, SCOPUS, and Cochrane CENTRAL databases were searched for all studies involving EECP treatments. The title and abstract of all searched literatures were screened, and those focusing on renal outcome or conducting in kidney disease patients were selected. Results: Eight studies were included in the qualitative analysis. EECP increases stroke volume, mean arterial pressure, renal artery blood flow, renal plasma flow, glomerular filtration rate (GFR), plasma atrial natriuretic peptide, urine volume, and urinary sodium chloride excretion, but reduces the plasma concentration of renin and endothelin-1 in healthy subjects. A single session of EECP after radioactive contrast exposure could provide increased contrast clearance, and this reduces contrast-induced kidney injury in patients, irrespective of previous kidney function. Thirty-five-hour sessions of EECP treatment were illustrated to increase long-term estimated GFR in patients with chronic angina and heart failure. In cirrhotic patients, EECP fails to improve GFR and renal vascular resistance. EECP device could maintain blood pressure, decrease angina symptoms, and increase cardiac perfusion in hemodialysis patients. Conclusion: EECP treatment potentially increases renal perfusion and prevents kidney injury in several conditions. EECP possibly provides beneficial effects on hemodynamics and cardiac function in hemodialysis patients. abstract_id: PUBMED:30095297 Endotoxemia-enhanced renal vascular reactivity to endothelin-1 in cirrhotic rats. Hepatorenal syndrome (HRS), a severe complication of advanced cirrhosis, is defined as hypoperfusion of kidneys resulting from intense renal vasoconstriction in response to generalized systemic arterial vasodilatation. Nevertheless, the mechanisms have been barely investigated. Cumulative studies demonstrated renal vasodilatation in portal hypertensive and compensated cirrhotic rats. Previously, we identified that blunted renal vascular reactivity of portal hypertensive rats was reversed after lipopolysaccharide (LPS). This study was therefore conducted to delineate the sequence of renal vascular alternation and underlying mechanisms in LPS-treated cirrhotic rats. Sprague-Dawley rats were randomly allocated to receive sham surgery (Sham) or common bile duct ligation (CBDL). 
Endotoxemia was induced with LPS on the 28th day after surgery. Kidney perfusion was performed at 0.5 or 3 h after LPS to evaluate renal vascular response to endothelin-1 (ET-1). Endotoxemia increased serum ET-1 levels (P < 0.0001) and renal arterial blood flow (P < 0.05) in both Sham and CBDL rats. CBDL rats showed enhanced renal vascular reactivity to ET-1 at 3 h after LPS (P = 0.026). Pretreatment with an endothelin receptor type A (ETA) antagonist abrogated the LPS-enhanced renal vascular response in CBDL rats (P < 0.001). There was significantly lower inducible nitric oxide synthase (iNOS) expression but higher ETA and phosphorylated extracellular signal-regulated kinase (p-ERK) expression in the renal medulla of endotoxemic CBDL rats (P < 0.05). We concluded that LPS-induced renal iNOS inhibition, ETA upregulation, and subsequent ERK signaling activation may participate in renal vascular hyperreactivity in cirrhosis. ET-1-targeted therapy may be feasible in the control of HRS. NEW & NOTEWORTHY Hepatorenal syndrome (HRS) occurred in advanced cirrhosis after large-volume paracentesis or bacterial peritonitis. We demonstrated that intraperitoneal lipopolysaccharide (LPS) enhanced renal vascular reactivity to endothelin-1 (ET-1) in cirrhotic rats, accompanied by inducible nitric oxide synthase inhibition, endothelin receptor type A (ETA) upregulation, and subsequent extracellular signal-regulated kinase activation in the renal medulla. Pretreatment with an ETA antagonist abrogated the LPS-enhanced renal vascular response in common bile duct ligation rats. These findings suggest that further clinical investigation of ET-1-targeted therapy may be feasible in the control of HRS. abstract_id: PUBMED:8828357 Endothelial cell dysfunction in patients with impaired renal function In order to investigate endothelial cell dysfunction in patients with impaired renal function, we measured circulating endothelin (ET-1) and thrombomodulin (Tm) concentrations used as markers for endothelial cell injury in patients with renal failure. 1) ET-1 and Tm were significantly higher in patients with renal failure and pre-dialysis patients than in normal subjects. Tm in CRF patients was significantly greater than that in ARF patients. In contrast, ET-1 was significantly greater in ARF than in CRF. 2) A positive correlation was found between serum creatinine concentration (Cr) and Tm in pre-dialysis patients. However, no correlation was found between Cr and ET-1. 3) A positive correlation was found between Tm and the duration of dialysis in HD patients, but not in CAPD patients. 4) With the improvement of renal function after regular HD treatment, a substantial reduction was found in ARF patients in both Tm and ET-1, but not in CRF patients. The present study suggests the presence of endothelial cell dysfunction in patients with impaired renal function. The progression of endothelial cell damage may differ between patients on HD and those on CAPD. In addition, it is suggested that endothelial cell dysfunction reverses in ARF patients with improved renal function. Answer: Yes, endothelin-1 (ET-1) appears to play a role in the renal function of cirrhotic patients. Studies have shown that cirrhotic patients have higher urinary ET-1 excretion compared to controls, and there is an inverse correlation between creatinine clearance and plasma ET-1 levels in cirrhotic patients, suggesting that ET-1 may be involved in renal functional alterations associated with liver cirrhosis (PUBMED:9785916).
Additionally, in a cirrhotic rat model with endotoxin-induced acute renal failure, ET-1 was found to have a significant role in the renal impairment, and ET-1 receptor antagonism provided partial protection of renal function (PUBMED:12270740). Furthermore, in cirrhotic rats, enhanced external counterpulsation (EECP) failed to improve glomerular filtration rate (GFR) and renal vascular resistance, indicating that while EECP can improve diuresis, it does not influence the vasoconstrictive dysregulation of the kidneys in liver cirrhosis, which may be related to the role of ET-1 (PUBMED:15788437). Moreover, in cirrhotic rats, endotoxemia was shown to enhance renal vascular reactivity to ET-1, suggesting that ET-1-targeted therapy may be feasible in controlling hepatorenal syndrome (HRS), a severe complication of advanced cirrhosis characterized by intense renal vasoconstriction (PUBMED:30095297). Overall, these findings suggest that ET-1 is indeed involved in the renal function alterations observed in cirrhotic patients.
Instruction: Do the benefits of shorter hospital stay associated with the use of fleece-bound sealing outweigh the cost of the materials? Abstracts: abstract_id: PUBMED:18055483 Do the benefits of shorter hospital stay associated with the use of fleece-bound sealing outweigh the cost of the materials? Objectives: To compare the cost of materials and hospitalization for standard techniques (suturing, stapling and electrocautery) for sealing the lung after pulmonary resection with those for a fleece-bound sealing procedure. Methods: This cost comparison analysis uses as its basis a prospective randomised clinical trial involving 152 patients with pulmonary lobectomy/segmentectomy (standard technique group: 77 patients; fleece-bound sealing group: 75 patients). The cost comparison was performed from the economic perspective of Austrian and German hospitals, taking into consideration the cost of materials for the two alternatives as well as the mean time to hospital discharge. Results: The clinical study found significantly smaller postoperative air leaks in the fleece-bound sealing group. The mean times to chest drain removal and to hospital discharge were also significantly reduced after application of fleece-bound sealing [5.1 vs. 6.3 days (P=0.022) and 6.2 vs. 7.7 days (P=0.01), respectively]. The cost of materials for sealing air leaks amounted to €47 per patient in the standard technique group and €410 per patient in the fleece-bound sealing group. The 1.5-day reduction in the length of hospital stay associated with fleece-bound sealing represents a saving of €462 per patient. Conclusions: There was an overall saving of €99 for the fleece-bound sealing procedure compared to standard techniques for sealing the lung following pulmonary resection. abstract_id: PUBMED:26803987 Total shoulder arthroplasty for proximal humerus fracture is associated with increased hospital charges despite a shorter length of stay. Background: Operation choice is a complex decision in the surgical management of proximal humerus fractures. Recently, there has been an increase in the use of total shoulder arthroplasty (TSA) for complex fracture patterns. Hypothesis: Patients with proximal humerus fractures who receive TSA are more likely to have higher hospital charges and a prolonged length of stay relative to patients receiving hemiarthroplasty (HA), open reduction with internal fixation (ORIF) or closed reduction with internal fixation (CRIF). Materials And Methods: A statewide electronic database was used to identify 13,316 hospital admissions from 2000-2011 where a proximal humerus fracture was surgically managed, in an effort to determine the effect of operation choice on cost and length of stay. A univariate analysis was performed to examine overall trends in surgical management. Additionally, a periodic, multivariate logistic regression analysis was used to determine how operation choice affected the odds of a high-cost hospital stay or a prolonged length of stay after controlling for age, comorbidity burden, gender, and insurance type. Results: After controlling for confounding factors, patients receiving total shoulder arthroplasty (TSA) were 2.25 times more likely to have high total hospital charges than patients receiving HA and 3.21 times more likely than patients receiving ORIF. Additionally, TSA was found to be a significant negative predictor of prolonged length of stay (pLOS). HA, ORIF and CRIF did not significantly predict pLOS.
Discussion: The use of TSA for acute proximal humerus fractures is associated with increased hospital costs despite a shorter length of stay when compared to other operative choices. As reverse total shoulder arthroplasty becomes more popular for treatment of this injury, it is important that functional outcomes be interpreted in the context of relative cost trade-offs. Level Of Evidence: Level IV. abstract_id: PUBMED:27593828 Fleece-Bound Tissue Sealing in Microvascular Decompression. Aim: Cerebrospinal fluid (CSF) leakage is a feared complication after microvascular decompression (MVD). In this study, we present our experience of fleece-bound tissue sealing in MVD with an aim to minimize the rate of postoperative CSF leakage. Material And Methods: We treated 50 patients (female/male: 26/24) with neurovascular compression (NVC) syndromes (trigeminal neuralgia, hemifacial spasm and glossopharyngeal neuralgia) by MVD from 2003 to 2006. All patients underwent retromastoid craniectomy and duraplasty by fleece-bound tissue sealing using the so-called "sandwich technique" by a three-layer reconstruction and cranioplasty. Results: In 49 (98%) of 50 patients, we did not observe postoperative CSF leakage. One patient (2%) suffered postoperative CSF leakage and required surgical revision. Conclusion: Fleece-bound tissue sealing by a three-layer reconstruction is effective and safe in the prevention of cerebrospinal fluid leakage in microvascular decompression. abstract_id: PUBMED:26432722 Minimally invasive mitral valve surgery is associated with equivalent cost and shorter hospital stay when compared with traditional sternotomy. Objective: Mitral valve surgery is increasingly performed through minimally invasive approaches. There are limited data regarding the cost of minimally invasive mitral valve surgery. Moreover, there are no data on the specific costs associated with mitral valve surgery. We undertook this study to compare the costs (total and subcomponent) of minimally invasive mitral valve surgery relative to traditional sternotomy. Methods: All isolated mitral valve repairs performed in our health system from March 2012 through September 2013 were analyzed. To ensure like sets of patients, only those patients who underwent isolated mitral valve repairs with preoperative Society of Thoracic Surgeons scores of less than 4 were included in this study. A total of 159 patients were identified (sternotomy, 68; mini, 91). Total incurred direct cost was obtained from hospital financial records. Results: Analysis demonstrated no difference in total cost (operative and postoperative) of mitral valve repair between mini and sternotomy ($25,515 ± $7598 vs $26,049 ± $11,737; P = .74). Operative costs were higher for the mini cohort, whereas postoperative costs were significantly lower. Postoperative intensive care unit and total hospital stays were both significantly shorter for the mini cohort. There were no differences in postoperative complications or survival between groups. Conclusions: Minimally invasive mitral valve surgery can be performed with overall equivalent cost and shorter hospital stay relative to traditional sternotomy. There is greater operative cost associated with minimally invasive mitral valve surgery that is offset by shorter intensive care unit and hospital stays. abstract_id: PUBMED:26676672 Early use of V2 receptor antagonists is associated with a shorter hospital stay and reduction in in-hospital death in patients with decompensated heart failure. 
Tolvaptan is an oral antagonist of arginine vasopressin receptor 2 that has been approved in Japan to reduce congestive symptoms in patients with heart failure refractory to loop diuretics. However, it is unknown whether the early use of tolvaptan results in better clinical outcomes. We retrospectively analyzed 102 consecutive patients with decompensated heart failure treated with tolvaptan at our hospital. A given patient was defined as a responder when the maximum urine volume was greater than 150 % of that observed before tolvaptan use. A logistic regression analysis revealed that the early use of tolvaptan (within 3 days after admission) was an independent factor associated with tolvaptan responsiveness. There were no significant differences in the baseline clinical parameters between the early and late tolvaptan use groups. However, the early use of tolvaptan was associated with higher tolvaptan responsiveness, a shorter duration of carperitide infusion, earlier initiation of ambulatory cardiac rehabilitation, shorter hospital stay, lower rate of in-hospital death. The early use of tolvaptan was associated with a shorter hospital stay and reduced mortality in our retrospective cohort. It might therefore be beneficial to consider administering tolvaptan earlier in patients with heart failure. abstract_id: PUBMED:28475261 Impact of the Orthopaedic Nurse Practitioner role on acute hospital length of stay and cost-savings for patients with hip fracture: A retrospective cohort study. Aims: To compare acute hospital length of stay and cost-savings for patients with hip fracture before and after commencement of the Orthopaedic Nurse Practitioner and identify variables that increase length of stay in hospital. Background: Globally, hip fractures are associated with significant morbidity and mortality. Whilst the practical benefits of the Orthopaedic Nurse Practitioner have been anecdotally shown, an analysis showing the cost-saving benefits has yet to be published. Design: A retrospective cohort study. Methods: Data from two population-based cohorts (2010, 2013) of hip fracture patients aged ≥65 years were extracted from the electronic hospital database at a large Western Australian tertiary metropolitan hospital. Multivariate linear regression was used to model factors affecting length of stay in hospital. A simple economic analysis was undertaken and cost-savings were estimated. Results: For comparison (n = 354) and intervention (n = 301) groups, average age was 84 years and over 70% were female. Analyses showed length of stay was shorter in 2013 compared with 2010 (4.4-5.3 days). Shorter length of stay was associated with type of procedure and surgery within 24-hr and longer length of stay was associated with co-morbid conditions of pulmonary disease, congestive heart failure, dementia, anaemia on admission and complications of delirium, urinary tract infection, myocardial infarction and pneumonia. The cost-savings to the hospital over one year was $354,483 and the net annual cost-savings per patient was $1,178. Conclusion: Implementation of the Orthopaedic Nurse Practitioner role for care of hip fracture patients can reduce acute hospital length of stay resulting in important cost-savings. abstract_id: PUBMED:23019185 Making greater use of dedicated hospital observation units for many short-stay patients could save $3.1 billion a year. 
Using observation units in hospitals to provide care to certain patients can be more efficient than admitting them to the hospital and can result in shorter lengths-of-stay and lower costs. However, such units are present in only about one-third of US hospitals. We estimated national cost savings that would result from increasing the prevalence and use of observation units for patients whose stay there would be shorter than twenty-four hours. Using a systematic literature review, national survey data, and a simulation model, we estimated that if hospitals without observation units had them in place, the average cost savings per patient would be $1,572, annual hospital savings would be $4.6 million, and national cost savings would be $3.1 billion. Future policies intended to increase the cost-efficiency of hospital care should include support for observation unit care as an alternative to short-stay inpatient admission. abstract_id: PUBMED:23978425 Anastomotic stability and wound healing of colorectal anastomoses sealed and sutured with a collagen fleece in a rat peritonitis model. Background/objective: Anastomotic insufficiency is associated with increased morbidity and mortality. A collagen fleece that supports anastomosis is effective for preventing anastomosis insufficiency. The objective of this study was to compare between the stability of sutured anastomoses and that of anastomoses sealed with a thrombin/fibrinogen-coated collagen fleece in a rat peritonitis model. Methods: In 72 male Wistar rats, peritonitis was induced with a specially prepared human fecal solution. Surgery at the rectosigmoid junction was performed 24-36 hours later. The different anastomotic techniques used were circular sutured anastomoses, semicircular sutured anastomosis and closure of the anterior wall with collagen patch, and complete closure with a collagen fleece. Bursting pressure, histology of anastomosis, mRNA expression of collagen types I and III, matrix metalloproteinase-13, and vascular endothelial growth factor (VEGF) were investigated after 24 hours, 72 hours, and 120 hours. Results: All animals developed peritonitis of comparable severity. There were no differences in bursting pressures between the three suture techniques after 24 hours, 72 hours, or 120 hours. Anastomoses sealed with a collagen fleece appeared to be slightly less stable only at 24 hours, whereas they appeared to be more stable than semisutured or fully sutured anastomoses at 72 hours and 120 hours. Sealing with a collagen fleece was associated with an increase in granulation tissue, higher mRNA levels for collagen types I and III, and higher VEGF compared to sutured anastomoses. Conclusion: The use of a thrombin/fibrinogen-coated collagen fleece showed similar efficacy to conventional sutures in colorectal anastomoses in the presence of peritonitis inflammation, and may provide additional benefits due to an increase in mature granulation tissue. abstract_id: PUBMED:30012928 Length and Cost of Hospital Stay in Poor-Risk Patients With Critical Limb Ischemia Undergoing Revascularization. Background: The aim of the current study was to identify the distribution of length and cost of hospital stay and their associated risk factors in poor-risk Japanese critical limb ischemia (CLI) patients undergoing revascularization. 
Methods and Results: We analyzed prospectively collected data from 507 CLI patients who required assistance in their daily lives due to disability in activities of daily living and/or cognitive function impairment and who underwent revascularization. The median length and cost of hospital stay were 23 days (IQR, 9-52 days) and ¥2.25m (IQR, ¥1.33m-3.58m), respectively. Reduced albumin, tissue loss, infection, surgical reconstruction, and bilateral revascularization were associated with prolonged hospital stay (P=0.012, 0.019, <0.001, <0.001, and <0.001, respectively). Doubling the length of the hospital stay was associated with a 44% increase in hospital cost. Regular dialysis, surgical reconstruction, and bilateral revascularization were independently associated with an approximately 20% increase in the cost of hospital stay (all P<0.001). Conclusions: Length and cost of hospital stay varied considerably between patients. Low serum albumin, tissue loss, infection, surgical reconstruction, and bilateral revascularization were associated with longer hospital stay. Regular dialysis, surgical reconstruction, and bilateral revascularization were independently associated with an approximately 20% increase in the cost of hospital stay. abstract_id: PUBMED:33043743 G-CSF use post peripheral blood stem cell transplant is associated with faster neutrophil engraftment, shorter hospital stay and increased incidence of chronic GVHD. The use of G-CSF post allogeneic transplant has become a common practice to accelerate neutrophil engraftment. There is some controversy regarding its use. To further evaluate the effectiveness, we compared outcomes in patients who underwent PBSCT, either with or without the planned use of G-CSF post SCT. Among 162 consecutive patients from October 2012 to October 2014, 65 patients received G-CSF post-PBSCT, and 97 did not. More patients in the G-CSF group received MAC (78% vs. 55%). Patients who received G-CSF had earlier neutrophil engraftment (median 11 vs. 14 days) and shorter post-transplant hospital stay (median 16 vs. 20 days, p = 0.001). G-CSF use was associated with a higher rate of extensive chronic GVHD (44.3% vs. 61.5%, p = 0.027). G-CSF cost the equivalent of 0.25 hospital days but shortened the initial transplant admission by 4 days. Early cost-benefit may be later offset by the economic burden of chronic GVHD and associated complications. Answer: Yes, the benefits of shorter hospital stay associated with the use of fleece-bound sealing do outweigh the cost of the materials. According to the cost comparison analysis based on a prospective randomized clinical trial involving 152 patients, the use of fleece-bound sealing for lung sealing after pulmonary resection resulted in significantly smaller postoperative air leaks and reduced the mean times to chest drain removal and hospital discharge. The mean hospital discharge time was reduced from 7.7 days to 6.2 days (P=0.01), and the cost of materials for sealing air leaks was €410 per patient for the fleece-bound sealing group compared to €47 per patient for the standard technique group. Despite the higher cost of materials, the 1.5-day reduction in the length of hospital stay represented a saving of €462 per patient, leading to an overall saving of €99 for the fleece-bound sealing procedure compared to standard techniques (PUBMED:18055483).
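To make the cost trade-off explicit, the net figure can be reproduced from the numbers reported in PUBMED:18055483; the per-day bed cost shown below is not stated in the abstract but is implied by the reported €462 saving over 1.5 days, so it is offered only as an illustrative back-of-the-envelope check:
Additional material cost with fleece-bound sealing: €410 - €47 = €363 per patient.
Saving from the 1.5-day shorter hospital stay: €462 per patient (implying roughly €462 / 1.5 ≈ €308 per hospital day).
Net result: €462 - €363 = €99 saved per patient with fleece-bound sealing, consistent with the overall saving reported in the abstract.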