Instruction: Is it feasible to implement enteral nutrition in patients with enteroatmospheric fistulae? Abstracts: abstract_id: PUBMED:24920224 Is it feasible to implement enteral nutrition in patients with enteroatmospheric fistulae? A single-center experience. Background: Published experience in feeding patients with enteroatmospheric fistulae is scarce. This study aimed to determine if enteral nutrition (EN) could be safely delivered in the presence of enteroatmospheric fistula. Materials And Methods: This is a retrospective descriptive study from a major fistula treatment center in China. Medical records of patients who developed enteroatmospheric fistulae in the open abdomen after abdominal trauma were reviewed. The timing of initiation and achievement of full-strength (25 kcal/kg/d) EN after enteroatmospheric fistula were noted, as well as the incidence of feeding-associated complications and weaning of parenteral nutrition (PN). The outcomes of open abdomen and enteroatmospheric fistula were also noted. Results: Nine patients were included in this study. EN was successfully implemented in all patients. The median timing of initiation and achievement of full-strength EN after enteroatmospheric fistula was 9 (interquartile range [IQR], 3–22) and 27 (IQR, 22–43) days, respectively. Feeding-associated complications developed in 1 (11.1%) patient. All patients were liberated from PN at hospital discharge. Split-thickness skin grafting was performed in all patients, of whom 5 underwent successful delayed abdominal closure and 4 were awaiting definitive closure. Repair or resection of the enteroatmospheric fistula occurred in 8 (88.9%) patients. Conclusion: This study showed that EN could be safely implemented in patients with enteroatmospheric fistulae without complicating the treatment of the open abdomen and enteroatmospheric fistula. abstract_id: PUBMED:35094427 Chyme reinfusion improved outcomes after definitive surgery for small-intestinal enteroatmospheric fistula in patients with enteral nutrition. Purpose: In patients with small-intestinal enteroatmospheric fistula who are receiving enteral nutrition (EN), small-bowel function may be sufficient, yet without chyme reinfusion (CR) the intestine distal to the fistula may undergo disuse. CR may reverse such pathological changes and thereby improve outcomes following definitive surgery (DS) for small-intestinal enteroatmospheric fistula. This study investigated the effect of preoperative CR on outcomes after DS for small-intestinal enteroatmospheric fistula in patients receiving EN. Methods: According to whether CR was performed between January 2012 and December 2019, patients receiving DS for small-intestinal enteroatmospheric fistula were divided into a CR group and a non-CR group. The effect of preoperative CR was then investigated. Results: A total of 159 patients were finally enrolled, of whom 72 were in the CR group and 87 in the non-CR group. A total of 47 (29.56%) patients were found to have recurrent fistula after DS; the recurrent fistula rate was lower in the CR group (multivariate odds ratio = 0.557; 95% CI, 0.351-0.842; P = 0.019). CR was also shown to promote postoperative recovery of bowel function (hazard ratio [HR] = 1.982; 95% CI, 1.199-3.275; P = 0.008) and to shorten postoperative length of stay (LOS) (HR = 1.739; 95% CI, 1.233-2.453; P = 0.002).
Conclusion: Preoperative CR may reduce the incidence of recurrent fistula, the time to return of bowel function, and the postoperative LOS following DS for small-intestinal enteroatmospheric fistula. abstract_id: PUBMED:23490208 The use of negative-pressure wound therapy to manage enteroatmospheric fistulae in two patients with large abdominal wounds. Enteric fistulae are a relatively common complication of bowel surgery or of surgery in which the bowel has been exposed. Fistulae can present a significant threat to patients' well-being. Changes in surgical techniques, and in particular the rise in damage control surgery for emergency patients, have led to an increase in open abdominal wounds. The presence of an enteroatmospheric fistula on the surface of a wound can cause a number of distressing symptoms/issues for the patient whilst providing a significant challenge for the clinician. The loss of fluid, proteins and electrolytes will place the patient in danger of becoming hypokalaemic and malnourished. A variety of techniques are available; most refer to a method of isolating the fistula using stoma rings or washers and ostomy paste. The role of negative pressure in the management of wounds with fistulae is in its infancy; however, there is evidence to suggest that isolation techniques can be advantageous in managing wounds with fistulae. abstract_id: PUBMED:32573539 Management of patients with enterocutaneous fistulae. Enterocutaneous fistula (ECF) is the most serious postoperative life-threatening complication of various abdominal surgical interventions. Treatment of patients with ECF is associated with life-threatening complications including sepsis and septic shock, intestinal failure and severe water-electrolyte disorders, which cause high mortality rates (35-75% according to national authors and 6-33% according to foreign colleagues). This issue is especially relevant in cases of enteroatmospheric fistulae and high ECF with loss of intestinal contents of more than 500 ml per day. In the absence of correct conservative therapy, this quickly results in progression of sepsis and development of multiple organ failure. Surgery without complex preoperative preparation in this period may be fatal and lead to clinical aggravation and death of the patient in the early postoperative period. Each patient requires an individual approach; however, there are general principles of treatment too. This literature review describes the main aspects of conservative treatment of patients with enteric fistulae. abstract_id: PUBMED:22911565 Enteral nutrition is feasible in pediatric stem cell transplantation patients. We aimed to determine whether enteral nutrition (EN) is feasible in the daily practice of hematopoietic stem cell transplantation (HSCT). Nutritional records of 100 patients were evaluated. Patients with poor oral intake were fed by EN via a tube. A total of 79 patients required nutritional support. Of them, 71 were fed by EN only, five were fed by EN plus parenteral nutrition (PN), and three were fed by PN only. Median duration of EN was 21 days. In the EN-only group, 68% gained or maintained their weight. EN should be considered a feasible option for nutrition support in children undergoing HSCT. abstract_id: PUBMED:33368576 Nutrition Management of the High-Output Fistulae. Enterocutaneous fistulae (ECFs) are commonly encountered complications in medical and surgical practice. High-output fistulae are associated with significant morbidity and mortality, poor quality of life, and a substantial healthcare burden.
An interdisciplinary team approach is crucial to prevent and mitigate the adverse clinical consequences of high-output ECFs, including sepsis, metabolic derangements, and malnutrition. Patients with ECFs are at a significantly higher risk of developing malnutrition, and close monitoring by nutrition support professionals and/or a nutrition support team is an essential component of their medical management. High-output ECFs often require the initiation of nutrition support through either enteral or parenteral routes. Historically, parenteral nutrition (PN) has been the primary method of nutrition support in these patients. However, oral and enteral nutrition (EN) should remain viable options if an evaluation of the location of the ECF, the amount of remaining functional bowel, and the volume of ECF output identifies favorable conditions. Additionally, in contrast to PN, oral nutrition and EN are the preferred methods of feeding because of the maintenance of the structural and functional integrity of the gastrointestinal tract. The inclusion of pharmacological interventions can greatly assist with the reduction and stabilization of ECF output and thereby permit sustained enteral feeding. Initiation of supplemental or full PN will be required if oral nutrition and EN lead to metabolic derangements, fail to meet energy requirements, or do not maintain or improve the patient's nutrition status. The main focus of this review is to discuss the nutrition management of patients with high-output ECFs. abstract_id: PUBMED:27428564 A Retrospective, Observational Study of Enteral Nutrition in Patients with Enteroatmospheric Fistulas. Enteroatmospheric fistulas (EAFs) represent a challenging problem in patients with an open abdomen (OA). A retrospective, descriptive study was conducted to evaluate the effects of enteral alimentation on wound status and management and on nutrition. All patients with an EAF in an OA treated between October 2012 and December 2014 at a university hospital in Germany were included, with no exclusion criteria. Demographic and morbidity-related data collected included age, gender, surgeries, OA grading, body mass index (BMI), serum albumin, and serum creatinine. Underlying diseases and the time between the index operation and the formation of the OA and EAFs were analyzed in relation to the initiation of enteral nutrition (EN), which can aggravate and contaminate the OA owing to the intestinal secretions necessary for digestion. The OA was described in terms of size and area of the fascia defect and classified according to the Björck Scale. The number and location of the fistulas and the duration of negative pressure wound therapy (NPWT) were documented. Outcome parameters included fistula volume, wound management (eg, dressing change frequency, need for wound revision), feeding tolerance, systemic impact of nutrition, nutrition status at discharge, and mortality. Data were analyzed using primarily descriptive statistics. The Mann-Whitney test was used to evaluate changes in fistula volume and BMI; categorical data were compared using Fisher's exact test. A P value less than 0.05 was considered significant. Ten (10) patients (8 women; median age of participants 55.4 [range 44-71] years) were treated during the study time period. Seven (7) patients had the first fistula orifice in the upper jejunum (UJF); 8 had more than 1 fistula. EN was initiated with high-caloric liquid nutrition and gradually increased to 25 kcal/kg/day of liquid or solid nutrition. All patients were provided NPWT at 75 mm Hg to 100 mm Hg.
EN was not followed by a significant increase in median daily fistula volume (1880 mL versus 2520 mL, P = 0.25) or by a need for more frequent changes of NPWT dressings (days between changes 2.6 versus 2.9, P = 0.19). In 9 patients, wound complications such as inflammation or skin erosion decreased in both frequency and magnitude (eg, affected area). All patients achieved a sufficient oral intake, but only 3 were discharged from the hospital without parenteral nutrition. In this study, EN did not cause additional problems in wound management but did not provide sufficient alimentation in patients with a UJF. Prospective studies are needed to clarify associations between EN, nutrition, and wound management. abstract_id: PUBMED:10514751 Home enteral nutrition. Enteral nutrition in the home is applied to stabilized patients who do not require hospitalization or to chronically ill patients who can stay in their homes. However, ensuring the correct administration of this treatment requires a coordinated, expert multidisciplinary team. This article reviews the conditions for use of enteral nutrition in the home, the means of access, the nutritional formulas, the administration technique, and the complications enteral nutrition in the home may present. Furthermore, the composition and characteristics of the multidisciplinary team that will be in charge of carrying out this treatment are discussed. abstract_id: PUBMED:33629866 The XI SENPE Jesús Culebras Lecture. Enteral nutrition in critically ill patients. History of an evolution. Introduction: Enteral nutrition is part of the treatment plan designed for a great number of critically ill patients. After a first description in ancient Egypt, enteral nutrition developed rapidly only during the last century. Advances in indications, tube feeding methods, enteral formula selection, diagnosis and treatment of gastrointestinal-related complications, efficacy monitoring, and use of protocols for enteral nutrition administration in clinical practice have made this nutritional technique more feasible and safe for critically ill patients. Nevertheless, several issues in this field need more investigation to increase enteral nutrition development, efficacy, and safety in these patients. abstract_id: PUBMED:3929510 Enteral nutrition in gastrointestinal fistulas. Review of the literature. The use of enteral feeding systems in patients with enterocutaneous fistulae is reviewed. The following problems are discussed: methods of administration, effects of enteral nutrition on patients with a gastrointestinal fistula, and clinical results (nutritional schemes, timing of nutritional support and caloric input). It is concluded that an enteral feeding system is the first choice in cases of distal GIT fistulae and in low-output fistulae. In other cases, it may be used in combination with TPN. The need for fistula registration is thought to be of importance in view of prospective studies in this field. Answer: Yes, it is feasible to implement enteral nutrition (EN) in patients with enteroatmospheric fistulae. A study conducted at a major fistula treatment center in China showed that EN could be safely delivered to patients with enteroatmospheric fistulae without complicating the treatment of the open abdomen and enteroatmospheric fistula.
In this retrospective descriptive study, nine patients successfully received EN, with the median timing of initiation and of achievement of full-strength EN being 9 and 27 days, respectively, after the development of the fistula. Only one patient developed feeding-associated complications, and all patients were weaned off parenteral nutrition (PN) by hospital discharge (PUBMED:24920224). Additionally, another study found that chyme reinfusion (CR) in patients with small-intestinal enteroatmospheric fistula who were receiving EN improved outcomes after definitive surgery for the fistula. The study divided 159 patients into two groups, CR and non-CR, and found that the CR group had a lower rate of recurrent fistula, faster postoperative recovery of bowel function, and a shorter postoperative length of stay (PUBMED:35094427). Furthermore, a retrospective observational study in Germany evaluated the effects of enteral alimentation on wound status, wound management, and nutrition in patients with enteroatmospheric fistulas. The study found that EN did not cause additional problems in wound management but did not provide sufficient alimentation in patients with an upper jejunum fistula. All patients achieved a sufficient oral intake, but only three were discharged without parenteral nutrition (PUBMED:27428564). These studies indicate that with careful management and monitoring, EN can be a viable option for patients with enteroatmospheric fistulae, potentially improving patient outcomes and reducing reliance on parenteral nutrition.
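For readers unfamiliar with how the adjusted effect estimates quoted above are produced, the following is a minimal, purely illustrative sketch of a multivariate logistic regression yielding adjusted odds ratios, in the spirit of the CR analysis in PUBMED:35094427. All data are synthetic, and the covariates (age, albumin) are hypothetical stand-ins, not variables taken from the study.

```python
# Illustrative sketch only: how an adjusted odds ratio such as the
# OR = 0.557 reported for chyme reinfusion (PUBMED:35094427) is typically
# obtained from a multivariate logistic regression. All data are synthetic
# and the covariates are hypothetical, not taken from the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 159  # cohort size reported in the abstract

df = pd.DataFrame({
    "chyme_reinfusion": rng.integers(0, 2, n),  # 1 = CR group
    "age": rng.normal(50, 12, n),               # hypothetical covariate
    "albumin": rng.normal(35, 5, n),            # hypothetical covariate
})
# Simulate recurrence with a protective effect of chyme reinfusion
linpred = -0.5 - 0.6 * df["chyme_reinfusion"] + 0.01 * (df["age"] - 50)
df["recurrent_fistula"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

X = sm.add_constant(df[["chyme_reinfusion", "age", "albumin"]])
fit = sm.Logit(df["recurrent_fistula"], X).fit(disp=0)

# Exponentiated coefficients are the adjusted odds ratios with 95% CIs
ci = fit.conf_int()
summary = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
})
print(summary.round(3))
```

The same exponentiation of fitted coefficients underlies the hazard ratios quoted from the Cox models in these abstracts.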
Instruction: Diagnostic accuracy of the Barthel Index for measuring activities of daily living outcome after ischemic hemispheric stroke: does early poststroke timing of assessment matter? Abstracts: abstract_id: PUBMED:21183748 Diagnostic accuracy of the Barthel Index for measuring activities of daily living outcome after ischemic hemispheric stroke: does early poststroke timing of assessment matter? Background And Purpose: This study investigated the diagnostic accuracy of the Barthel Index (BI) in 206 stroke patients, measured within 72 hours, for activities of daily living at 6 months and determined whether the timing of BI assessment during the first days affects the accuracy of predicting activities of daily living outcome at 6 months. Methods: Receiver operating characteristic curves were constructed to determine the area under the curve and optimal cutoff points for BI at Days 2, 5, and 9. OR, sensitivity, specificity, positive predictive value, and negative predictive value were calculated to predict BI ≥19. Results: The area under the curve ranged from 0.785 on Day 2 to 0.837 and 0.848 on Days 5 and 9. Comparison of the receiver operating characteristic curves showed that the area under the curve was significantly different between Days 2 and 5 (P<0.001) and between Days 2 and 9 (P<0.001). No significant difference was found between Days 5 and 9 (P=0.08). Using a BI cutoff score of 7, the positive predictive value gradually increased from 0.696 on Day 2 to 0.817 on Day 5 to 0.864 on Day 9, whereas the negative predictive value declined from 0.778 on Day 2 to 0.613 on Day 9. Conclusions: Assessment of the BI early poststroke showed good discriminative properties for final outcome of BI at 6 months. However, Day 5 proved to be the earliest time for making an optimal prediction of final outcome of activities of daily living. The BI should be measured at the end of the first week in hospital-based stroke units for early rehabilitation management. abstract_id: PUBMED:20439108 Predictive value of the NIHSS for ADL outcome after ischemic hemispheric stroke: does timing of early assessment matter? Background And Purpose: Early prediction of future functional abilities is important for stroke management. The objective of the present study was to investigate the predictive value of the 13-item National Institutes of Health Stroke Scale (NIHSS), measured within 72 h after stroke, for the outcome in terms of activities of daily living (ADL) 6 months post stroke. The second aim was to examine if the timing of NIHSS assessment during the first days post stroke affects the accuracy of predicting ADL outcome 6 months post stroke. Methods: Baseline characteristics including neurological deficits were measured in 188 stroke patients, using the 13-item NIHSS, within 72 h and at 5 and 9 days after a first-ever ischemic hemispheric stroke. Outcome in terms of ADL dependency was measured with the Barthel Index (BI) at 6 months post stroke. The area under the curve (AUC) from the receiver operating characteristic (ROC) was used to determine the discriminative properties of the NIHSS at days 2, 5 and 9 for outcome of the BI. In addition, at the optimal cut-off, the odds ratio (OR), sensitivity, specificity, and positive (PPV) and negative predictive values (NPV) for the different moments of NIHSS assessment post stroke were calculated. Results: One hundred and fifty-nine of the 188 patients were assessed at a mean of 2.2 (1.3), 5.4 (1.4) and 9.0 (1.8) days after stroke.
Significant Spearman rank correlation coefficients were found between BI at 6 months and NIHSS scores on days 2 (r(s)=0.549, p<0.001), 5 (r(s)=0.592, p<0.001) and 9 (r(s)=0.567, p<0.001). The AUC ranged from 0.789 (95%CI, 0.715-0.864) for measurements on day 2 to 0.804 (95%CI, 0.733-0.874) and 0.808 (95%CI, 0.739-0.877) for days 5 and 9, respectively. Odds ratios ranged from 0.143 (95%CI, 0.069-0.295) for assessment on day 2 to a maximum of 0.148 (95%CI, 0.073-0.301) for day 5. The NPV gradually increased from 0.610 (95%CI, 0.536-0.672) for assessment on day 2 to 0.679 (95%CI, 0.578-0.765) for day 9, whereas the PPV declined from 0.810 (95%CI, 0.747-0.875) for assessment on day 2 to 0.767 (95%CI, 0.712-0.814) for day 9. The overall accuracy of predictions increased from 71.7% for assessment on day 2 to 73.6% for day 9. Conclusions: When measured within 9 days, the 13-item NIHSS is highly associated with final outcome in terms of BI at 6 months post stroke. The moment of assessment beyond 2 days post stroke does not significantly affect the accuracy of prediction of ADL dependency at 6 months. The NIHSS can therefore be used at acute hospital stroke units for early rehabilitation management during the first 9 days post stroke, as the accuracy of prediction remained about 72%, irrespective of the moment of assessment. abstract_id: PUBMED:33984006 Stroke outcome assessment: Optimizing cutoff scores for the Longshi Scale, modified Rankin Scale and Barthel Index. The Longshi Scale, a visual-based scale, is reliable and valid in activity assessment but lacks cutoff definitions corresponding to classical scales such as the modified Rankin Scale and Barthel Index. Therefore, this study aimed to investigate the relationships of the Longshi Scale with the modified Rankin Scale and Barthel Index and to optimize cutoff scores of these scales in stroke outcome assessment. This is a cross-sectional study. Stroke patients were measured concurrently by the Longshi Scale, modified Rankin Scale and Barthel Index. The Kruskal-Wallis test and Spearman correlation analysis were used to analyze the differences and associations among the three scales. Receiver operating characteristic curve analysis was performed to determine the optimal cutoff scores. A total of 5475 stroke patients (67.3% ischemic) were included in this study. There are close relationships of the Longshi Scale with the adjusted modified Rankin Scale and Barthel Index (r = -0.861, 0.922; p<0.001, <0.001; respectively). The activity levels assessed by the adjusted modified Rankin Scale and Barthel Index among different Longshi Scale grades were significantly different (χ2: 4217.27, 4676.55; p<0.001, <0.001; respectively). The optimal cutoff scores were adjusted modified Rankin Scale 4, 3, 3, 3, 2 for Longshi Scale grades 2 to 6 (sensitivity%: 96.12, 70.24, 89.10, 96.80, 86.23; specificity%: 72.72, 98.29, 93.81, 79.82, 92.89, respectively), and Barthel Index 15, 45, 60, 75, 80 for Longshi Scale grades 2 to 6 (sensitivity%: 92.54, 89.28, 91.32, 90.30, 95.65; specificity%: 95.48, 89.51, 94.02, 90.41, 90.62, respectively). In conclusion, the classification of the Longshi Scale is consistent with those of the modified Rankin Scale and Barthel Index. We recommend the Longshi Scale as an effective supplement to the modified Rankin Scale and Barthel Index in assessing the outcome in acute stroke patients. abstract_id: PUBMED:26039534 Effects of candesartan in acute stroke on activities of daily living and level of care at 6 months.
Objective: The Scandinavian Candesartan Acute Stroke Trial (SCAST) indicated that blood pressure-lowering treatment with candesartan in the acute phase of stroke has a negative effect on functional outcome at 6 months, measured by the modified Rankin scale. We wanted to see if similar effects could be observed on activities of daily living and level of care. Methods: SCAST was an international multicentre, randomized and placebo-controlled trial of candesartan in 2029 patients recruited within 30 h of acute ischaemic or haemorrhagic stroke. Treatment lowered blood pressure by 5/2 mmHg from day 2 onwards, and was administered for 7 days. At 6 months, activities of daily living were assessed by the Barthel index, and categorized as 'dependency' (≤55 points), 'assisted dependency' (60-90), or 'independency' (≥95). Level of care was categorized as 'living at own home without public help', 'living at home with public help, or in institution for rehabilitation', or 'living in institution for long or permanent stay'. We used ordinal and binary logistic regression for statistical analysis, and adjusted for predefined key variables. Results: Data were available in 1825 patients, of whom 1559 (85%) had ischaemic and 247 (13%) had haemorrhagic stroke. There were no statistically significant effects of candesartan on the Barthel index or on level of care (adjusted common odds ratio for poor outcome 1.09, 95% confidence interval 0.88-1.35, P = 0.44; and odds ratio 1.05, 95% confidence interval 0.82-1.34, P = 0.69, respectively). In the individual Barthel index domains, there were also no statistically significant differences. Conclusion: Blood pressure-lowering treatment with candesartan had no beneficial effect on activities of daily living and level of care at 6 months. This result is compatible with the results of the main analysis of the modified Rankin scale, and supports the conclusion that there is no indication for routine blood pressure treatment with candesartan in the acute phase of stroke. abstract_id: PUBMED:37579640 Tele-rehabilitation on independence in activities of daily living after stroke: A Matched Case-Control Study. Objectives: To compare independence in activities of daily living (ADLs) in post-acute patients with stroke following tele-rehabilitation and matched in-person controls. Materials And Methods: Matched case-control study. A total of 35 consecutive patients with stroke who followed tele-rehabilitation were compared to 35 historical in-person patients (controls) matched for age, functional independence at admission and time since injury to rehabilitation admission (<60 days). The tele-rehabilitation group was also compared to the complete cohort of historical controls (n=990). Independence in ADLs was assessed using the Functional Independence Measure (FIM) and the Barthel Index (BI). We formally compared FIM and BI gains calculated as discharge score - admission score, efficiency measured as gains / length of stay, and effectiveness defined as (discharge score - admission score) / (maximum score - admission score). We analyzed the minimal clinically important difference (MCID) for FIM and BI. Results: The groups showed no significant differences in type of stroke (ischemic or hemorrhagic), location, severity, age at injury, length of stay, body mass index, diabetes, dyslipidemia, hypertension, aphasia, neglect, affected side of the body, dominance or educational level.
The groups showed no significant differences in gains, efficiency, or effectiveness using either the FIM or the Barthel Index. We identified significant differences in two specific BI items (feeding and transfer) in favor of the in-person group. No differences were observed in the proportion of patients who achieved the MCID. Conclusions: No significant differences were seen between total ADL scores for tele-rehabilitation and in-person rehabilitation. Future research studies should analyze a combined rehabilitation approach that utilizes both models. abstract_id: PUBMED:21447954 Impact of white matter changes on activities of daily living in mild to moderate dementia. The association between white matter changes and activities of daily living (ADL) in a large, well-defined cohort of patients with mild-to-moderate dementia (either Alzheimer's disease or subcortical vascular dementia) was investigated. A total of 289 patients were divided into three groups (140 mild, 99 moderate, and 50 severe) depending on the degree of white matter changes as indicated on brain magnetic resonance image scans. Further, we analyzed the three groups' performances on basic and instrumental ADL. The degree of white matter changes was associated with greater age, hypertension, previous history of stroke, higher Hachinski Ischemic Score, worse global cognitive and functional status, and increased impairment of basic ADL and instrumental ADL. The severe group's increased impairment on both basic and instrumental ADL remained significant after adjustment for age and hypertension. Tasks involving physical activities were affected most significantly. This was the first study investigating the association between white matter changes and ADL in a large, well-defined dementia cohort. The present study suggests that severe white matter changes may be associated with higher impairment on both basic and instrumental ADL. abstract_id: PUBMED:32912554 Effect of Care Capacity on Stroke Patients' Recovery in Activities of Daily Living: A Multi-Hospital Study. Background: This study aimed to investigate whether care capacity for patients following stroke contributes to improved activities of daily living (ADL) at discharge from hospital, according to the degree of stroke severity. Design: Retrospective, observational, longitudinal study. Setting: Acute phase hospitals. Participants: From 2005 to 2011, 5006 patients with stroke at acute phase hospitals were registered in a database. There were 2501 individuals from 11 hospitals who met the following four criteria: (1) a pre-stroke modified Rankin Scale (mRS) score of 0-3; (2) admission to hospital within 7 days of suffering a stroke; (3) ischemic or hemorrhagic stroke; and (4) staying in hospital for 8-60 days. Main Outcome Measures: The main outcome measure was the Functional Independence Measure (FIM, version 3.0) score at discharge. The FIM is an internationally used scale that is used to assess ADL outcome after rehabilitation. Results: Among patients with stroke, those with care capacity had higher FIM scores at hospital discharge than did those without care capacity (unstandardized coefficient = 2.3, P = 0.015). Examination of this relationship by stroke severity showed that the FIM score at discharge was only significantly higher in patients who suffered from a moderate to severe stroke (unstandardized coefficient = 7.0, P = 0.040). No associations were observed in patients who suffered from minor, moderate, or severe stroke.
Conclusions: These results suggest that care capacity facilitates recovery of the total FIM score, especially among patients who suffer from a moderate to severe stroke. abstract_id: PUBMED:16082082 Ischemic lesion volume correlates with long-term functional outcome and quality of life of middle cerebral artery stroke survivors. Purpose: Previous studies investigating relationships between stroke lesion volume and outcome were restricted to short follow-up periods (3-6 months) and to outcome measures of stroke severity and activities only, whereas functional improvement has been found to extend far beyond six months. Therefore, this study investigated relationships between infarct volume and a broad range of outcomes of stroke survivors at a long follow-up period. Methods: Correlations between lesion volumes (determined by conventional MRI scans in the second week post-stroke) and outcomes after one year were investigated in 75 first-ever ischemic stroke survivors. Results: Moderate Spearman rank correlation coefficients were found between lesion volume and motor impairment (Motricity Index (MI): -0.43, p < 0.01; Fugl-Meyer Motor Assessment Scale (FM): -0.43; p < 0.01). Correlations with activities of daily living were low to moderate: Barthel Index (r(s) = 0.30; p < 0.01), modified Rankin Scale (r(s) = 0.39; p < 0.01) and Frenchay Activities Index (r(s) = -0.35; p < 0.01). Lesion volume had a significant but low association (r(s) = 0.27; p = 0.02) with patients' health status measured with the Sickness Impact Profile 68 (SIP68) and a moderate correlation with well-being assessed with the Life Satisfaction Questionnaire (LSQ; r(s) = -0.45; p < 0.01). The correlation coefficients found were slightly stronger after correction for mixed (cortical/subcortical) and purely subcortical lesion location. Conclusions: It can be concluded that infarct volume moderately correlates with long-term motor impairment, functional outcome and quality of life of patients surviving stroke. abstract_id: PUBMED:35954971 Changes in Long-Term Functional Independence in Patients with Moderate and Severe Ischemic Stroke: Comparison of the Responsiveness of the Modified Barthel Index and the Functional Independence Measure. This study investigated the long-term functional changes in patients with moderate-to-severe ischemic stroke. In addition, we investigated whether there was a difference between the modified Barthel Index (MBI) and the Functional Independence Measure (FIM) according to severity. To evaluate the changes in the long-term functional independence of the subjects, six evaluations were conducted over 2 years using the MBI and FIM. A total of 798 participants took part in this study, of whom 673 were classified as moderate and 125 as severe. During the first 3 months, the moderate group showed greater recovery than the severe group. The period of significant change in the National Institutes of Health Stroke Scale (NIHSS) score was up to 6 months after onset in the moderate group, and up to 3 months after onset in the severe group. In the severe group, MBI evaluation showed significant changes up to 6 months after onset, whereas FIM showed significant changes up to 18-24 months. Our results showed that functional recovery of patients with ischemic stroke in the 3 months after onset was greater in the moderate group than in the severe group. FIM is more appropriate than MBI for evaluating the functional status of patients with severe stroke.
abstract_id: PUBMED:12971703 No benefit from D-amphetamine when added to physiotherapy after stroke: a randomized, placebo-controlled study. Objective: To assess the effect of D-amphetamine on the recovery of activities of daily living and motor functions after stroke. Design: Randomized, placebo-controlled study. Setting: Inpatient rehabilitation centre. Subjects: Twenty-four stroke survivors after a first ischaemic supratentorial stroke within six weeks before study onset, severely to moderately affected, with a Barthel Index (0-100) ranging from 25 to 50, no severe concomitant internal, neurological or psychiatric diseases, and participating in a comprehensive rehabilitation programme of 10-12 weeks. Interventions: Ten sessions with 10 mg D-amphetamine (or placebo) every fourth day, totalling 100 mg over a period of 36 days, combined with physical therapy according to the neurodevelopmental concept within 60 minutes after drug intake. Main Outcome Measures: The Barthel Index (0-100) served as the primary outcome measure and the Rivermead Motor Assessment Score with its three sections (gross function, leg and trunk, and arm) as the secondary outcome measure, assessed at days 0, 20, 36, 90, 180 and 360. Results: The two groups did not differ with respect to clinical data and outcome measures at study onset. All patients improved significantly, except for arm function, over the intervention period and up to day 90 after study onset. The comparison between groups did not reveal any difference at any time; amphetamine-treated patients did not show any increase in motor function or ADL compared with the control group. Conclusions: This placebo-controlled study failed to show any effect of D-amphetamine on stroke recovery compared with control. The small number of patients and the timing and content of physical therapy were limiting factors of the present study. Further trials are warranted. Answer: The diagnostic accuracy of the Barthel Index (BI) for measuring activities of daily living (ADL) outcomes after ischemic hemispheric stroke is influenced by the timing of the assessment poststroke. A study that investigated the predictive value of the BI when measured within 72 hours after stroke found that the BI showed good discriminative properties for final ADL outcomes at 6 months. However, the accuracy of predicting ADL outcomes improved when the BI was measured later. Specifically, Day 5 poststroke was identified as the earliest time for making an optimal prediction of final ADL outcomes. The area under the curve (AUC) for the BI's predictive value increased significantly from Day 2 to Days 5 and 9, indicating that the timing of the assessment does affect the accuracy of the BI in predicting ADL outcomes at 6 months. The positive predictive value (PPV) of the BI also increased from Day 2 to Day 9, while the negative predictive value (NPV) declined over the same period (PUBMED:21183748). Another study on the predictive value of the National Institutes of Health Stroke Scale (NIHSS) for ADL outcomes found that the NIHSS is highly associated with final BI outcomes at 6 months poststroke when measured within 9 days. The accuracy of prediction did not significantly change beyond 2 days poststroke, suggesting that the NIHSS can be used for early rehabilitation management during the first 9 days poststroke with consistent accuracy (PUBMED:20439108).
In summary, the timing of early poststroke assessment of the BI does matter for predicting ADL outcomes after ischemic hemispheric stroke, with later assessments within the first week poststroke providing more accurate predictions.
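As a methodological aside, the ROC workflow behind the Barthel Index and NIHSS results above (AUC, an optimal cutoff, sensitivity, specificity, PPV and NPV) can be sketched in a few lines. This is a hedged illustration on synthetic data: the cutoff it selects maximizes Youden's J on simulated scores and is not expected to reproduce the study's cutoff of 7.

```python
# Minimal sketch with synthetic data: the ROC workflow behind the Barthel
# Index results in PUBMED:21183748 (AUC, an optimal cutoff, sensitivity,
# specificity, PPV and NPV). Scores and outcomes below are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 206  # number of patients in the abstract

# Hypothetical early BI scores (0-20) and 6-month outcome (1 = BI >= 19)
bi_early = np.clip(rng.normal(10, 5, n).round(), 0, 20)
outcome = (bi_early + rng.normal(0, 4, n) > 12).astype(int)

auc = roc_auc_score(outcome, bi_early)
fpr, tpr, thresholds = roc_curve(outcome, bi_early)
cutoff = thresholds[np.argmax(tpr - fpr)]  # Youden's J = sens + spec - 1

pred = (bi_early >= cutoff).astype(int)
tp = int(np.sum((pred == 1) & (outcome == 1)))
fp = int(np.sum((pred == 1) & (outcome == 0)))
tn = int(np.sum((pred == 0) & (outcome == 0)))
fn = int(np.sum((pred == 0) & (outcome == 1)))

print(f"AUC={auc:.3f}  cutoff={cutoff:.0f}")
print(f"sens={tp / (tp + fn):.3f}  spec={tn / (tn + fp):.3f}")
print(f"PPV={tp / (tp + fp):.3f}  NPV={tn / (tn + fn):.3f}")
```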
Instruction: Does postoperative blood pressure influence development of aortic regurgitation following continuous-flow left ventricular assist device implantation? Abstracts: abstract_id: PUBMED:26108216 Does postoperative blood pressure influence development of aortic regurgitation following continuous-flow left ventricular assist device implantation? Objectives: The true impact of postoperative blood pressure (BP) control on development of aortic regurgitation (AR) following continuous-flow left ventricular assist device (CF-LVAD) implantation remains uncertain. This study examines the influence of BP in patients with de novo AR following CF-LVAD implantation. Methods: All patients with no or <mild AR who underwent CF-LVAD implantation from July 2006 to July 2012 at our institute and with subsequent device support of ≥3 months (n = 90) were studied. Serial echocardiograms and BP readings were obtained preoperatively, postoperatively at 1, 3 and 6 months, and then at a minimum of 4-monthly intervals. BP readings were compared between patients who developed mild AR (AR group) and those who did not (non-AR group). Logistic regression analysis was used to define independent predictors of ≥mild AR following CF-LVAD implantation. Results: Median duration of CF-LVAD support was 575 days (range: 98-2433 days). Forty-eight patients (53.3%) developed mild AR over a median duration of 126 days. BP readings (median values, mmHg) between the AR and non-AR groups showed statistically significant differences: at 3 months, systolic 99.5 vs 92.5 (P = 0.038), diastolic 81.5 vs 66 (P < 0.001), and mean 86.5 vs 74 (P < 0.001); and at 6 months, diastolic 73 vs 62 (P = 0.044) and mean 83 vs 74.5 (P = 0.049), respectively. Systolic BP at 3 months (P = 0.047, 95% CI 0.453-0.994, OR 0.671), aortic valve (AoV) closure (P = 0.01, 95% CI 0.002-0.429, OR 0.029) and duration of support (P = 0.04, 95% CI 1.000-1.009, OR 1.004) were found to be independent predictors of AR following CF-LVAD implantation. Conclusions: BP readings at 3 months and 6 months showed a statistically significant association with ≥mild AR following CF-LVAD implantation, with systolic blood pressure at 3 months, aortic valve closure and longer support duration being independent predictors. The clinical implications of these data may warrant consideration of aggressive early control of BP to protect against the development/progression of AR following CF-LVAD implantation. abstract_id: PUBMED:31852537 Pulsatile arterial blood pressure mimicking aortic valve opening during continuous-flow LVAD support: a case report. Background: Left ventricular assist devices (LVAD) have become a common treatment option in advanced heart failure. Lack of aortic valve opening during left ventricular unloading is a common complication and is associated with a worse outcome. Maintaining a minimum pulse pressure is an important goal during the early postoperative period after LVAD implantation since it is commonly seen as a secure sign of aortic valve opening. Aims/objective: We report a case of an LVAD-supported patient with early permanent closure of the aortic valve despite a pulse pressure > 15 mmHg at all times following LVAD implantation. We demonstrate how careful assessment of the invasive arterial blood pressure curve can indicate aortic valve closure irrespective of pulsatile blood flow. Method: A 69-year-old male patient with terminal ischemic cardiomyopathy was referred for long-term mechanical circulatory support.
Due to mild aortic regurgitation, both an aortic bioprosthesis and a continuous-flow left ventricular assist device were implanted. Postoperative echocardiography documented a patent aortic bioprosthesis and acceptable residual systolic left ventricular contractility. During invasive arterial blood pressure monitoring, repetitive transient slight blood pressure decreases followed by slight blood pressure increases coincided with programmed LVAD flushing cycles. Permanent pulsatile flow with a pulse pressure of ≥15 mmHg suggested systolic opening of the aortic valve. Echocardiography, however, proved early permanent aortic valve closure. In retrospect, transformation of the automated LVAD flushing cycles into visible changes of the arterial blood pressure curve during invasive blood pressure monitoring is indicative of ejection of the complete cardiac output through the LVAD itself, and is therefore an early clinical sign of aortic valve closure. Discussion/conclusion: We present this didactic case to highlight caveats during the early postoperative period after LVAD implantation. Moreover, this case demonstrates that careful and differentiated observation of the arterial blood pressure waveform provides crucial information in this unique and growing patient population on continuous-flow LVAD support. abstract_id: PUBMED:32080864 Impact of turbulent blood flow in the aortic root on de novo aortic insufficiency during continuous-flow left ventricular-assist device support. Severe aortic insufficiency (AI) after implantation of a continuous-flow left ventricular-assist device (LVAD) affects device performance and outcomes. However, the mechanism for the occurrence and progression of AI has not been elucidated. We investigated the impact of nonphysiological retrograde blood flow in the aortic root on AI after LVAD implantation. Blood flow pattern was analyzed in patients with and without AI (n = 3 each) who underwent LVAD implantation, by computational fluid dynamics with patient-specific geometries, which were reproduced using electrocardiogram-gated 320-slice computed tomographic images. The total volume of retrograde blood flow during one cardiac cycle (716 ± 88 mL) was higher and the volume of slow blood flow (<0.1 cm/s) (0.16 ± 0.04 cm3) was lower in patients with AI than in those without AI (360 ± 111 mL, P = .0495, and 0.49 ± 0.08 cm3, P = .0495, respectively). No significant difference in wall shear stress on the aortic valve was observed between the groups. Patients with AI had a perpendicular anastomosis at the distal ascending aorta, and the simulation in the modified anastomosis model of patients with AI showed that the retrograde blood flow pattern depended on the angle and position of the anastomosis. Computational fluid dynamics revealed strong retrograde blood flow in the ascending aorta and aortic root in patients with AI after LVAD implantation. The angle and position of the LVAD outflow anastomosis might impact retrograde blood flow and de novo AI after LVAD implantation. abstract_id: PUBMED:25870369 Blood pressure and adverse events during continuous flow left ventricular assist device support. Background: Adverse events (AEs), such as intracranial hemorrhage, thromboembolic event, and progressive aortic insufficiency, create substantial morbidity and mortality during continuous flow left ventricular assist device support, yet their relation to blood pressure control is underexplored.
Methods And Results: A multicenter retrospective review of patients supported for at least 30 days and ≤18 months by a continuous flow left ventricular assist device from June 2006 to December 2013 was conducted. All outpatient Doppler blood pressure (DOPBP) recordings were averaged up to the time of intracranial hemorrhage, thromboembolic event, or progressive aortic insufficiency. DOPBP was analyzed as a categorical variable grouped as high (>90 mm Hg; n=40), intermediate (80-90 mm Hg; n=52), and controlled (<80 mm Hg; n=31). Cumulative survival free from an AE was calculated using Kaplan-Meier curves and Cox hazard ratios were derived. Patients in the high DOPBP group had worse baseline renal function, lower angiotensin-converting enzyme inhibitor or angiotensin receptor blocker usage during continuous flow left ventricular assist device support, and a more prevalent history of hypertension. Twelve (30%) patients in the high DOPBP group had an AE, in comparison with 7 (13%) patients in the intermediate DOPBP group and only 1 (3%) in the controlled DOPBP group. The likelihood of an AE increased in patients with a high DOPBP (adjusted hazard ratios [95% confidence interval], 16.4 [1.8-147.3]; P=0.012 versus controlled and 2.6 [0.93-7.4]; P=0.068 versus intermediate). Overall, a similar association was noted for the risk of intracranial hemorrhage (P=0.015) and progressive aortic insufficiency (P=0.078) but not for thromboembolic event (P=0.638). Patients with an AE had a higher DOPBP (90±10 mm Hg) in comparison with those without an AE (85±10 mm Hg; P=0.05). Conclusions: In a population at risk, higher DOPBP during continuous flow left ventricular assist device support was significantly associated with a composite of AEs. abstract_id: PUBMED:32998831 The impact of uncorrected mild aortic insufficiency at the time of left ventricular assist device implantation. Objective: The study objective was to investigate the progression of uncorrected mild aortic insufficiency and its impact on survival and functional status after left ventricular assist device implantation. Methods: We retrospectively reviewed 694 consecutive patients who underwent implantation of a continuous-flow left ventricular assist device between January 2006 and March 2018. Pre-left ventricular assist device transthoracic echocardiography identified 111 patients with mild aortic insufficiency and 493 patients with trace or no aortic insufficiency. To adjust for differences in preoperative factors, propensity score matching was used, resulting in 101 matched patients in each of the mild aortic insufficiency and no aortic insufficiency groups. Results: Although both groups showed similar survival (P = .58), the mild aortic insufficiency group experienced higher incidence of readmission caused by heart failure (hazard ratio, 2.62; 95% confidence interval, 1.42-4.69; P < .01). By using the mixed effect model, pre-left ventricular assist device mild aortic insufficiency was a significant risk factor for both moderate or greater aortic insufficiency and worsening New York Heart Association functional status (P < .01). Conclusions: Patients with uncorrected mild aortic insufficiency had a higher risk of progression to moderate or greater aortic insufficiency after left ventricular assist device implantation with worse functional status and higher incidence of readmission caused by heart failure compared with patients without aortic insufficiency. 
Further investigations into the safety and efficacy of concomitant aortic valve procedures for mild aortic insufficiency at the time of left ventricular assist device implantation are warranted to improve patients' quality of life, considering the longer left ventricular assist device use as destination therapy and bridge to transplant under the new US heart allocation system. abstract_id: PUBMED:26084757 Influence of aortic valve opening in patients with aortic insufficiency after left ventricular assist device implantation. Objectives: Aortic valve insufficiency (AI) following left ventricular assist device (LVAD) implantation can potentially limit the success of mechanical circulatory support. We examined the prevalence of significant AI in the new generation of LVADs and analysed the role of aortic valve opening in the development of AI in these patients. Methods: A total of 102 patients who received a continuous flow LVAD (cf-LVAD) between July 2009 and December 2013 and are being treated in our outpatient clinic with a HeartWare ventricular assist device (HVAD) (n = 77) or HeartMate II (HMII, n = 25) were included and analysed in a retrospective study. The mean age of the 12 female and 90 male patients was 54 ± 12 years. Ischaemic cardiomyopathy was diagnosed in 40% of patients. Echocardiographic measurements were reviewed before and after implantation. AI was considered significant if it was more than mild. Results: Mean LVAD support duration was 572 ± 437 days. Significant AI was found in 32 patients (31.4%). De novo AI occurred on average after 183 ± 246 days of support. One patient presented with severe AI and received an aortic valve replacement. A permanently closed aortic valve correlated with a greater prevalence of AI when compared with intermittent or complete opening of the aortic valve (P = 0.004). The aetiology of the cardiomyopathy and the type of device had no influence on the development of AI. A smaller left ventricular end-diastolic diameter and a lower body surface area were significantly associated with the development of aortic insufficiency in our cohort. Conclusion: Aortic insufficiency has a high prevalence during continuous flow assist device support. Echocardiographic parameters are an integral part of the ambulatory care of these patients and can guide the optimal setting for the LVAD. An aortic valve that does not open should be avoided in order to prevent AI. Patients with HMII or HVAD did not show any differences in the prevalence of aortic insufficiency. abstract_id: PUBMED:28483150 De Novo Aortic Regurgitation After Continuous-Flow Left Ventricular Assist Device Implantation. Continuous-flow left ventricular assist devices (cf-LVAD) play an important role in the management of patients with advanced heart failure. De novo aortic regurgitation after cf-LVAD implantation may adversely impact device performance. We performed a systematic search of PubMed, SCOPUS, and the Cochrane Library for articles reporting on the incidence and predictors of de novo aortic regurgitation among cf-LVAD recipients. Eight studies totaling 548 patients were identified in the meta-analysis. The pooled incidence of de novo AR across the analyzed studies was 37%. Factors influencing its development and progression are older age, persistent aortic valve closure, being female, and duration of cf-LVAD support. abstract_id: PUBMED:35737119 Impact of progressive aortic regurgitation on outcomes after left ventricular assist device implantation.
Aortic regurgitation (AR) following continuous flow left ventricular assist device (cf-LVAD) implantation may adversely impact outcomes. We aimed to assess the incidence and impact of progressive AR after cf-LVAD implantation on prognosis, biomarkers, functional capacity and echocardiographic findings. In an analysis of the PCHF-VAD database encompassing 12 European heart failure centers, patients were dichotomized according to the progression of AR following LVAD implantation. Patients with de novo AR or AR progression (AR_1) were compared to patients without worsening AR (AR_0). Among 396 patients (mean age 53 ± 12 years, 82% male), 153 (39%) experienced progression of AR over a median of 1.4 years on LVAD support. Before LVAD implantation, AR_1 patients were less frequently diabetic, had lower body mass indices and had higher baseline NT-proBNP values. Progressive AR did not adversely impact mortality (26% in both groups, HR 0.91 [95% CI 0.61-1.36]; P = 0.65). No intergroup variability was observed in NT-proBNP values or 6-minute walk test results at index hospitalization discharge and at 6-month follow-up. However, AR_1 patients were more likely to remain in NYHA class III and had worse right ventricular function at 6-month follow-up. Lack of aortic valve opening was related to de novo or worsening AR (P < 0.001), irrespective of systolic blood pressure (P = 0.67). Patients commonly experience de novo or worsening AR when exposed to the continuous flow of contemporary LVADs. While reducing effective forward flow, worsening AR did not influence survival. However, less complete functional recovery and worse RV performance were observed among AR_1 patients. Lack of aortic valve opening was associated with progressive AR. abstract_id: PUBMED:25653250 Evaluation of late aortic insufficiency with continuous flow left ventricular assist device. Objectives: The aim of this study was to evaluate the late development of aortic insufficiency (AI) with a continuous flow left ventricular assist device (CLVAD). Development of AI is an increasingly recognized and important complication of CLVAD therapy, but there are still few reports on this topic. Methods: We analysed data from 99 patients who underwent CLVAD implantation. De novo AI was defined as the development of mild or greater AI in patients with no or trace preoperative AI. Anatomic and functional correlates of de novo AI were investigated. Results: Among the 17 patients with preoperative mild AI, no improvements were observed in mitral regurgitation or LV end-systolic dimension. Of the remaining 82 patients, de novo AI was identified in 43 patients (52%) on the most recent follow-up echocardiography, and did not influence survival or improvement of LV geometry. The rate of freedom from de novo AI at 1 year after CLVAD implantation was 35.9%. Development of significantly greater AI was observed in patients without valve opening (AI grade 1.3 ± 1.0 vs 0.7 ± 0.9; P = 0.005). In a multivariate Cox hazards model, smaller body surface area (BSA) (hazard ratio: 0.83 [95% CI: 0.72-0.97], P = 0.018), larger aortic root diameter (AOD) (hazard ratio: 1.11 [95% CI: 1.02-1.22], P = 0.012) and higher pulmonary artery systolic pressure (PASP) (hazard ratio: 1.24 [95% CI: 1.10-1.41], P < 0.001) were identified as independent preoperative risk factors for de novo AI. In a subset of patients with speed adjustments, an increase of CLVAD speed worsened AI and led to insufficient LV unloading in patients with aortic dilatation (AOD ≥ 3.5 cm).
Conclusion: No significant mortality difference related to preoperative AI or to the development of post-implant AI was found. AI was associated with changes in LV size, and there appears to be an interaction between BSA, preoperative PASP, time since implant, aortic valve opening, aortic size and the development of AI. Longitudinal clinical management of CLVAD patients, particularly in terms of CLVAD speed optimization, should include careful assessment. abstract_id: PUBMED:23246052 Durability of central aortic valve closure in patients with continuous flow left ventricular assist devices. Background: A competent aortic valve is essential to providing effective left ventricular assist device support. We have adopted a practice of central aortic valve closure by placing a simple coaptation stitch at left ventricular assist device implantation in patients with significant aortic insufficiency. We conducted a follow-up study to evaluate the efficacy and durability of this procedure. Methods: The study included patients who had undergone continuous flow left ventricular assist device implantation. The patients were divided into 2 groups: those who did not require any aortic procedure because the valve was competent and those who underwent central aortic valve closure for mild or greater aortic regurgitation. The clinical endpoints were mortality, progression or recurrence of aortic insufficiency, and reoperation for aortic valve pathologic features. Aortic insufficiency was measured qualitatively from mild to severe on a scale of 0 to 5. Results: A total of 123 patients received continuous flow left ventricular assist devices from February 2007 to August 2011. Of those, 18 (15%) underwent central aortic valve closure at left ventricular assist device implantation because of significant aortic insufficiency (1.8 ± 1.4) and 105 did not (competent aortic valve, 0.15 ± 0.43; P < .01). At follow-up (median, 312 days; range, 0-1429 days), the mean aortic insufficiency score remained low for the patients with central aortic valve closure (0.27 ± 0.46), in contrast to those without central aortic valve closure, who experienced aortic insufficiency progression (0.78 ± 0.89; P = .02). In addition, the proportion of patients with more than mild aortic insufficiency was significantly lower in the central aortic valve closure group (0% vs 18%; P = .05). The patients in the central aortic valve closure group were significantly older and had a greater incidence of renal failure at baseline. The 30-day mortality was greater in the central aortic valve closure group, but late survival was similar between the 2 groups. No reoperations were required for recurrent aortic insufficiency. Conclusions: The results of our study have shown that repair of aortic insufficiency with a simple central coaptation stitch is effective and durable in left ventricular assist device-supported patients, with follow-up extending into 2 years. Although aortic insufficiency progressed over time in those with minimal native valve regurgitation initially, no such progression was noted in those with central aortic valve closure. Additional investigation is needed to evaluate whether prophylactic central aortic valve closure should be performed at left ventricular assist device implantation to avoid problematic aortic regurgitation developing over time, in particular in patients undergoing left ventricular assist device implantation for life-long (destination therapy) support.
Answer: Yes, postoperative blood pressure does influence the development of aortic regurgitation (AR) following continuous-flow left ventricular assist device (CF-LVAD) implantation. A study found that blood pressure readings at 3 months and 6 months postoperatively showed a statistically significant association with the development of mild or greater AR following CF-LVAD implantation. Specifically, systolic blood pressure at 3 months, aortic valve closure, and longer support duration were identified as independent predictors of AR following CF-LVAD implantation (PUBMED:26108216). This suggests that aggressive early control of blood pressure may be important to protect against the development or progression of AR after CF-LVAD implantation. Additionally, other studies have highlighted the importance of aortic valve opening in preventing AR. A permanently closed aortic valve has been correlated with a greater prevalence of AR compared to intermittent or complete opening of the aortic valve (PUBMED:26084757). Moreover, lack of aortic valve opening has been associated with de novo or worsening AR, irrespective of systolic blood pressure (PUBMED:35737119). Furthermore, the development of AR has been linked to factors such as older age, persistent aortic valve closure, being female, and duration of CF-LVAD support (PUBMED:28483150). The progression of AR after CF-LVAD implantation can impact outcomes, including survival, functional capacity, and echocardiographic findings, and may necessitate additional interventions such as aortic valve procedures (PUBMED:32998831). In summary, postoperative blood pressure management and ensuring aortic valve opening are important considerations in the care of patients with CF-LVADs to mitigate the risk of developing AR.
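Note: the abstracts above report multivariable models (logistic and Cox regression) relating valve opening, blood pressure and support duration to AR risk. As a minimal sketch of that kind of analysis, the following Python snippet fits a logistic regression on synthetic data; every variable name, coefficient and count below is an assumption made for illustration, not a value from the cited studies.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
# Hypothetical covariates for an LVAD cohort (names and ranges are assumptions).
sbp_3mo = rng.normal(95.0, 10.0, n)        # systolic blood pressure at 3 months, mmHg
av_closed = rng.integers(0, 2, n)          # 1 = aortic valve permanently closed
support_years = rng.uniform(0.1, 3.0, n)   # duration of LVAD support, years

# Assumed data-generating model: closed valve, higher SBP and longer support raise AR risk.
logit = -6.0 + 0.03 * sbp_3mo + 1.2 * av_closed + 0.4 * support_years
ar = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(pd.DataFrame({"sbp_3mo": sbp_3mo,
                                  "av_closed": av_closed,
                                  "support_years": support_years}))
fit = sm.Logit(ar, X).fit(disp=False)
print(np.exp(fit.params))      # odds ratios per unit change in each predictor
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the odds-ratio scale

Exponentiating the fitted coefficients is what turns the log-odds into the odds ratios quoted throughout these abstracts.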
Instruction: Could albumin level explain the higher mortality in hemodialysis patients with pulmonary hypertension? Abstracts: abstract_id: PUBMED:22867112 Could albumin level explain the higher mortality in hemodialysis patients with pulmonary hypertension? Background: The pathogenesis of pulmonary hypertension (PH) in hemodialysis is still unclear. The aim of this study was to identify the risk factors associated with the presence of PH in chronic hemodialysis patients and to verify whether these factors might explain the higher mortality among them. Methods: We conducted a retrospective study of hemodialysis patients who started treatment from August 2001 to October 2007 and were followed up until April 2011 in a Brazilian referral medical school. According to the results of echocardiography examination, patients were allocated into two groups: those with PH and those without PH. Clinical parameters, site and type of vascular access, bioimpedance, and laboratory findings were compared between the groups, and a logistic regression model was constructed. Actuarial survival curves were constructed and the hazard of death was evaluated by Cox regression analysis. Results: PH > 35 mmHg was found in 23 (30.6%) of the 75 patients studied. The groups differed in extracellular water, ventricular thickness, left atrium diameter, and ventricular filling. In a univariate analysis, extracellular water was associated with PH (relative risk = 1.194; 95% CI of 1.006 - 1.416; p = 0.042); nevertheless, in a multiple model, only left atrium enlargement was independently associated with PH (relative risk = 1.172; 95% CI of 1.010 - 1.359; p = 0.036). PH (hazard ratio = 3.008; 95% CI of 1.285 - 7.043; p = 0.011) and age (hazard ratio of 1.034 per year of age; 95% CI of 1.000 - 7.068; p = 0.047) were significantly associated with mortality in a multiple Cox regression analysis. However, when albumin was taken into account, the only statistically significant association was between albumin level and mortality (hazard ratio = 0.342 per g/dL; 95% CI of 0.119 - 0.984; p = 0.047), while the presence of PH lost its statistical significance (p = 0.184). Mortality was higher in patients with PH (47.8% vs 25%), who also had statistically worse survival after the sixth year of follow-up. Conclusions: PH in hemodialysis patients is associated with parameters of volume overload that shed light on its pathophysiology. Mortality is higher in hemodialysis patients with PH, and low albumin level can explain this association. abstract_id: PUBMED:37640012 Development and Validation of a Nomogram for Predicting All-Cause Mortality in Patients with Hemodialysis Having Pulmonary Hypertension. Introduction: Patients with end-stage renal disease receiving hemodialysis (HD) have a high morbidity and mortality rate associated with pulmonary hypertension (PH). A nomogram was developed to predict all-cause mortality in HD patients with PH. In this study, we aimed to validate the usefulness of this nomogram. Methods: A total of 274 HD patients with PH were hospitalized at the Affiliated Hospital of Xuzhou Medical University between January 2014 and June 2019 and followed up for 3 years. Echocardiography detected PH when the peak tricuspid regurgitation velocity (TRV) was more than 2.8 m/s. To evaluate all-cause mortality in long-term HD patients with PH, Cox regression analysis was performed to determine the mortality factors that were included in the prediction model.
Next, the area under the receiver-operating characteristic curve (AUC-ROC) was used to assess the predictive power of the model. Calibration plots and decision curve analysis (DCA) were used to assess the accuracy of the prediction results and the clinical utility of the model. Results: The all-cause mortality rate was 29.20% throughout the follow-up period. The nomogram comprised six commonly available predictors: age, diabetes mellitus, cardiovascular disease, hemoglobin, left ventricular ejection fraction, and TRV. The 1-year, 2-year, and 3-year AUC-ROC values were 0.842, 0.800, and 0.781, respectively. The calibration curves revealed excellent agreement with the nomogram, while the DCA demonstrated favorable clinical practicability. Conclusion: This nomogram, the first developed for predicting all-cause mortality in HD patients with PH, could guide clinical decision-making and intervention planning. abstract_id: PUBMED:27122857 Association of Plasma Thrombospondin-1 Level with Cardiovascular Disease and Mortality in Hemodialysis Patients. Background: Thrombospondin-1 (TSP-1) is known to be involved in the regulation of angiogenesis, inflammation, and vascular function. Clinical studies have demonstrated its correlation with peripheral artery disease, coronary artery disease, and pulmonary hypertension. In this study, we explored its potential roles in the setting of end-stage renal disease (ESRD). Methods: A total of 140 ESRD outpatients (aged 61.0 ± 12.4 years) were prospectively followed for 34 ± 7 months. Their TSP-1 levels were analyzed from pre-hemodialysis blood samples. The cardiovascular survey included ankle-brachial index (ABI), echocardiography and Tl-201 dipyridamole single-photon emission computed tomography (SPECT). Results: Plasma TSP-1 levels were higher in patients with preexisting clinical evidence of cardiovascular disease (CVD) than in those without (p = 0.002). TSP-1 concentrations were also correlated with ABI, left ventricular ejection fraction, and scar burden on SPECT. Stepwise logistic regression analysis revealed that TSP-1 level was independently associated with the presence of CVD, with an odds ratio of 1.38 [95% confidence interval (CI), 1.09-1.75, p = 0.008]. In survival analyses, 31 patients (22%) died during the follow-up, 16 (52%) of them from cardiovascular causes. Cox hazards analysis revealed that patients with TSP-1 levels in the highest tertile had a 5.32- and 6.75-fold higher risk for all-cause and cardiovascular mortality than those in the lowest tertile. This predictive value for all-cause mortality persisted after multivariate adjustment (hazard ratio, 8.71; 95% CI, 1.36-55.68; p = 0.02). Conclusions: This study highlights the association of elevated TSP-1 level with CVD and adverse outcome among hemodialysis patients. abstract_id: PUBMED:19212417 Pulmonary hypertension is an independent predictor of mortality in hemodialysis patients. Pulmonary hypertension in patients with end-stage renal disease on hemodialysis is a newly described entity. To determine its impact, we measured selected clinical variables and survival in 127 hemodialysis patients. Overall, pulmonary hypertension was found in 37 of these patients; it was already prevalent in 17 of them before initiation of dialysis and was associated with severe cardiac dysfunction. In the other 20 it developed after dialysis began, without obvious cause.
These two subgroups of patients had similar survival curves, which were significantly worse in comparison to those without pulmonary hypertension. Following the initiation of hemodialysis, 20 patients with otherwise matched clinical variables survived significantly longer than the 20 who developed pulmonary hypertension after dialysis began. With univariate analysis, significant hazard ratios for predicting mortality were found for age at onset of hemodialysis therapy (1.7), valvular diseases (1.8), and pulmonary hypertension prevalent before hemodialysis (3.6) and incident after hemodialysis (2.4). In a multivariable Cox proportional hazard model, the development of pulmonary hypertension both before and after initiation of hemodialysis had significantly increased hazard ratios and remained an independent predictor of mortality. Our study shows that the incidence of pulmonary hypertension after initiation of hemodialysis therapy is a strong independent predictor of mortality, nearly equal to that associated with long-standing severe cardiac abnormalities. abstract_id: PUBMED:32239335 The influence of cardiac valvular calcification on all-cause and cardiovascular mortality in maintenance hemodialysis patients. Objective: To investigate the effect of cardiac valve calcification (CVC) on all-cause and cardiovascular mortality in maintenance hemodialysis (MHD) patients. Methods: A retrospective cohort study was conducted in 183 long-term hemodialysis patients with complete follow-up data from January 1, 2012, to December 30, 2015. The baseline data between CVC and non-CVC groups were compared. The Kaplan-Meier method was used to analyze all-cause and cardiovascular mortality. The effect of CVC on prognosis was analyzed using the Cox proportional hazard regression model and subgroup analysis. Results: Among the 183 patients under hemodialysis, 104 (56.8%) were male, with an average age of 56.1 ± 17.0 years, and 68 (37.2%) had valvular calcification. The median follow-up period was 30.8 months. All-cause and cardiovascular mortality were 50% vs. 14.8% and 25% vs. 7.0% in the CVC and non-CVC groups, respectively (P < 0.05). Kaplan-Meier analysis indicated that differences in all-cause and cardiovascular mortality were statistically significant between the two groups (P < 0.001). Cox regression analysis showed that CVC significantly increased all-cause (hazard ratio [HR] 2.161 [1.083-4.315]) and cardiovascular mortality (3.435 [1.222-9.651]) after adjusting for multiple factors. Meanwhile, CVC also increased the incidence of new-onset cardiovascular events. Subgroup analysis revealed that all-cause and cardiovascular mortality were significantly higher in patients with aortic valve calcification (AVC) than in patients with mitral valve calcification (MVC). Multivariate adjustment showed that AVC increased the risk of cardiovascular death (HR 5.486 [1.802-16.702]) (P < 0.05), whereas MVC did not. By further comparing the echocardiographic data of the two groups, the incidence of LVH and pulmonary hypertension in the AVC group was significantly higher than that in the MVC group. Conclusion: Valve calcification increases the risk of all-cause and cardiovascular mortality in MHD patients, as well as the incidence of new-onset cardiovascular events, and aortic valve calcification contributes more to the risk of cardiovascular mortality.
Background: Serum albumin is a strong prognostic indicator for many disease processes, yet limited data exist regarding its prognostic relationship in pulmonary arterial hypertension (PAH). Our study aims to assess the relationship of hypoalbuminemia with disease severity and mortality in this population. Hypothesis: Serum albumin concentrations are a predictor of outcomes in PAH. Methods: A retrospective review of all patients with World Health Organization group 1 PAH evaluated between March 2001 and August 2008 was performed. Patients were stratified into groups based on serum albumin concentration ≤3.3 g/dL (hypoalbuminemia) vs >3.3 g/dL. Clinical, hemodynamic, and survival data were compared between groups using the Student t test and χ2 test, followed by univariate analysis and multivariate logistic regression. Results: A total of 163/273 (59.7%) patients had a documented serum albumin concentration. Hypoalbuminemia was present in 41 (25.2%) patients, and serum albumin ≤3.3 g/dL represented the lowest quartile of serum albumin. Patients with hypoalbuminemia had higher rates of renal dysfunction (26.8% vs 9.8%, P = 0.0069) and hepatic dysfunction (29.3% vs 6.6%, P < 0.001), and lower hemoglobin levels (11.6 vs 13.4 g/dL, P < 0.001). Hemodynamic and functional capacity assessments were comparable between groups. Independent predictors of mortality included low albumin levels (hazard ratio [HR]: 0.485, P = 0.008), high right atrial systolic area (HR: 1.062, P = 0.003), low Fick-derived cardiac index (HR: 1.465, P = 0.016), and high New York Heart Association functional class (HR: 1.767, P = 0.042). Patients with hypoalbuminemia demonstrated a significantly lower survival rate at latest follow-up (P = 0.01). Conclusions: Lower serum albumin concentrations in patients with PAH are associated with higher mortality and can serve as a marker of disease severity in this patient population. abstract_id: PUBMED:18310865 Pulmonary hypertension in hemodialysis patients. The aim of this study was to evaluate the prevalence of primary pulmonary hypertension (PHT) among hemodialysis patients and search for possible etiologic factors. The prevalence of PHT was prospectively estimated by Doppler echocardiogram in 62 long-term hemodialysis patients on the day after dialysis. PHT (> 35 mm Hg) was found in 32 (51.6%) patients, with a mean systolic pulmonary artery pressure of 39.6 ± 13.3 mmHg. The hemoglobin and albumin levels were significantly lower in the PHT subgroup (11.1 ± 1.86 vs 9.8 ± 1.97 g/dL and 3.75 ± 0.44 vs 3.38 ± 0.32 g/dL, p = 0.01 and 0.02, respectively). Our study demonstrates a surprisingly high prevalence of PHT among patients receiving long-term hemodialysis. Early detection is important in order to avoid the serious consequences of the disease. abstract_id: PUBMED:35299658 Baseline Ratio of Soluble Fas/FasL Predicts Onset of Pulmonary Hypertension in Elder Patients Undergoing Maintenance Hemodialysis: A Prospective Cohort Study. Background: Pulmonary hypertension (PH) is one of the most common complications associated with end-stage renal disease (ESRD). Though numerous risk factors have been found, other risk factors remain unidentified, particularly in elderly patients undergoing maintenance hemodialysis. Soluble Fas (sFas) and its ligand FasL (sFasL) have been reported in chronic renal disease patients; however, they have not been studied in elderly hemodialysis patients with PH.
We aimed to determine the roles of sFas/sFasL in the onset of PH in elderly patients with ESRD undergoing maintenance hemodialysis. Methods: Altogether, 163 patients aged 68.00 ± 10.51 years with ESRD undergoing maintenance hemodialysis were enrolled in a prospective cohort and followed up for a median of 5.5 years. They underwent echocardiography examinations and assessments of liver function, residual renal function, and serum ions before and after dialysis. Baseline levels of sFas and sFasL were also measured. We compared demographic data, echocardiographic parameters, liver function, ions, and residual renal function, as well as serum sFas and sFasL, between the PH and non-PH groups. These parameters were correlated with systolic pulmonary artery pressure (sPAP) using Spearman's correlation. Moreover, univariate and adjusted logistic regression analyses were also conducted. Results: The incidence of PH in the elderly dialysis patients was 39.1%. Patients with PH had significantly larger end-diastolic internal diameters of the left atrium, left ventricle, right ventricle (RV), and pulmonary artery, as well as greater left ventricular posterior wall thickness (LVPW; all p < 0.05). Higher baseline serum sFas and sFasL levels were also identified (p < 0.001). They also showed lower fractional shortening and left ventricular ejection fraction (LVEF; p < 0.05). Following dialysis, the post-dialysis serum potassium concentration (K+) was significantly higher in the PH group (p = 0.013). Furthermore, the adjusted regression identified the sFas/sFasL ratio (OR: 1.587, p = 0.004), RV diameter (OR: 1.184, p = 0.014), LVPW (OR: 1.517, p = 0.007), and post-dialysis K+ (OR: 2.717, p = 0.040) as independent risk factors for PH, while LVEF (OR: 0.875, p = 0.040) protected against PH. Conclusion: The baseline sFas/sFasL ratio, RV diameter, LVPW, and post-dialysis K+ were independent risk factors for PH onset, while LVEF was a protective factor against PH. abstract_id: PUBMED:26874785 The Relationship of Fluid Overload as Assessed by Bioelectrical Impedance Analysis with Pulmonary Arterial Hypertension in Hemodialysis Patients. BACKGROUND Pulmonary arterial hypertension (PAH) is a common disease among hemodialysis (HD) patients and is associated with increased morbidity and mortality. However, its pathogenesis has not been completely elucidated. We aimed to evaluate the frequency of PAH in HD patients, as well as the relationship between fluid status and PAH. MATERIAL AND METHODS We enrolled 77 HD patients in this study. Multifrequency bioimpedance analysis (BIA) was used to assess fluid status. BIA was performed before and 30 min after the midweek HD session. The overhydration (OH)/extracellular water (ECW)% ratio was used as an indicator of fluid status. Fluid overload was defined as OH/ECW ≥7%. Echocardiographic examinations were performed before and after HD. Pulmonary arterial hypertension was defined as a systolic pulmonary artery pressure at rest (sPAP) higher than 35 mmHg. RESULTS PAH was found in 33.7% of the HD patients. OH/ECW and the frequency of fluid overload were significantly higher in HD patients with PAH than in those without PAH, whereas serum albumin and hemoglobin levels were significantly lower. The sPAP level was significantly higher in HD patients with fluid overload than in those without fluid overload after the hemodialysis session. Furthermore, sPAP, OH/ECW levels, and the frequency of PAH were significantly reduced after HD.
We also found a significant positive correlation between sPAP and OH/ECW. Multivariate logistic regression analysis demonstrated fluid overload to be an independent predictor of PAH after HD. CONCLUSIONS PAH is prevalent among HD patients. This study demonstrated a strong relationship between fluid overload and PAH in HD patients. abstract_id: PUBMED:30637771 Low serum albumin: A significant predictor of reduced survival in patients with chronic heart failure. Background: Low serum albumin is common in patients with chronic heart failure (HF). Hypothesis: Albumin may have an impact on clinical outcome in HF. We evaluated the effect of albumin levels on clinical outcome in a real-world cohort of patients with HF. Methods: All patients with HF at a health maintenance organization were followed for cardiac-related hospitalizations and death. Results: A total of 5779 HF patients were included in the study; mean follow-up was 576 days; median serum albumin was 4.0 g/dL (interquartile range 3.7-4.2), and 12% of the patients had hypoalbuminemia (albumin < 3.5 g/dL). Low albumin was associated with increasing age, higher urea and C-reactive protein, lower sodium, hemoglobin, and iron, less treatment with angiotensin-converting enzyme inhibitors or angiotensin receptor blockers, reduced right ventricular function, and pulmonary hypertension. Cox regression analysis after adjustment for significant predictors demonstrated that decreasing quartiles of albumin were significantly associated with mortality: lowest quartile compared to highest: hazard ratio (HR) 5.74, 95% confidence interval (CI) 4.08 to 8.07, P < 0.001. Cox regression analysis of albumin as a continuous parameter using restricted cubic splines, after adjustment for significant parameters, demonstrated that reduced albumin levels were directly associated with increased mortality (P < 0.001 for the adjusted model). Decreasing quartiles of albumin were also a significant predictor of increased cardiac-related hospitalizations. A decrease in albumin on follow-up was an independent predictor of increased mortality by Cox regression analysis: HR 2.58, 95% CI 2.12 to 3.14, P < 0.001. Conclusions: Low albumin provides important information regarding several detrimental processes in HF and is a significant predictor of a worse outcome in these patients. Answer: Yes, albumin level could explain the higher mortality in hemodialysis patients with pulmonary hypertension (PH). The study by PUBMED:22867112 found that while PH and age were significantly associated with mortality in a multiple Cox regression analysis, when albumin was taken into account, the only statistically significant association was between albumin level and mortality. The presence of PH lost its statistical significance, suggesting that low albumin levels may explain the higher mortality observed in hemodialysis patients with PH. This study concluded that mortality is higher in hemodialysis patients with PH and that the low albumin level can explain this association. Additionally, the study by PUBMED:30637771 supports the notion that low serum albumin is a significant predictor of reduced survival in patients with chronic heart failure, which often coexists with PH. The study found that decreasing quartiles of albumin were significantly associated with mortality, and a decrease in albumin on follow-up was an independent predictor of increased mortality.
Furthermore, the study by PUBMED:29604093 found that lower serum albumin concentrations in patients with pulmonary arterial hypertension (PAH) are associated with higher mortality and can serve as a marker of disease severity in this patient population. These findings suggest that albumin levels are an important prognostic indicator in hemodialysis patients with PH and that interventions aimed at improving albumin levels could potentially improve survival outcomes in this patient group.
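Note: both studies cited in this answer quantify the albumin-mortality relationship as a hazard ratio from Cox proportional hazards regression. The following Python sketch, using the lifelines library, shows how such a hazard ratio per g/dL of albumin might be estimated; the cohort, effect sizes and censoring scheme are all invented for illustration and do not reproduce any cited analysis.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
albumin = rng.normal(3.8, 0.5, n)   # serum albumin, g/dL (hypothetical)
ph = rng.integers(0, 2, n)          # 1 = pulmonary hypertension present

# Assumed hazards: lower albumin and the presence of PH both shorten survival.
rate = 0.05 * np.exp(-1.0 * (albumin - 3.8) + 0.5 * ph)
time = rng.exponential(1.0 / rate)
censor_at = 10.0                    # administrative censoring at 10 years
df = pd.DataFrame({"time": np.minimum(time, censor_at),
                   "event": (time <= censor_at).astype(int),
                   "albumin": albumin,
                   "ph": ph})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # the exp(coef) column is the HR; a value < 1 per g/dL of albumin means protective

In a real analysis the model would also be adjusted for the confounders the abstracts list (age, volume overload, comorbidity), exactly as the multiple Cox regressions above do.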
Instruction: Does pupil dilation influence subfoveal choroidal laser Doppler flowmetry? Abstracts: abstract_id: PUBMED:23648126 Does pupil dilation influence subfoveal choroidal laser Doppler flowmetry? Purpose: The aim of this study was to assess (i) whether pupil dilation with tropicamide influences subfoveal choroidal blood flow, as assessed by continuous laser Doppler flowmetry (LDF), and (ii) if this is the case, whether the effect is due to a haemodynamic response to the drug-induced dilation. Methods: Following the instillation of one drop of 1% tropicamide in one eye of 18 healthy, nonsmoking volunteers (age 20-25 years), the subfoveal choroidal LDF parameters (Vel, Vol and ChBF) were recorded for 30 min at 3-min intervals under two paradigms: through an artificial pupil (4 mm diameter) placed in front of the cornea (P1) and without this artificial pupil (P2). Results: Tropicamide increased the pupil diameter from 3.3 ± 0.4 mm (mean ± SD) to 8.3 ± 0.4 mm. Full dilation was reached at ~24 min. During this period of time, linear regression analysis demonstrated that none of the LDF parameters varied significantly (p > 0.05), either under P1 or P2. Based on a group of 12 subjects, the smallest (%) change in the mean value of ChBF (ChBFm) that would be detectable (sensitivity of the method, S) was found to be 2% for P1 and 6% for P2. The average coefficient of variation of ChBFm based on eight measurements during dilation was greater for P2 than for P1 by a factor of approximately 2. Conclusion: Tropicamide had no significant influence on the subfoveal choroidal LDF parameters measured by continuous LDF during pupil dilation. Furthermore, pupil dilation did not affect ChBFm by more than the calculated minimum percentage change of 6% detectable with our method. abstract_id: PUBMED:22992205 Assessment of retinal and choroidal blood flow changes using laser Doppler flowmetry in rats. Purpose: A new noninvasive laser Doppler flowmetry (LDF) probe (one emitting fiber surrounded by a ring of eight collecting fibers, 1-mm interaxis distance) was tested for its sensitivity to assess retinal/choroidal blood flow variations in response to hypercapnia, hyperoxia, and diverse vasoactive agents, and following retinal artery photocoagulation in the rat. Materials And Methods: After pupil dilation, an LDF probe was placed in contact with the cornea of anesthetized rats along the optic axis. Hypercapnia and hyperoxia were induced by inhalation of CO(2) (8% in medical air) and O(2) (100%), while pharmacological agents were injected intravitreously. The relative contribution of the choroidal circulation to the LDF signal was estimated after retinal artery occlusion by photocoagulation. Results: Blood flow was significantly increased by hypercapnia (18%), adenosine (14%) and sodium nitroprusside (16%) as compared to baseline values, while it was decreased by hyperoxia (-8%) and endothelin-1 (-11%). Photocoagulation of retinal arteries significantly decreased the blood flow level (-45%). Conclusions: Although the choroidal circulation most likely contributes to the LDF signal in this setting, the results demonstrate that LDF represents a suitable in vivo noninvasive technique for online monitoring of the relative reactivity of retinal perfusion to metabolic or pharmacological challenge. This technique could be used for repeatedly assessing blood flow reactivity in rodent models of ocular diseases. abstract_id: PUBMED:16527229 Long-term effect of laser treatment for dry age-related macular degeneration on choroidal hemodynamics.
Purpose: To determine whether laser treatment applied according to the Complications of Age-related Macular Degeneration Prevention Trial (CAPT) has an effect on the choroidal circulation. Design: Randomized controlled trial. Methods: This study included 30 CAPT patients with bilateral drusen. Laser Doppler flowmetry was used to measure relative choroidal blood flow (Ch(flow)) in the fovea. Measurements were obtained through dilated pupils in both eyes of each patient before photocoagulation was applied in one eye. Measurements were repeated at three months (30 patients) and 28 months (23 patients). Results: Average Ch(flow) at baseline, three months, and 28 months was 7.2 ± 2.1 (±1 SD), 7.3 ± 2.5, and 6.8 ± 2.7 arbitrary units (AU) in the control eyes and 6.6 ± 1.6, 7.0 ± 2.3, and 7.8 ± 3.0 AU in the treated eyes. In comparison to control eyes, there was no significant change in Ch(flow) in the treated eyes at three months after treatment. At 28 months, however, there was a 5.6% drop in Ch(flow) in control eyes and an 18.2% increase in Ch(flow) in treated eyes from baseline. The average difference of 23.8% between the percentage changes in Ch(flow) observed in the control and treated eyes was statistically significant (paired two-tailed Student t test; P = .05). Conclusions: Our results suggest an increase in choroidal blood flow 28 months after laser treatment according to the CAPT protocol. This increase may play a role in the mechanism leading to the disappearance of drusen after photocoagulation. Whether removal of drusen after photocoagulation is beneficial to the patients is not known at this time. abstract_id: PUBMED:15148214 Effect of laser treatment for dry age related macular degeneration on foveolar choroidal haemodynamics. Aim: Previous studies have suggested that laser photocoagulation therapy is associated with the resolution of drusen in some age related macular degeneration (AMD) patients. The main aim of the study was to determine whether low intensity laser treatment applied according to the Complications of AMD Prevention Trial (CAPT) protocol produces changes in the choroidal circulation that may help explain the mechanism leading to the resolution of drusen material. Methods: This ancillary study included 30 CAPT patients with bilateral drusen that were treated and followed at the University of Pennsylvania. Laser Doppler flowmetry was used to measure relative choroidal blood velocity (Ch(vel)), volume (Ch(vol)), and flow (Ch(flow)) in the centre of the fovea. Measurements were obtained through a dilated pupil in both eyes of each patient at the initial CAPT visit before laser treatment was applied in one eye. Measurements were repeated in both eyes of each subject three months later. Analysis of laser Doppler measurements was performed in a masked fashion. Results: In comparison to baseline, no significant differences in Ch(vel), Ch(vol), or Ch(flow) were observed three months following the application of low intensity laser according to the CAPT protocol in the untreated and treated eyes. In comparison to the untreated eyes, no significant differences were detected in the treated eyes. Based on the variability of flow measurements in the untreated eyes, the authors estimated an 85% power to detect a 15% change in relative blood flow. Conclusions: The results suggest that large alterations in choroidal blood flow do not occur at three months after low intensity laser therapy following the CAPT protocol.
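Note: the power statement that closes the CAPT ancillary study ("85% power to detect a 15% change") is a standard paired t-test power calculation driven by measurement variability. The short Python sketch below reproduces that type of calculation with statsmodels; the coefficient of variation assumed here is invented for illustration and is not the CAPT value.

from statsmodels.stats.power import TTestPower

cv = 0.18               # assumed within-subject coefficient of variation of Ch(flow)
target_change = 0.15    # the 15% change in relative blood flow to be detected
effect_size = target_change / cv   # standardized paired effect size

analysis = TTestPower()            # one-sample / paired t-test power
power = analysis.power(effect_size=effect_size, nobs=30, alpha=0.05)
print(f"power to detect a 15% change with 30 paired eyes: {power:.2f}")

# Inverting the calculation: how many eyes are needed for 85% power?
n_needed = analysis.solve_power(effect_size=effect_size, power=0.85, alpha=0.05)
print(f"eyes needed for 85% power: {n_needed:.1f}")

The smaller the within-subject variability, the smaller the relative change that can be declared significant, which is exactly why the LDF study above reports its 2% and 6% detection thresholds separately for the two pupil paradigms.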
abstract_id: PUBMED:24325439 Prevalence of choroidal naevi using scanning laser ophthalmoscope. Purpose: To determine the prevalence of choroidal naevi in a healthy population using an ultra-wide scanning laser ophthalmoscope which can capture up to a 200° field of view of the retina, without pupil dilation. Methods: Subjects were recruited from a college population. Each subject underwent an eye examination that included retinal and choroidal imaging using the Optos scanning laser ophthalmoscope and completed a self-administered questionnaire. Univariate and multivariate regression analyses were performed to identify factors associated with choroidal naevi. Results: The Optomap images of 406 participants (mean age 23.5 ± 4.4) were analysed. Thirty-nine subjects (10%) presented with at least one naevus in one eye, and multiple naevi were observed unilaterally in eight subjects. Drusen were noted in only one eye. The mean naevus diameter was 2.38 mm (± 2.07). Prevalence was not associated with either eye colour, time spent outdoors, visual impairment, or ethnic group, but was significantly associated with male gender (OR 2.41, p = 0.009), blond/red hair colour (OR 2.42, p = 0.037), skin that neither burns nor tans (OR 2.8, p = 0.02) and habitual sunglasses use (OR 0.33, p = 0.002). Conclusions: A higher prevalence of choroidal naevi was detected in this college population compared to other population-based studies. This is likely due to the use of an ultra-wide field scanning laser ophthalmoscope which provides up to 200° of the fundus. The naevi were associated with several factors. abstract_id: PUBMED:27055680 The effect of topical anti-muscarinic agents on subfoveal choroidal thickness in healthy adults. Purpose: To investigate the effects of tropicamide and cyclopentolate, which are two anti-muscarinic agents commonly used in ophthalmologic practice, on subfoveal choroidal thickness (ChT) in healthy adults. Methods: A total of 74 healthy adult subjects were enrolled in the study. Subjects were randomly divided into two groups: (1) cyclopentolate group (n=37), in which the right eye (study eye) of each subject received topical cyclopentolate 1% and the fellow eye (control eye) received artificial tears, and (2) tropicamide group (n=37), in which the right eye (study eye) of each subject received topical tropicamide 1% and the fellow eye (control eye) received artificial tears. Each topical medication was applied three times at 10-min intervals. ChT measurements were performed at baseline and 40 min after the last drop of the topical medications by enhanced depth imaging (EDI) optical coherence tomography (OCT). Results: In the cyclopentolate group, subfoveal ChT significantly increased in the study eyes (P=0.013), whereas it did not significantly change in the control eyes (P=0.417). On the other hand, in the tropicamide group, no significant subfoveal ChT changes were observed in either the study eyes (P=0.715) or the control eyes (P=0.344). Conclusions: The current study demonstrated that cyclopentolate caused significant choroidal thickening, whereas tropicamide had no significant effect on ChT in healthy adults. As a result, mydriasis by cyclopentolate may complicate ChT measurements by EDI OCT. Use of tropicamide may provide more reliable results for evaluation of ChT in ocular pathologies.
Purpose: This study evaluated the validity of retinal perfusion measurements using laser speckle flowgraphy (LSFG) by means of in vitro experiments and direct comparison with dual-beam Doppler optical coherence tomography (D-OCT) in a healthy Caucasian population. Methods: The flow velocity of a scattering solution pumped through a glass capillary was measured at 17 different flow velocities (range, 0.5-47 mm/s) using LSFG. The flow within the glass capillary was produced by a computer-controlled infusion pump. In vivo, three consecutive LSFG scans were obtained in 20 eyes of 20 healthy Caucasian subjects before and after pharmacological pupil dilation. Relative flow volume (RFV), the primary output parameter of LSFG, was comparatively validated relative to absolute measurements of retinal blood flow and velocity as obtained from D-OCT. Results: In the in vitro experiments, RFV was found to saturate at a level of approximately 700 arbitrary units (au), or 23.5 mm/s of actual velocity. In vivo, RFV was in significant agreement with absolute blood flow measurements as obtained from D-OCT in arteries (r = 0.69, P = 0.001) and veins (r = 0.74, P < 0.001). However, linear regression analysis revealed significant positive zero offset values for RFV of 223.4 and 282.7 au in arteries and veins, respectively. Conclusions: Measurements of RFV were successfully obtainable, reproducible, and not influenced by pharmacological pupil dilation. Nevertheless, our data revealed flaws in the LSFG method of measuring retinal perfusion in Caucasians. Adjustment to the technique is required to address apparent issues with RFV, especially saturation effects at higher arterial flow rates. The present dataset may provide a valuable tool to do so. (Clinicaltrials.gov number NCT02582411). abstract_id: PUBMED:37424265 Acute angle-closure related to choroidal metastasis from lung adenocarcinoma: A case report and literature review. Purpose: Acute angle-closure is a rare manifestation of choroidal metastasis. We report a case of choroidal metastasis from lung adenocarcinoma presenting with unilateral acute angle-closure attacks relieved with radiotherapy after failed conventional medical and laser treatments. This represents the first detailed report of the treatment of secondary acute angle-closure attacks in patients with choroidal metastasis. Case Description: A 69-year-old female without ocular history was diagnosed with metastatic lung adenocarcinoma. One month later, she complained of blurred vision and pain in the right eye that had lasted 2 days. IOP was 58 mmHg and best-corrected visual acuity (BCVA) was counting fingers in the right eye. Slit-lamp examination revealed corneal edema with ciliary congestion, an extremely shallow anterior chamber both centrally and peripherally, a mid-dilated pupil and moderate cataract in the right eye, while the left eye was normal. B-scan ultrasound and orbital computed tomography showed an appositional choroidal detachment with underlying choroidal thickening suggesting choroidal metastasis in the right eye. Medical and laser therapy had limited effect. IOP was 9 mmHg in the right eye after two months of palliative external beam radiotherapy to the right orbit. BCVA was hand motion in the right eye. Slit-lamp examination revealed a clear cornea and deep anterior chamber in the right eye. Regression of the choroidal detachment and choroidal metastasis in the right eye was observed on B-scan ultrasound.
Conclusion: This case demonstrated that a patient with secondary acute angle-closure attacks from a large bullous choroidal detachment related to choroidal metastasis could be successfully treated only with radiotherapy, as both medical and laser therapy may be incapable of breaking the angle-closure attacks. abstract_id: PUBMED:37914931 Vagal sensory neurons mediate the Bezold-Jarisch reflex and induce syncope. Visceral sensory pathways mediate homeostatic reflexes, the dysfunction of which leads to many neurological disorders. The Bezold-Jarisch reflex (BJR), first described in 1867, is a cardioinhibitory reflex, speculated to be mediated by vagal sensory neurons (VSNs), that also triggers syncope. However, the molecular identity, anatomical organization, physiological characteristics and behavioural influence of cardiac VSNs remain mostly unknown. Here we leveraged single-cell RNA-sequencing data and HYBRiD tissue clearing to show that VSNs that express neuropeptide Y receptor Y2 (NPY2R) predominantly connect the heart ventricular wall to the area postrema. Optogenetic activation of NPY2R VSNs elicits the classic triad of BJR responses (hypotension, bradycardia and suppressed respiration) and causes an animal to faint. Photostimulation during high-resolution echocardiography and laser Doppler flowmetry with behavioural observation revealed a range of phenotypes reflected in clinical syncope, including reduced cardiac output, cerebral hypoperfusion, pupil dilation and eye-roll. Large-scale Neuropixels brain recordings and machine-learning-based modelling showed that this manipulation causes the suppression of activity across a large distributed neuronal population that is not explained by changes in spontaneous behavioural movements. Additionally, bidirectional manipulation of the periventricular zone had a push-pull effect, with inhibition leading to longer syncope periods and activation inducing arousal. Finally, ablating NPY2R VSNs specifically abolished the BJR. Combined, these results demonstrate a genetically defined cardiac reflex that recapitulates characteristics of human syncope at physiological, behavioural and neural network levels. abstract_id: PUBMED:23277206 Effect of pupil dilation on macular choroidal thickness measured with spectral domain optical coherence tomography in normal and glaucomatous eyes. To compare choroidal thickness before and after pupil dilation. Macular subfoveal, nasal, temporal and average choroidal thicknesses were measured in one eye of 17 healthy individuals and 40 glaucoma patients using enhanced depth imaging optical coherence tomography before and after pupil dilation. Comparisons were made between pre- and post-dilation measurements, and between normal and glaucomatous eyes. No statistically significant differences were found between pre- and post-dilation choroidal thickness measurements both in normal (p = 0.361 for subfoveal, 0.760 for nasal, 0.941 for temporal, 0.881 for average) and glaucomatous eyes (p = 0.687 for subfoveal, 0.340 for nasal, 0.913 for temporal, and 0.642 for average). After adjusting for age, the comparison between normal and glaucomatous eyes showed no significant differences in measurements both before (p = 0.701-0.907) and after pupil dilation (p = 0.757-0.988). Similar results were obtained for measurements unadjusted for age. Measurements obtained under the two conditions correlated well in normal (r = 0.92-0.97, p < 0.001) and in glaucomatous eyes (r = 0.84-0.98, p < 0.001).
Bland-Altman analyses showed good agreement between them in both groups of eyes, with mean difference ranges of 0.43-2.86 and 0.39-3.08 μm between pre- and post-dilation measurements in normal and glaucomatous eyes, respectively. Subfoveal and average choroidal thickness decreased significantly by 2 μm per year of age. Each millimeter increase in axial length decreased subfoveal choroidal thickness by 16.5 μm and average thickness by 14.1 μm. Macular choroidal thicknesses measured before and after pupil dilation are comparable and may be used interchangeably without significant discrepancies both in normal and glaucomatous eyes. Answer: Pupil dilation does not significantly influence subfoveal choroidal laser Doppler flowmetry (LDF) parameters. This conclusion is supported by a study in which pupil dilation with tropicamide in healthy volunteers showed no significant change in subfoveal choroidal LDF parameters (velocity, volume, and blood flow) during the dilation period. Measurements were taken through an artificial pupil and without it, and in both cases the choroidal blood flow measurements (ChBFm) did not change by more than the minimum detectable percentage change of 6% with the method used (PUBMED:23648126). Additionally, another study found that pupil dilation does not significantly affect macular choroidal thickness measurements, which can be related to choroidal blood flow, indicating that measurements taken before and after pupil dilation can be used interchangeably without significant discrepancies (PUBMED:23277206). Therefore, based on these findings, pupil dilation does not appear to have a significant impact on subfoveal choroidal LDF measurements.
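Note: the agreement statistics quoted above come from the Bland-Altman method, which summarizes paired measurements by their mean difference (bias) and 95% limits of agreement. The Python sketch below computes both on synthetic pre-/post-dilation thickness data; the sample size, offset and noise level are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(2)
pre = rng.normal(280.0, 60.0, 40)       # pre-dilation choroidal thickness, um (hypothetical)
post = pre + rng.normal(1.5, 8.0, 40)   # post-dilation: small systematic offset plus noise

diff = post - pre
bias = diff.mean()                      # mean difference between the two conditions
half_width = 1.96 * diff.std(ddof=1)    # half-width of the 95% limits of agreement
print(f"bias: {bias:.2f} um")
print(f"95% limits of agreement: {bias - half_width:.2f} to {bias + half_width:.2f} um")

A small bias with narrow limits of agreement is what justifies the statement that pre- and post-dilation measurements "may be used interchangeably".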
Instruction: The prospective association between obesity and major depression in the general population: does single or recurrent episode matter? Abstracts: abstract_id: PUBMED:25880736 The prospective association between obesity and major depression in the general population: does single or recurrent episode matter? Background: Obesity and major depressive disorder (MDD) are important public health problems. MDD is a heterogeneous disorder, and the direction of its association with obesity remains unclear. Evidence is growing that recurrent MDD (MDD-R) differs in etiology and prognosis from single episode MDD (MDD-S), which could affect associations with obesity. However, evidence on this differential effect is lacking. The aim of this study was to examine the direction of the association between obesity and MDD, single or recurrent episode. Methods: A longitudinal study was performed in a cohort of 1094 participants of the PREVEND study, on whom data were collected at baseline and at an average 2-year follow-up. MDD-S and MDD-R were assessed by the Composite International Diagnostic Interview (CIDI 2.1). Obesity was defined as Body Mass Index ≥ 30 kg/m(2). Binary logistic regression analyses were conducted to examine whether obesity predicts MDD-S/MDD-R or vice versa, adjusted for potential confounders. Results: Prospective analyses showed that BMI at baseline was associated with the onset of MDD-R (Odds ratio, OR = 1.32; 95% confidence interval, 95%CI: 1.11; 1.57) during 2-year follow-up, but not with the onset of MDD-S (OR = 0.98; 95%CI: 0.89; 1.07). Obesity at baseline was not associated with the onset of MDD-S during follow-up (OR = 0.75; 95%CI: 0.25; 2.30), but was associated with the onset of MDD-R during follow-up (OR = 11.63; 95%CI: 1.05; 128.60). Neither MDD-S nor MDD-R was associated with the development of obesity during 2-year follow-up (OR = 1.67, 95%CI: 0.64; 4.29 and OR = 2.32, 95%CI: 0.82; 6.58, respectively). Conclusions: Our findings add to the available evidence that obesity might specifically be associated with the onset of multiple episodes of major depression (MDD-R). Although the reverse association was not found, MDD-R tends to be also associated with subsequent development of obesity, but larger studies are needed to fully assess this issue. The heterogeneity of MDD should be considered when examining the effect of obesity on MDD. abstract_id: PUBMED:35546358 Association of high BMI with subclinical hypothyroidism in young, first-episode and drug-naïve patients with major depressive disorder: a large-scale cross-sectional study. Thyroid dysfunction is known to be associated with obesity, but the reliability of this relationship is easily affected by drug treatment, age, and subclinical hypothyroidism (SCH) with no apparent symptoms. Our research aims to compare obese and overweight BMI ranges in patients with and without SCH in a large sample of young, first-episode and drug-naïve (FEDN) patients with major depressive disorder (MDD), which has received little systematic investigation. A total of 1289 FEDN MDD young outpatients were recruited for this study. Serum thyroid function and lipid level parameters were measured; the HAMD and PANSS scales were used to assess patients' depression and positive symptoms. A self-administered questionnaire collected other clinical and demographic data. The prevalence of SCH in FEDN MDD young patients was 58.26%.
Compared to patients without SCH, the patients with SCH had a more prolonged illness duration, higher BMI levels, increased prevalence of overweight and obesity, higher HAMD scores and PANSS positive symptom scores, higher levels of TG, TC, and LDL-C, and lower levels of HDL-C. Further logistic regression indicated that overweight BMI, obese BMI, illness duration, HAMD score, HDL-C, and TC were significantly associated with SCH. Our results indicate that obesity and overweight may be associated with SCH in young FEDN MDD patients. The importance of regular thyroid function assessment in young FEDN MDD patients with high BMI should be taken into account. abstract_id: PUBMED:34223741 Elevated body weight modulates subcortical volume change and associated clinical response following electroconvulsive therapy. Background: Obesity is a frequent somatic comorbidity of major depression, and it has been associated with worse clinical outcomes and brain structural abnormalities. Converging evidence suggests that electroconvulsive therapy (ECT) induces both clinical improvements and increased subcortical grey matter volume in patients with depression. However, it remains unknown whether increased body weight modulates the clinical response and structural neuroplasticity that occur with ECT. Methods: To address this question, we conducted a longitudinal investigation of structural MRI data from the Global ECT-MRI Research Collaboration (GEMRIC) in 223 patients who were experiencing a major depressive episode (10 scanning sites). Structural MRI data were acquired before and after ECT, and we assessed change in subcortical grey matter volume using FreeSurfer and Quarc. Results: Higher body mass index (BMI) was associated with a significantly lower increase in subcortical grey matter volume following ECT. We observed significant negative associations between BMI and change in subcortical grey matter volume, with pronounced effects in the thalamus and putamen, where obese participants showed increases in grey matter volume that were 43.3% and 49.6%, respectively, of the increases found in participants with normal weight. As well, BMI significantly moderated the association between subcortical grey matter volume change and clinical response to ECT. We observed no significant association between BMI and clinical response to ECT. Limitations: Because only baseline BMI values were available, we were unable to study BMI changes during ECT and their potential association with clinical and grey matter volume change. Conclusion: Future studies should take into account the relevance of body weight as a modulator of structural neuroplasticity during ECT treatment and aim to further explore the functional relevance of this novel finding. abstract_id: PUBMED:31910766 PsyCoLaus: A Prospective Study of the Links between Mental Health and Cardiovascular Diseases. PsyCoLaus, which includes an investigation of mental disorders and cognitive functioning, aims to determine the prevalence and the course of mental disorders in the general population and to study the mechanisms underlying the association between these disorders and cardiovascular diseases. This investigation revealed a very high lifetime prevalence rate of 43.6% for major depressive disorder in Lausanne.
We have also observed that the association between major depression and cardio-metabolic risk factors is essentially attributable to the atypical subtype, characterized by increased appetite, heaviness in the limbs, hypersomnia and conserved affective reactivity. Patients who suffer from this type of depression have an increased risk of developing overweight, diabetes and the metabolic syndrome, and deserve particular clinical attention on the metabolic level. abstract_id: PUBMED:37920541 Prevalence and associated clinical factors for overweight and obesity in young first-episode and drug-naïve Chinese patients with major depressive disorder. Background: Obesity and overweight are common in young patients with major depressive disorder (MDD). However, the prevalence and associated clinical factors of obesity/overweight in young first-episode and drug-naïve (FEDN) MDD patients are rarely reported in China. Methods: A cross-sectional study of 917 young patients (aged 18-35 years) with FEDN MDD was performed. Demographic and clinical data were collected. Depression, anxiety, and psychotic symptoms were assessed using the Hamilton Depression Scale (HAMD), the Hamilton Anxiety Scale (HAMA), and the Positive and Negative Syndrome Scale (PANSS) positive subscale, respectively. Results: Among the young MDD patients, the prevalence of obesity and overweight was 4.14% and 52.89%, respectively. Compared to normal-weight patients, overweight patients were older, had a greater age of onset, and had higher TSH and TG levels. Male MDD patients had a higher risk of obesity than female patients. Compared to obese patients, normal-weight and overweight patients had significantly lower HAMD scores, TC levels, and rates of TSH abnormalities. Logistic regression analysis showed that age, age of onset, and sex were independently associated with obesity, and TSH was independently associated with both obesity and overweight, in young MDD patients. Conclusion: Our findings suggest a high prevalence of overweight and obesity in young FEDN MDD patients. Several demographic and clinical variables are independently associated with overweight/obesity in these young MDD patients. abstract_id: PUBMED:23887062 Association between obesity and depressive disorder in adolescents at high risk for depression. Objective: To examine the relationship between Body Mass Index (BMI) and depressive disorder in adolescents at high risk for depression. Design: Prospective longitudinal 3-wave study of offspring of parents with recurrent depression. Replication in a population-based cohort study. Subjects: Three hundred and thirty-seven families where offspring were aged 9-17 years at baseline and 10-19 years at the final data point. Replication sample of adolescents from a population-based cohort study aged 11-13 years at first assessment and 14-17 years at follow-up. Measurements: The high-risk sample used BMI, skin-fold thickness, Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV)-defined major depressive disorder and depression symptoms using the Child and Adolescent Psychiatric Assessment (CAPA). The replication sample used BMI, DSM-IV depressive disorder and depression symptoms using the Development and Well-Being Assessment (DAWBA). Results: Two hundred and eighty-nine adolescents were included in the primary analyses. The mean BMI for each age group in this sample was significantly higher than population norms.
There was no significant longitudinal association between categories of weight (or BMI) and new-onset depressive disorder or depression symptoms. Similar results were found for skin-fold thickness. The association was also tested in a replication population-based sample and found to be non-significant in the subsample of offspring with mothers who had experienced recurrent depression in the past. BMI at age 12 years was, however, a significant predictor of depression symptoms, but not of depressive disorder, at age 15 years for the total unselected population. Conclusion: BMI does not significantly predict the development of depression in the offspring of parents with recurrent depression. abstract_id: PUBMED:21106248 Natural course of recurrent psychological distress in adulthood. Background: The course of major depressive disorder is often characterized by progressing chronicity, but whether this applies to the course of self-reported psychological distress remains unclear. We examined whether the risk of self-reported psychological distress becomes progressively higher the longer the history of distress, and whether a prolonged history of distress modifies associations between risk markers and future distress. Methods: Participants were British civil servants from the prospective Whitehall II cohort study (n=7934; 31.5% women, mean age 44.5 years at baseline) followed from 1985 to 2006, with repeat data collected in 7 study phases. Psychological distress was assessed with the 30-item General Health Questionnaire (GHQ). Sex, socioeconomic status, marital status, ethnicity, physical activity, alcohol consumption, smoking, and obesity were assessed as risk markers. Results: Recurrent history of psychological distress was associated with a progressively increasing risk of future distress in a dose-response manner. Common risk markers, such as low socioeconomic status, non-White ethnicity, being single, and alcohol abstinence, were stronger predictors of subsequent distress in participants with a longer history of psychological distress. Sex differences in psychological distress attenuated with prolonged distress history. Limitations: The participants were already adults at the beginning of the study, so we could not assess the progressive chronicity of psychological distress from adolescence onwards. Conclusions: These data suggest that self-reported psychological distress becomes more persistent over time and that a longer prior exposure to psychological distress increases sensitivity to the stressful effects of certain risk markers.
Results: Verbal memory, working memory, motor speed, attention, executive function, and BACS composite scores were lower for the MDD patients than for the healthy participants (p < 0.05). Among the patient group, working memory, motor speed, executive function, and BACS composite scores were lower in obese patients (body mass index ≥ 30, n = 17) than in non-obese patients (n = 290, p < 0.05, corrected). MRI-determined frontal, temporal, thalamic, and hippocampal volumes, and white matter fractional anisotropy values in the internal capsule and left optic radiation, were reduced in obese patients (n = 7) compared with non-obese patients (n = 107, p < 0.05, corrected). Limitations: The sample size for the obese population was small. Conclusions: Obesity is associated with decreased cognitive function, reduced gray matter volume, and impaired white matter integrity in cognition-related brain areas in patients with MDD. abstract_id: PUBMED:36806659 Prevalence and risk factors of thyroid dysfunction in outpatients with overweight or obese first-episode and drug-naïve major depressive disorder. Background: Thyroid dysfunction is common in patients with major depressive disorder (MDD). However, few studies have examined risk factors for thyroid dysfunction in overweight or obese first-episode and drug-naïve (FEDN) MDD patients. This study aimed to investigate the prevalence and risk factors of thyroid dysfunction in FEDN MDD patients with comorbid high body mass index (BMI). Methods: A total of 1718 FEDN MDD patients were included. The Hamilton Depression Rating Scale (HAMD), Hamilton Anxiety Rating Scale (HAMA), and Positive and Negative Syndrome Scale (PANSS) positive subscale were used to assess the clinical symptoms of the patients. In addition, metabolic parameters and thyroid hormone levels were measured. Results: The prevalence of thyroid dysfunction was approximately 1.75 times higher in MDD patients with comorbid overweight or obesity (72.3%) than in patients without high BMI (58.8%). The HAMD score, HAMA score, systolic blood pressure (BP), fasting blood glucose (FBG), thyroid peroxidase antibody (TPOAb), total cholesterol (TC), high-density lipoprotein (HDL-C), and low-density lipoprotein (LDL-C) were risk factors for thyroid dysfunction in MDD patients with high BMI. The combination of HAMD, FBG, TC, LDL-C, and systolic BP had a high AUC value of 0.76 for differentiating patients with and without thyroid dysfunction. Limitation: Causal inferences cannot be drawn due to the cross-sectional design. Conclusions: This study demonstrated a high prevalence of thyroid dysfunction in FEDN MDD patients with high BMI. Severity of depression and anxiety and levels of systolic BP, FBG, TPOAb, TC, HDL-C and LDL-C appear to be associated with thyroid dysfunction in FEDN MDD patients with high BMI.
Methods: Self-reported habitual bed and wake times were used to derive variables: short (<7 hours) and long (≥9 hours) time in bed, sleep midpoint, social jetlag, and sleep debt. Logistic regression and Cox proportional hazards models were used to test cross-sectional and prospective associations, respectively, adjusted for age, gender, race/ethnicity, and employment status and further adjusted for body mass index. Results: In cross-sectional analysis (n = 34,651), sleep variable associations were most notable for circulatory system, mental disorders, and endocrine/metabolic phenotypes. We observed the strongest associations for short time in bed with obesity, for long time in bed and sleep midpoint with major depressive disorder, for social jetlag with hypercholesterolemia, and for sleep debt with acne. In prospective analysis (n = 24,065), short time in bed was associated with a higher incidence of acute pain, while later sleep midpoint, higher sleep debt, and greater social jetlag were associated with a higher incidence of major depressive disorder. Conclusions: Our analysis reinforced that sleep health is a multidimensional construct, corroborated robust known findings from traditional cohort studies, and supported the application of PheWAS as a promising tool for advancing sleep research. Considering the exploratory nature of PheWAS, careful interrogation of novel findings is imperative. Answer: The prospective association between obesity and major depression in the general population does indeed appear to be influenced by whether the depression is a single episode or recurrent. A longitudinal study within the PREVEND cohort found that baseline obesity was specifically associated with the onset of recurrent major depressive disorder (MDD-R) during a 2-year follow-up, but not with the onset of a single episode of major depressive disorder (MDD-S). The odds ratio for the association between baseline body mass index (BMI) and the onset of MDD-R was 1.32, and for obesity with MDD-R, it was 11.63. However, neither MDD-S nor MDD-R was associated with the development of obesity during the follow-up period. This suggests that obesity might be a specific risk factor for the onset of multiple episodes of major depression rather than a single episode (PUBMED:25880736). Additionally, other studies have found associations between high BMI and various aspects of major depressive disorder. For instance, high BMI has been associated with subclinical hypothyroidism in young, first-episode, and drug-naïve patients with MDD (PUBMED:35546358), and elevated body weight has been shown to modulate subcortical volume change and associated clinical response following electroconvulsive therapy (ECT) in MDD patients (PUBMED:34223741). Moreover, the PsyCoLaus study highlighted that the atypical subtype of major depression, which is characterized by increased appetite and weight gain, is particularly associated with cardio-metabolic risk factors (PUBMED:31910766). In summary, the evidence suggests that obesity is more strongly associated with recurrent episodes of major depression rather than a single episode, indicating that the relationship between obesity and depression may be influenced by the chronicity and recurrence of the depressive episodes.
Instruction: Do acid and bile reflux into the esophagus simultaneously? Abstracts: abstract_id: PUBMED:7778033 Synergism of acid and duodenogastroesophageal reflux in complicated Barrett's esophagus. Background: The role of acid and duodenogastroesophageal reflux (DGER) in the development of complications in Barrett's esophagus is controversial. We characterized the esophageal reflux constituents in patients with and without complications of Barrett's esophagus. Methods: Using a new fiber-optic system, we studied 12 normal subjects (six male; mean age, 46 years) and 20 patients with Barrett's esophagus (17 male; mean age, 58 years), nine with uncomplicated (seven male; mean age, 55 years) and 11 with complicated Barrett's esophagus (seven with stricture, two with ulcer, and two with dysplasia; 10 male; mean age, 61 years). Fasting gastric bile acid concentrations were measured. Twenty-four-hour ambulatory acid and bilirubin measurements were obtained with the fiber-optic system by using a glass electrode and fiber-optic sensor. The data were then analyzed for percent total time pH < 4 and > 7 and bilirubin absorbance > 0.14%. Results: Percent times pH < 4, bilirubin absorbance > 0.14%, and fasting gastric bile acid concentrations were significantly greater in patients with complicated Barrett's esophagus compared with patients with uncomplicated Barrett's esophagus, with both groups higher than the controls. Acid reflux paralleled bile reflux in the two Barrett's esophagus groups (r = 0.44, p < 0.05), but percent time pH > 7 did not differentiate between the two groups. Conclusions: (1) Patients with complicated Barrett's esophagus reflux significantly greater amounts of both acid and duodenal contents than patients with uncomplicated Barrett's esophagus. (2) Complications in Barrett's esophagus may be due to synergism between acid and bile rather than either constituent alone. abstract_id: PUBMED:9841990 Bile reflux in benign and malignant Barrett's esophagus: effect of medical acid suppression and Nissen fundoplication. Bile reflux has been implicated in the pathogenesis and malignant degeneration of Barrett's esophagus, but clinical studies in patients with adenocarcinoma arising in Barrett's esophagus are lacking. Ambulatory esophageal measurement of acid and bile reflux was performed with the previously validated fiberoptic bilirubin monitoring system (Bilitec) combined with a pH probe in 20 asymptomatic volunteers, 19 patients with gastroesophageal reflux disease (GERD) but no mucosal injury, 45 patients with GERD and erosive esophagitis, 33 patients with GERD and Barrett's esophagus, and 14 patients with early adenocarcinoma arising in Barrett's esophagus. Repeat studies were done in 15 patients under medical acid suppression and 16 patients after laparoscopic Nissen fundoplication. The mean esophageal bile exposure time showed an exponential increase from GERD patients without esophagitis to those with erosive esophagitis and benign Barrett's esophagus and was highest in patients with early carcinoma in Barrett's esophagus (P < 0.01). Pathologic esophageal bile exposure was documented in 18 (54.5%) of 33 patients with benign Barrett's esophagus and 11 (78.6%) of 14 patients with early adenocarcinoma in Barrett's esophagus. Nissen fundoplication but not medical acid suppression resulted in complete suppression of bile reflux. Bile reflux into the esophagus is particularly prevalent in patients with Barrett's esophagus and early cancer.
Bile reflux into the esophagus can be completely suppressed by Nissen fundoplication but not by medical acid suppression alone. abstract_id: PUBMED:11901931 Role of acid and bile in the genesis of Barrett's esophagus. Clinical and experimental studies have shown that acid and bile reflux are increased in patients who have Barrett's esophagus. The combination of both seems to be the key factor in the pathogenesis of Barrett's esophagus. This has been confirmed by immunohistochemical studies showing that environmental factors, such as acid and bile, are involved in the pathogenesis of Barrett's esophagus. There is a critical pH range between 3 and 6 in which bile acids exist in their soluble, un-ionized form; can penetrate cell membranes; and accumulate within mucosal cells. At a lower pH, bile acids are precipitated, and at a higher pH, bile acids exist in their noninjurious ionized form. Thus incomplete gastric acid suppression, as is the case with most medical treatment regimens for gastroesophageal reflux, may in fact predispose to the development of Barrett's esophagus. abstract_id: PUBMED:14641309 Impaired visceral sensitivity to acid reflux in patients with Barrett's esophagus. The role of esophageal motility. Patients with Barrett's esophagus have been reported to have impaired visceral sensitivity to acid perfusion and distension compared with non-Barrett's refluxers, but the mechanism is poorly understood. Esophageal motility and clearance mechanisms may be important, and this study explored the relationship of motility with symptoms. Seventy-four patients with Barrett's esophagus were compared with 216 patients with gastro-esophageal reflux disease (GERD) with abnormal acid reflux scores, and 50 symptomatic patients who had normal acid exposure. All patients had esophageal manometry and 24-h pH monitoring. Thirty-six Barrett's patients also had 24-h bile reflux monitoring. Symptoms were assessed by Symptom Index (SI) during 24-h pH monitoring. Barrett's patients with normal motility had a significantly lower SI than GERD patients for similar acid exposure (P < 0.001). Barrett's patients with abnormal motility had higher acid exposure than those with normal motility (P < 0.05), but the SI values for this group were not significantly different from those of the GERD patients. SI and bile reflux in Barrett's esophagus were not significantly different in patients with normal or abnormal motility. Barrett's patients had less sensitivity than GERD patients for similar acid exposure. Normal motility in Barrett's esophagus is associated with the poorest sensitivity, and the presence of increased acid exposure is required in order to achieve sensitivity levels comparable with GERD patients. abstract_id: PUBMED:10859231 Bile salts induce or blunt cell proliferation in Barrett's esophagus in an acid-dependent fashion. Barrett's esophagus (BE) results from acid and bile reflux and predisposes to cancer. We investigated the effect of bile salts, with or without acid, on cell proliferation in BE and assessed the mechanism(s) involved. To mimic physiological conditions, biopsies of esophagus, BE, and duodenum were exposed to a bile salt mixture, either continuously or as a 1-h pulse, and were compared with control media without bile salts (pH 7.4) for ≤24 h. Similar experiments were also performed with acidified media (pH 3.5) combined with the bile salt mixture as a 1-h pulse.
Cell proliferation was assessed by a [3H]thymidine incorporation assay with or without bisindolylmaleimide (BIM), a selective protein kinase C inhibitor. Bile salt pulses enhanced cell proliferation in BE without affecting cell proliferation in esophageal or duodenal epithelia. In the presence of BIM, there was complete obliteration of the bile salt-induced BE hyperproliferation. In contrast, 1-h pulses of bile salts in combination with acid significantly inhibited proliferation in BE but had no effect on esophagus or duodenum. We conclude that in BE explants, brief exposure to bile salts in the absence of acid increases proliferation, whereas exposure to a combination of bile salts and acid inhibits proliferation. abstract_id: PUBMED:11356992 Importance of bile reflux in Barrett's esophagus. Barrett's esophagus (BE) is an acquired condition in which the squamous epithelial lining of the lower esophagus is replaced by a columnar epithelium due to chronic gastroesophageal reflux. The role of acid and bile in the development of esophageal mucosal injury and the formation of BE is controversial. Acid and pepsin are unquestionably important in causing mucosal damage and BE formation in both animal models and humans. Animal studies suggest the potential for synergistic damage from conjugated bile acids and gastric acid, as well as from unconjugated bile acids and trypsin in more neutral pH settings. Evidence of the involvement of bile and its constituents in humans has been less conclusive; however, the advent of better technology to detect bile reflux is beginning to clarify the role of these constituents. Human studies show that the reflux of bile parallels acid reflux and increases with the severity of gastroesophageal reflux disease, being most marked in BE. However, recent ex vivo studies suggest that pulses of acid reflux may be more important than bile salts in the development of dysplasia or adenocarcinoma in Barrett's epithelium. Nevertheless, antireflux surgery and aggressive acid suppression with proton pump inhibitors will decrease both acid and bile reflux, and eliminate the synergism between these two duodenogastric constituents. abstract_id: PUBMED:12121879 Acid- and bile-induced PGE2 release and hyperproliferation in Barrett's esophagus are COX-2 and PKC-epsilon dependent. Barrett's esophagus (BE) results from acid and bile reflux and predisposes to cancer. To further understand the mechanisms of acid- and bile-induced hyperproliferation in BE, we investigated the release of PGE2 in response to acid or bile salt exposure. Biopsies of esophagus, BE, and duodenum were exposed to a bile salt mixture as a 1-h pulse and compared with exposure to pH 7.4 for up to 24 h, and PGE2 release, cyclooxygenase-2 (COX-2), and protein kinase C (PKC) expression were compared. Similar experiments were also performed with acidified media (pH 3.5) alone, in the presence or absence of bisindolylmaleimide (BIM), a selective PKC inhibitor, and NS-398, a COX-2 inhibitor. One-hour pulses of bile salts or acid significantly enhanced proliferation, COX-2 expression, and PGE2 release in BE. In contrast, the combination pulse of acid and bile salts had no such effect. Treatment with either BIM or NS-398 led to a dramatic decrease in PGE2 release in BE explants and a suppression of proliferation. The acid- or bile salt-mediated hyperproliferation is related to PGE2 release.
Acid- and bile salt-induced expression of COX-2 and PKC may explain, at least in part, the tumor-promoting effects of acid and bile in BE. abstract_id: PUBMED:9842539 Barrett's esophagus. Barrett's esophagus (BE) has recently gained the interest of Japanese physicians. In BE, the squamous epithelium of the distal esophagus is replaced by metaplastic columnar epithelium. This intestinal metaplasia usually occurs as a complication of severe reflux esophagitis, and its association with adenocarcinoma of the esophagus is well established. In 1950 Norman Barrett described a tubular, intrathoracic structure that appeared to be the esophagus, except that the distal portion was lined with columnar epithelium. Although he believed that the distal portion was not the esophagus, the condition in which the distal esophagus is lined with columnar epithelium became known as BE. From animal and clinical studies, the intestinal metaplasia is generally believed to arise from multipotential stem cells located in the basal layer of the squamous epithelium and at the base of the glandular epithelium. Evidence for a genetic basis underlying the dysplasia-adenocarcinoma sequence is now accumulating. It is known that gastric acid reflux as well as bile reflux can cause distal esophagitis. Therefore, treatment with a proton pump inhibitor alone may not be sufficient therapy for all patients. Antireflux surgery can cause regression of BE in up to 50% of patients. Overall 1-, 2-, and 5-year survival rates for patients with adenocarcinoma arising from BE after surgical resection are reported to be 63%, 41%, and 32%, respectively. Therefore, endoscopic surveillance of patients with BE is suggested. abstract_id: PUBMED:11232672 Increased acid and bile reflux in Barrett's esophagus compared to reflux esophagitis, and effect of proton pump inhibitor therapy. Objectives: Barrett's metaplasia is an acquired condition resulting from longstanding gastroesophageal reflux disease. Approximately 10% of esophagitis patients develop Barrett's esophagus. There is increasing evidence that duodenogastroesophageal reflux plays a role in the progression of disease. We further analyzed the correlation of acid and biliary reflux with reflux esophagitis and Barrett's esophagus and tested the effects of proton pump inhibitor therapy. Methods: Patients with either reflux esophagitis (group 1) or Barrett's esophagus (group 2) prospectively underwent simultaneous 24-h esophageal pH and bile reflux testing without any therapy affecting acid secretion or GI motility. A total of 16 patients in group 1 and 18 patients in group 2 were tested again under proton pump inhibitor therapy. Results: Acid and bile exposure were significantly increased in Barrett's patients (n = 23) compared to 20 esophagitis patients (median percentage of time that pH was <4: 24.6% vs 12.4%, p = 0.01; median percentage of time that bilirubin absorbance was >0.2: 34.7% vs 12.8%, p < 0.05). During therapy, both acid and bile reflux decreased significantly in both groups. The median percentage of time that pH was <4 and that bilirubin absorbance was >0.2 before and during therapy was 18.2%/2.3% and 29.8%/0.7% (p = 0.001 and p = 0.001) in Barrett's esophagus patients versus 14.5%/3.6% and 21.5%/0.9% (p = 0.002 and p = 0.011) in esophagitis patients. There was no significant difference between the groups. In two esophagitis patients, bile reflux increased during therapy.
Conclusions: There is a good correlation between the duration of esophageal exposure to acid and bile and the severity of pathological change in the esophagus. Both acid and bile reflux are significantly suppressed by proton pump inhibitor therapy, with exceptions among individual esophagitis patients. The prolonged simultaneous attack of bile and acid may play a key role in the development of Barrett's metaplasia. abstract_id: PUBMED:35895531 Surgical treatment of gastroesophageal reflux disease and Barrett's esophagus. For the centenary of the Department of Surgery, University of Szeged, we have investigated and summarized the results and outcomes of 779 anti-reflux surgery cases between 1 January 2000 and 31 May 2021. The indication for surgery was made in close collaboration with the internal medicine workgroup depending on the results of endoscopy and functional tests. The primary indication for surgery was medical therapy-resistant reflux disease. Based on our clinical practice, we performed laparoscopic Nissen fundoplication in 98.2% of the cases. Besides the long- and short-term postoperative complications, we investigated the long-term effect of anti-reflux surgery on acid and bile reflux, and the improvement of the patients' quality of life using the Visick score and the modified GERD-HRLQ score. Our investigations have proven the effect of acid and bile reflux in the pathogenesis of Barrett's esophagus, and furthermore we have confirmed that laparoscopic anti-reflux surgery restores the function of the lower esophageal sphincter and eliminates acid and bile reflux, so in certain cases regression of Barrett's esophagus can be achieved. However, due to the heterogeneity of GERD and Barrett's esophagus, long-term and regular endoscopic control is necessary. Answer: Yes, acid and bile reflux into the esophagus simultaneously. Studies of Barrett's esophagus, a condition that can arise due to chronic gastroesophageal reflux disease (GERD), have shown that patients with complicated Barrett's esophagus experience significantly greater amounts of both acid and duodenal contents, including bile, than patients with uncomplicated Barrett's esophagus (PUBMED:7778033). Furthermore, the reflux of bile parallels acid reflux and increases with the severity of GERD, being most marked in Barrett's esophagus (PUBMED:11356992). The simultaneous attack of bile and acid is suggested to play a key role in the development of Barrett's metaplasia (PUBMED:11232672). Additionally, bile reflux into the esophagus is particularly prevalent in patients with Barrett's esophagus and early cancer, and can be completely suppressed by Nissen fundoplication but not by medical acid suppression alone (PUBMED:9841990).
Instruction: Chinese medicine treatment for menopausal symptoms in the UK health service: is a clinical trial warranted? Abstracts: abstract_id: PUBMED:25534172 Chinese medicine treatment for menopausal symptoms in the UK health service: is a clinical trial warranted? Objective: The aims of this pilot study were to evaluate treatment effects, ascertain safety and formulate best practice Chinese medicine protocols relevant for London women suffering from menopausal symptoms. Study Design: This clinical pilot study employed a case series design within a wider action-based research project. 117 perimenopausal women between 45 and 55 years of age recruited from the general population were treated for menopausal symptoms by six experienced practitioners of Chinese medicine at the Polyclinic of the University of Westminster. Practitioners were instructed to treat as near to their usual practice style as possible. This involved using Chinese herbal medicine and/or acupuncture along with dietary and lifestyle advice. A maximum of 12 treatments over 6 months was allowed per patient. Outcome Measures: The menopause-specific quality of life questionnaire (MenQoL), the Greene climacteric scale, and flushing diaries were used to evaluate treatment outcomes. Liver and kidney function tests were carried out at intake and after 1, 6 and 12 treatments to evaluate safety, particularly in relation to the use of herbal medicines. Results: Patients showed significant improvement across all domains measured by the MenQoL and Greene climacteric scales. The reduction on the MenQoL scale between first and last visit was from 4.31 to 3.27 (p<0.001) and on the Greene climacteric scale from 21.01 to 13.00 (p<0.001). Study participants did not reliably complete their flushing diaries. No adverse events or abnormal liver or kidney function values were observed during the course of the study. Conclusions: Further research that seeks to investigate the effects observed in more detail and to evaluate them against other forms of treatment and/or no-treatment controls is warranted. This could be achieved by way of a pragmatic randomized controlled trial that evaluates Chinese medicine against orthodox medical care. abstract_id: PUBMED:28619302 Chinese herbal medicine for treating menopausal symptoms in London women: developing a good practice protocol via the factor analysis of prescribing patterns in a clinical study. The objective of the study described in this paper was to define Chinese medicine formula patterns for the treatment of menopausal women in London. These formula patterns are intended to inform the development of best practice guidelines for a future pragmatic randomised controlled trial, with the ultimate goal of evaluating the possibility of integrating Chinese medicine treatment strategies for menopausal symptoms into the UK National Health Service. Data from a clinical study that had demonstrated the effectiveness and safety of Chinese medicine in treating 117 perimenopausal women at the Westminster University Polyclinic in London were analysed for symptom occurrence and herb use. The frequency of occurrence of different presenting symptoms and the frequency of use of individual herbs are described, the patterns of combined herb use were analysed by means of factor analysis, and the correlations between these patterns and the presenting symptoms were analysed using the chi-square test.
When the emergent use patterns were treated as Chinese herbal medicine formulas, five distinctive formula patterns emerged in the course of this study. While there is some overlap between these formulas and their associated symptom patterns and those described in Chinese medicine textbooks and guidelines, some formula patterns appear to be unique to London women. This indicates that best practice guidelines for the Chinese medicine treatment of menopausal symptoms, which have been shown to vary cross-culturally, need to be derived from local clinical practice. abstract_id: PUBMED:33094658 The treatment of menopausal symptoms by traditional Chinese medicine in Asian countries. Menopausal symptoms (or climacteric syndrome) refer to a series of symptoms that occur during the perimenopausal and early postmenopausal period. About 80% of women will have various degrees of menopausal symptoms, and most of them need associated treatment. Asian women are more inclined to choose traditional Chinese medicine (TCM) as a therapeutic method, and menopausal hormone therapy has low acceptance because the women have been misinformed about the side effects of hormones. Therefore, TCM is used for menopausal symptoms in women in most Asian countries, including China, Japan, Vietnam, and South Korea. In the basic theory of TCM, the menopause is classified as Kidney deficiency and an imbalance of Yin and Yang; therefore, the treatment methods in either Chinese patent medicine or Chinese herbal medicine are aimed at supplementing the Kidney function and rebalancing Yin and Yang. It is believed that TCM treatment is suitable for patients with mild or moderate menopausal symptoms. The incidence rate of adverse reactions in terms of breast tenderness and irregular vaginal bleeding is lower than that of hormone therapy. However, there are few randomized, double-blind, placebo-controlled studies on TCM treatment of menopausal syndrome. Future studies should be undertaken to confirm its merits. abstract_id: PUBMED:24678630 A critical review of traditional Chinese medicine use amongst women with menopausal symptoms. Objectives: To provide the first critical review of traditional Chinese medicine (TCM) use amongst symptomatic menopausal women, drawing upon work examining the perspectives of both TCM users and TCM practitioners. Methods: A search was conducted in three English-language databases (MEDLINE, CINAHL and AMED) and three Chinese-language databases (CNKI, VIP and CBM Disc) for 2002-2013 international peer-reviewed articles reporting empirical findings of TCM use in menopause. Results: A total of 25 journal articles reporting 22 studies were identified as meeting the review inclusion criteria. Chinese herbal medicine appears to be the most common therapy amongst symptomatic menopausal women, and vasomotor symptoms and emotional changes are the most frequent symptoms for which TCM is sought. However, evidence regarding the prevalence of TCM use and users' profiles in menopause is limited. Existing studies are of varied methodological quality, often reporting low response rates, extensive recall bias and a lack of syndrome differentiation. Conclusions: This review provides insights for practitioners and health policy-makers regarding TCM care for symptomatic menopausal women. More nationally representative studies are required to rigorously examine TCM use for the management of menopausal symptoms. Syndrome differentiation of menopausal women is an area which also warrants further attention.
abstract_id: PUBMED:29533368 A randomized, double-blind, placebo-controlled trial of Chinese herbal medicine capsules for the treatment of premature ovarian insufficiency. Objective: This study was conducted to evaluate the treatment effectiveness of Chinese herbal medicine capsules containing the Yangyin Shugan formula (YYSG) in premature ovarian insufficiency (POI). Methods: One hundred forty-six women with POI participated in this stratified, randomized, double-blind, placebo-controlled clinical trial. Participants in two groups (n = 73 in each), the YYSG group and the control group, underwent treatment for 12 weeks. Outcome measures included the Chinese version of the Menopause-Specific Quality of Life questionnaire (CMS), serum levels of basal follicle-stimulating hormone (bFSH), basal estradiol, and anti-Mullerian hormone (AMH), the antral follicle count (AFC), and ovarian peak systolic velocity (PSV; cm/s). Results: Treatment with YYSG significantly reduced the total CMS score at the end of the 12th week (P < 0.01); the vasomotor, psychosocial, physical, and sexual domains showed significant improvement after treatment (P < 0.01). Compared with the baseline hormone levels, YYSG significantly decreased the bFSH level (P < 0.01) and improved the AMH level (P < 0.01). Furthermore, YYSG greatly improved the participants' AFC and ovarian PSV compared with placebo (P < 0.01). There were no serious adverse events, and the safety indices of whole blood counts, renal function, and liver function were within the normal range both before and after treatment. Conclusions: Treatment with YYSG was more effective than placebo for improving menopausal symptoms, basal hormone levels, and ovarian function in women with POI in Guangdong, China. abstract_id: PUBMED:37960761 Acupuncture combined with Chinese herbal medicine in the treatment of perimenopausal insomnia: A systematic review and meta-analysis. Background: Perimenopausal insomnia (PMI) is a relatively common menopausal symptom that can cause serious problems for the women themselves and their families. Today, the world is facing the trend and challenges of an aging population. It is reported that about 1.5 million women worldwide enter menopause every year, with sleep disorder identified as a core symptom. The efficacy of acupuncture combined with traditional Chinese medicine for treating PMI has been recognized by patients and doctors. Methods: We searched 8 databases to identify 15 randomized controlled trials evaluating the effects of acupuncture combined with traditional Chinese medicine on sleep in patients with PMI compared with Western medicine alone. Subsequently, data extraction and analysis were performed to assess the quality and risk of bias of the study method design, and a meta-analysis of the data was performed. Results: This study included 15 randomized controlled trials involving 1188 patients with PMI. The results show that acupuncture combined with traditional Chinese medicine seems to be more effective than Western medicine in the treatment of PMI: efficiency (RR: 1.18; 95% CI: 1.08, 1.29; P = .001); the Pittsburgh Sleep Quality Index (PSQI) (WMD: -2.77; 95% CI: -4.15, -1.39; P < .0001); follicle-stimulating hormone (FSH) (WMD: -31.45; 95% CI: -42.7, -20.2; P < .001) and the Hamilton Anxiety Score (HAMA) (WMD: -2.62, 95% CI: -3.93, -1.32; P < .0001).
Compared with Western medicine, there was no significant difference in E2 (WMD: 5.07; 95% CI: -5.78, 15.92; P = .36) or LH (WMD: -4.86; 95% CI: -11.5, 1.78; P = .151). Conclusion: The current analysis shows that acupuncture combined with Chinese medicine seems to have a more positive effect than Western medicine alone in improving sleep and FSH in PMI patients, but no difference has been found in improving E2 and LH. This study provides a basis for acupuncture combined with Chinese medicine to treat PMI. However, due to the high risk of bias in the included studies, more rigorous randomized controlled trials and higher-quality studies are needed to validate these findings. abstract_id: PUBMED:23676632 Effects of acupuncture and Chinese herbal medicine (Zhi Mu 14) on hot flushes and quality of life in postmenopausal women: results of a four-arm randomized controlled pilot trial. Objective: The aim of this study was to evaluate the feasibility of a clinical trial investigating the effects of acupuncture (AP) and Chinese herbal medicine (CHM) on hot flushes and quality of life in postmenopausal women. Methods: Forty postmenopausal women reporting at least 20 hot flushes per week were enrolled in a randomized controlled trial. They were randomly allocated to receive traditional Chinese medicine (TCM) AP, sham AP, verum CHM, or placebo CHM for 12 weeks. Follow-up assessment was conducted 12 weeks after intervention. Primary outcome measures included hot flush frequency and severity. As a secondary outcome measure, the severity of menopausal symptoms was assessed using the Menopause Rating Scale (MRS) II. Results: TCM AP induced a significant decline in all outcome measures from pretreatment to posttreatment compared with sham AP (hot flush frequency, P = 0.016; hot flush severity, P = 0.013; MRS, P < 0.001). In the TCM AP group, a larger decrease in MRS scores persisted from pretreatment to follow-up (P = 0.048). No significant differences were noted between the verum CHM group and the placebo CHM group. Compared with the verum CHM group, there was a significant decrease in MRS scores (P = 0.002) and a trend toward a stronger decrease in hot flush severity (P = 0.06) in the TCM AP group from pretreatment to posttreatment. Conclusions: TCM AP is superior to sham AP and verum CHM in reducing menopausal symptoms, whereas verum CHM shows no significant improvements when compared with placebo CHM. abstract_id: PUBMED:25600538 Efficacy and side-effects of a semi-individualized Chinese herb mixture "Tiáo Gēng Tāng" for menopausal syndrome in China. Background: Chinese herbal medicine is an alternative therapy for menopausal problems and is widely practiced in China and many other Asian countries. However, efficacies and side-effects are rarely assessed according to the standards of evidence-based medicine. Patients And Methods: This is a prospective observational study following the efficacy and side-effects of a semi-individualized Chinese herbal mixture, "Tiáo Gēng Tāng (TGT)", in 30 patients for 3 months. Another group of 30 patients receiving hormone therapy with tibolone was included as a positive comparison. Common questionnaire-based measuring instruments were: the modified Kupperman index, the menopause rating scale, life quality and the Chinese medical symptom scale (CMSS). Follicle-stimulating hormone (FSH), luteinizing hormone (LH) and estradiol (E2) were determined before and three months after the treatments. Results: Significant improvement was seen in the overall scores of all four measurements in both groups.
For some symptoms, including dry mouth, tinnitus, poor appetite and constipation, TGT was more effective than tibolone. For the psychosocial and sexual sub-scales of life quality, tibolone had a slightly higher remedy rate than TGT. TGT lowered FSH and LH significantly, as tibolone did, but elevated E2 significantly less than tibolone. Various adverse events, including body weight increase, abdominal discomfort, nausea/vomiting, emotional instability, pressure in the breasts and dizziness, were reported by patients treated with tibolone, whereas only diarrhea was observed in two patients treated with TGT. Conclusion: TGT alleviates menopausal symptoms with similar efficacy to tibolone but has fewer side effects. abstract_id: PUBMED:15111765 Use of complementary and alternative medicine by Chinese women with breast cancer. The use of complementary and alternative medicine (CAM) has been rapidly increasing among cancer patients. The aim of this study is to evaluate the prevalence and patterns of CAM use, particularly patients' intentions and their perceived effectiveness of using Chinese herbal medicine (CHM), as well as the relations between herbal medicine use and demographic and clinical factors among Chinese women with breast cancer. We analyzed data from a population-based sample of 1065 women with breast cancer in urban Shanghai. Patients' average age at diagnosis was 48.1 years, and the median time from the initial diagnosis to the follow-up survey was 4.3 years. Overall, 98% of patients had used at least one form of CAM therapy after diagnosis of breast cancer. The most popular CAM modality was traditional Chinese medicine (86.7%), followed by the use of supplements (84.8%), physical exercises (65.5%), and support group attendance (16.6%). CHM was used by 86.4% of patients, while acupuncture was used by only 4.9% of patients. Treating cancer (81.5%) was the most common intention of using CHM. Other cited intentions included enhancing the immune system (12%), preventing metastasis of cancer or managing other discomforts (7.9%), and lessening menopausal symptoms (4.7%). The majority of patients reported that they had benefited from the use of CHM. Patients who were younger, married, had higher education or income, received chemotherapy or radiotherapy, or had recurrence/metastasis of cancer tended to use CHM more frequently than other patients. The relations between patient characteristics and use of CHMs varied with users' intentions. Given the high prevalence of CAM use among breast cancer patients, research is urgently needed to systematically evaluate the efficacy and safety of CAM use, particularly the use of herbal medicines.
Our objective is to assess the efficacy of needle acupuncture in improving hot flush severity and frequency in menopausal women. Our current study design is informed by methods tested in a pilot study. Methods/design: This is a stratified, parallel, randomised sham-controlled trial with equal allocation of participants to two trial groups. We are recruiting 360 menopausal women experiencing a minimum average of seven moderate hot flushes a day over a seven-day period and who meet diagnostic criteria for the Traditional Chinese Medicine diagnosis of Kidney Yin deficiency. Exclusion criteria include breast cancer, surgical menopause, and current hormone replacement therapy use. Eligible women are randomised to receive either true needle acupuncture or sham acupuncture with non-insertive (blunt) needles for ten treatments over eight weeks. Participants are blinded to treatment allocation. Interventions are provided by Chinese medicine acupuncturists who have received specific training on trial procedures. The primary outcome measure is hot flush score, assessed using the validated Hot Flush Diary. Secondary outcome measures include health-related quality of life, anxiety and depression symptoms, credibility of the sham treatment, expectancy and beliefs about acupuncture, and adverse events. Participants will be analysed in the groups in which they were randomised using an intention-to-treat analysis strategy. Discussion: Results from this trial will significantly add to the current body of evidence on the role of acupuncture for vasomotor symptoms. If found to be effective and safe, acupuncture will be a valuable additional treatment option for women who experience menopausal vasomotor symptoms. Trial Registration: Australian New Zealand Clinical Trials Registry ACTRN12611000393954 11/02/2009. Answer: Based on the information provided in the abstracts, a clinical trial investigating Chinese medicine treatment for menopausal symptoms in the UK health service does appear to be warranted. A pilot study conducted in the UK showed significant improvement in menopausal symptoms among perimenopausal women treated with Chinese herbal medicine and/or acupuncture, with no adverse events or abnormal liver or kidney function values observed (PUBMED:25534172). This suggests that further research could investigate these effects in more detail and evaluate them against other forms of treatment or no-treatment controls. Additionally, a study that developed a good practice protocol for Chinese medicine formula patterns for treating menopausal women in London indicated that best practice guidelines need to be derived from local clinical practice, as some formula patterns appear to be unique to London women (PUBMED:28619302). This further supports the need for a clinical trial tailored to the UK context. Moreover, the use of traditional Chinese medicine (TCM) for menopausal symptoms is common in Asian countries, and while it is believed to be suitable for patients with mild or moderate symptoms, there is a need for more rigorous studies to confirm its merits (PUBMED:33094658). A critical review also highlighted the need for more nationally representative studies to rigorously examine TCM use for the management of menopausal symptoms (PUBMED:24678630). 
Furthermore, randomized controlled trials conducted in China have shown that Chinese herbal medicine can be effective for menopausal symptoms, such as in the treatment of premature ovarian insufficiency (PUBMED:29533368) and perimenopausal insomnia (PUBMED:37960761), and for reducing hot flushes and improving quality of life in postmenopausal women (PUBMED:23676632). Another study found that a semi-individualized Chinese herbal mixture was effective in alleviating menopausal symptoms with fewer side effects compared to hormone therapy (PUBMED:25600538). Given the high prevalence of CAM use among breast cancer patients, including Chinese women with menopausal symptoms, there is a call for systematic evaluation of the efficacy and safety of CAM, particularly herbal medicines (PUBMED:15111765). Lastly, a protocol for a randomized controlled trial assessing the efficacy of acupuncture for menopausal vasomotor symptoms further emphasizes the ongoing interest and need for rigorous research in this area (PUBMED:24925094). In summary, the existing evidence from pilot studies and research on the effectiveness and safety of Chinese medicine for menopausal symptoms, as well as the recognition of the need for more rigorous and locally relevant clinical trials, supports the argument that a clinical trial is warranted in the UK health service. Such a trial would contribute to the evidence base and could potentially lead to the integration of Chinese medicine treatment strategies for menopausal symptoms into the UK National Health Service, provided the trial demonstrates positive outcomes.
Instruction: Is the incidence of near-fatal asthma decreasing in Spain? Abstracts: abstract_id: PUBMED:17067519 Is the incidence of near-fatal asthma decreasing in Spain? Objective: A number of studies have shown that both mortality and hospital admissions due to severe asthma have decreased in recent years in many parts of the world. However, the situation in Spain has not yet been analyzed. The aim of this study was to determine the incidence of very severe, near-fatal asthma in recent years in various Spanish hospitals. Patients And Methods: A retrospective review of hospital records from 6 hospitals in 5 Spanish autonomous communities was conducted for the period 1997 to 2004 to determine the annual number of patients who required orotracheal intubation and mechanical ventilation due to an asthma attack. Results: Of the 130 patients included in the study, 81 (62%) were women and 61 (47%) were aged between 51 and 75 years. The number of cases observed for the periods 2001-2002 and 2003-2004 (32 and 18, respectively) was significantly lower than that observed for the 1997-1998 and 1999-2000 periods (40 in both cases; P=.019). A significant increase in the incidence was observed in autumn and winter (n=81 [62%]; P=.018). Seventeen patients (13%) died and 8 (6%) developed serious sequelae. Conclusions: Although our sample of 6 hospitals is not widely representative of the entire population of hospitals in Spain, our findings strongly suggest a decrease in the incidence of near-fatal asthma in Spain in recent years. abstract_id: PUBMED:28808576 "Awake" ECCO2R superseded intubation in a near-fatal asthma attack. Background: Near-fatal asthma attacks are life-threatening events that often require mechanical ventilation. Extracorporeal carbon dioxide removal (ECCO2R) is, alongside extracorporeal membrane oxygenation (ECMO), a well-established rescue option whenever ventilation reaches its limits. However, there is very little reported experience with these techniques for avoiding mechanical ventilation in severe asthma attacks. Case Presentation: A 67-year-old man with a near-fatal asthma attack deteriorated under non-invasive ventilation. Besides pharmacological treatment, the intensivists decided to use an extracorporeal carbon dioxide removal system (ECCO2R) to avoid sedation and intubation. Within only a few hours, there was a breakthrough and the patient's status improved continuously. One and a half days later, weaning from ECCO2R was already completed. Conclusions: The discussion deals with several advantages of extracorporeal lung support in acute asthma, the potential for avoiding intubation and sedation, as well as the benefits of a conscious and spontaneously breathing patient. Extracorporeal membrane oxygenation (ECMO) in general, and ECCO2R in particular, is a highly effective method for the treatment of an acute near-fatal asthma attack. Pathophysiological aspects favor the "awake" approach, without sedation, intubation, and mechanical ventilation. Therefore, experienced clinicians might consider "awake" ECCO2R in similar cases. abstract_id: PUBMED:26011771 Identification and characterization of near-fatal asthma phenotypes by cluster analysis. Background: Near-fatal asthma (NFA) is a heterogeneous clinical entity, and several profiles of patients have been described according to different clinical, pathophysiological and histological features.
However, there are no previous studies that identify different phenotypes of NFA in an unbiased way using statistical methods such as cluster analysis. Therefore, the aim of the present study was to identify and characterize phenotypes of near-fatal asthma using cluster analysis. Methods: Over a period of 2 years, 33 Spanish hospitals enrolled 179 asthmatics admitted for an episode of NFA. A cluster analysis using a two-step algorithm was performed on data from 84 of these cases. Results: The analysis defined three clusters of patients with NFA: cluster 1, the largest, including older patients with clinical and therapeutic criteria of severe asthma; cluster 2, with a high proportion of respiratory arrest (68%), impaired consciousness level (82%) and mechanical ventilation (93%); and cluster 3, which included younger patients, characterized by insufficient anti-inflammatory treatment and frequent sensitization to Alternaria alternata and soybean. Conclusions: These results identify specific asthma phenotypes involved in NFA, confirming in part previous findings observed in studies with a clinical approach. The identification of patients with a specific NFA phenotype could suggest interventions to prevent future severe asthma exacerbations. abstract_id: PUBMED:25829967 A case of near-fatal asthma: The role of ECMO as rescue therapy. We report a case of an adolescent with near-fatal asthma (NFA). He presented with severe hypoxemia and life-threatening acidemia that failed to respond to conventional therapy. His hospital course was complicated by barotrauma and hemodynamic instability. Early introduction of extracorporeal membrane oxygenation (ECMO) led to dramatic improvement in gas exchange and lung mechanics. This case illustrates the important role of ECMO as salvage therapy in NFA. abstract_id: PUBMED:37234620 Rapid weaning from mechanical ventilation and bronchial thermoplasty for near-fatal asthma: A case report. Background: The mechanism of action of bronchial thermoplasty (BT) is poorly understood. Generally, patients with severe asthma who are in desperate need of treatment have relatively low baseline values. In this paper, we describe the case of an asthmatic patient who was saved by a combination of therapy and bronchial thermoplasty. Case Information: A patient with near-fatal asthma was initially treated in our hospital with conventional medication, but his condition did not improve. The patient was next subjected to invasive mechanical ventilation, which did not provide significant relief. Additionally, he was treated with BT in conjunction with mechanical ventilation, which promptly reversed his status asthmaticus and stabilized his condition. Conclusion: Patients with near-fatal asthma who do not respond effectively to aggressive therapy may benefit from BT. abstract_id: PUBMED:34984154 Extracorporeal Membrane Oxygenation (ECMO): A Life Saver in Near-Fatal Asthma. Near-fatal asthma (NFA) is a life-threatening condition that represents the most severe clinical phenotype of asthma and can progress to fatal asthma. Patients with NFA do not respond adequately to conventional medical therapy, and urgent intervention is needed to provide adequate oxygenation by invasive mechanical ventilation. While mechanical ventilation is a potentially life-saving intervention, it can cause lung injury, barotrauma, and dynamic hyperinflation due to high ventilator settings, resulting in hemodynamic instability.
Extracorporeal membrane oxygenation (ECMO) provides full respiratory support with adequate gas exchange in patients with NFA and improves the survival rate. We present the case of a young female patient with NFA whose clinical condition was worsening despite invasive positive-pressure mechanical ventilation. abstract_id: PUBMED:27425166 Near-fatal asthma: treatment and prevention. Near-fatal asthma (NFA) is described as acute asthma associated with a respiratory arrest or an arterial carbon dioxide tension greater than 50 mmHg, with or without altered consciousness, requiring mechanical ventilation. Risk factors for near-fatal asthma have not been fully elucidated. In 80-85% of all fatal events, a phenotype characterized by eosinophilic inflammation associated with gradual deterioration, occurring in patients with severe and poorly controlled asthma, has been identified. Regarding management, acute severe asthma remains a significant clinical problem, which needs to be identified to facilitate early and appropriate therapeutic interventions. The assessment relies on clinical signs, but additional information might be obtained from chest radiography or blood gas analysis. No investigation should delay the initiation of appropriate therapy. The goals of therapy are the maintenance of oxygenation, relief of airflow obstruction, and reduction of airway edema and mucus plugging (with increased use of medications such as beta-agonists via metered-dose inhalers and nebulizers, oral and/or intravenous (other than by inhalation) corticosteroids, and oral or intravenous theophylline), while supporting ventilation as clinically indicated. Of course, the emergency physician needs to consider the wide range of potential complications, as attention to these problems when managing severe acute asthma might significantly improve outcome. An understanding of the available agents and potential pitfalls in the management of NFA is mandatory for the emergency physician. abstract_id: PUBMED:31007768 Fatal and near-fatal thunderstorm asthma epidemic in a desert country. Background: Reports of thunderstorm asthma in the Middle East are few. This study is the first to report on cases of near-fatal and fatal thunderstorm asthma in Kuwait on December 1, 2016. Methods: We conducted a chart review and interview with adult patients admitted to Mubarak Al-Kabir Hospital with near-fatal asthma, defined as an exacerbation requiring intubation and mechanical ventilation or hypercapnia with a PaCO2 ≥ 6 kPa. Information, including patient age, gender, occupation, asthma history, medication usage, and clinical outcome, was collected. For fatal asthma cases, patients' data were collected from the Forensic Department at the Kuwait Ministry of Interior. Results: A total of 17 patients were admitted with near-fatal asthma. Fifteen patients (93.8%) had a prior history of asthma, with an average duration of 9 years. Five patients (33.3%) reported receiving a corticosteroid inhaler from their physician. Fifteen patients (93.8%) reported relying on a short-acting β2-agonist alone to manage their asthma. Eleven patients (68.8%) reported being outdoors during the storm. Eleven patients were diagnosed with fatal asthma. Conclusion: The study is the first to report on a thunderstorm-associated asthma epidemic in Kuwait and emphasizes the potential dangers associated with this entity.
Introduction: Status asthmaticus (SA) and near-fatal asthma (NFA) are life-threatening conditions that continue to present a management challenge for physicians. Extracorporeal Membrane Oxygenation (ECMO) has been employed as a last resort in treating these patients. Case Presentation: We described six patients who were admitted to the ICU for NFA and received ECMO treatment at a high-complexity institution in Cali, Colombia, between 2015 and 2019. All patients are registered in the ELSO registry. Baseline patient characteristics, arterial blood gases (ABG), ventilatory parameters, and complications were collected as specified in the ELSO registry form. Efficacy was analyzed in terms of the improvement in respiratory acidosis, the number of ventilator-free days (VFD), and a reduction in mechanical power (MP). MP, which refers to the energy associated with the mechanical forces involved in breathing and the functioning of the respiratory system, was calculated using a mathematical formula. Safety was evaluated based on the incidence of complications. After 12 hours of ECMO, we achieved a correction of respiratory acidosis, a significant decrease in all ventilatory parameters, and a reduction in MP ranging from 52.8% to 89%. There was one mortality. Among the five surviving patients, all except one, who required a tracheostomy, had a high VFD score, with a mode of 26 days, demonstrating a reduction in ventilation time. Conclusion: Further randomized controlled trials are needed to fully understand the efficacy and safety profiles of ECMO in SA/NFA. MP is being widely used to achieve safer ventilation, and although more data is required, it appears to be a promising option for evaluating the risk of developing VILI and the success of the therapy. abstract_id: PUBMED:32227683 Extracorporeal and advanced therapies for progressive refractory near-fatal acute severe asthma in children. Asthma is the most common chronic illness and is one of the most common medical emergencies in children. Progressive refractory near-fatal asthma requiring intubation and mechanical ventilation can lead to death. Extracorporeal membrane oxygenation (ECMO) can provide adequate gas exchange during acute respiratory failure although data on outcomes in children requiring ECMO support for status asthmaticus is sparse with one study reporting survival rates of nearly 85% with asthma being one of the best outcome subsets for patients with refractory respiratory failure requiring ECMO support. We describe the current literature on the use of ECMO and other advanced extracorporeal therapies available for children with acute severe asthma. We also review other advanced invasive and noninvasive therapies in acute severe asthma both before and while on ECMO support. Answer: Yes, the incidence of near-fatal asthma appears to be decreasing in Spain. A study conducted by reviewing hospital records from six hospitals in five Spanish autonomous communities for the period from 1997 to 2004 found that the number of patients who required orotracheal intubation and mechanical ventilation due to an asthma attack was significantly lower in the periods 2001-2002 and 2003-2004 compared to the periods 1997-1998 and 1999-2000 (PUBMED:17067519). This suggests a decrease in the incidence of near-fatal asthma in Spain in recent years, although the authors note that their sample of six hospitals is not widely representative of the entire population of hospitals in Spain.
Instruction: Does transvaginal hydrolaparoscopy render standard diagnostic laparoscopy obsolete for unexplained infertility investigation? Abstracts: abstract_id: PUBMED:11134833 Does transvaginal hydrolaparoscopy render standard diagnostic laparoscopy obsolete for unexplained infertility investigation? Objective: To evaluate the feasibility of transvaginal hydrolaparoscopy in infertile patients undergoing basic infertility investigations, and to determine its usefulness in comparison with standard laparoscopy. Materials And Methods: Twenty-three women with unexplained infertility were prospectively included. Selected patients had no history of pelvic disease or previous pelvic surgery and had normal findings on gynecological examination and vaginal sonography. The patients received general anesthesia and underwent transvaginal hydrolaparoscopy immediately prior to a standard laparoscopy by a different operator. The main outcome measures were the rate of successful access to the pouch of Douglas, the duration of the procedure, and the rate of complications. In order to compare the accuracy of transvaginal hydrolaparoscopy to standard laparoscopy, findings in terms of tubal pathology, endometriosis, and adhesions were analyzed. Results: The rate of successful access to the pouch of Douglas was 95.7%. The rate of complications was 4.3%. The mean duration of the transvaginal hydrolaparoscopy procedure was 8 min. The concordance between transvaginal hydrolaparoscopy and laparoscopy for pelvic cavity examination was statistically significant (k=0.57, P=0.02). In 40.9% of cases, transvaginal hydrolaparoscopy showed a normal pelvis, confirmed by laparoscopic diagnosis. Transvaginal hydrolaparoscopic diagnosis correlated well with the laparoscopic findings in 81.8% of cases. When transvaginal hydrolaparoscopy showed pathological findings, there were no normal laparoscopies. Pathological laparoscopies were found in 18.2% of the normal transvaginal hydrolaparoscopies. Conclusion: Transvaginal hydrolaparoscopy is a reproducible and safe method to investigate the pelvis and its structures. The diagnostic accuracy of transvaginal hydrolaparoscopy suggests that more than 40% of standard laparoscopies for unexplained infertility could be avoided. abstract_id: PUBMED:11334917 Transvaginal hydrolaparoscopy but not standard laparoscopy reveals subtle endometriotic adhesions of the ovary. Objective: To determine whether transvaginal hydrolaparoscopy is superior to standard laparoscopy for detection of subtle endometriotic adhesions of the ovary. Design: Videotapes of standard laparoscopy and transvaginal hydrolaparoscopy were viewed by an independent observer in random order and in a blinded manner. Setting: Tertiary referral centers for infertility. Patient(s): Patients with minimal or mild endometriosis (n = 11) and unexplained infertility (n = 10) on standard laparoscopy, from a group of 43 patients with infertility who were undergoing both studies. Intervention(s): Transvaginal hydrolaparoscopy followed by standard laparoscopy. Main Outcome Measure(s): Detection of unexplained ovarian adhesions. Result(s): Patients with minimal and mild endometriosis and unexplained infertility had significantly more ovarian adhesions on transvaginal hydrolaparoscopy than on standard laparoscopy. The subtle adhesions seen on transvaginal hydrolaparoscopy but not on standard laparoscopy were filmy, microvascularized, and nonconnecting.
Conclusion(s): Unexplained ovarian adhesions are frequently detected on transvaginal hydrolaparoscopy but not on standard laparoscopy in infertile patients with minimal and mild endometriosis and unexplained infertility. abstract_id: PUBMED:25097685 Comparison of hysterosalpingography and transvaginal hydrolaparoscopy in patients with tubal factor infertility: a prospective cohort study. Aim: To evaluate the usefulness, diagnostic accuracy, and cost-effectiveness of transvaginal hydrolaparoscopy (THL) in infertile women with abnormal hysterosalpingogram (HSG) results without obvious pelvic pathology. Material And Methods: Thirty infertile women (age: 20-40 years) who had tubal pathology on HSG were enrolled in the study. All patients underwent THL instead of standard laparoscopy. A cost analysis was performed comparing HSG and THL methods. Results: When the HSG findings were compared against the THL chromopertubation results, the sensitivity and specificity of HSG were 85.1% and 56%, respectively. The reasons for preferring standard laparoscopy rather than THL were: failure to access the pouch of Douglas (n = 3), insufficient visualization of the pelvis (n = 1), hydrosalpinx (n = 1), and intense peritubal adhesion (n = 1), corresponding to 10%, 3%, 3%, and 3%, respectively. The complication rate was 3.8%. Cost analysis of the procedures showed that the total cost of the THL group was 34.8% lower than the HSG group. Conclusions: In this selected patient group, THL is more feasible than HSG. Transvaginal hydrolaparoscopy is effective, simple and safe, avoiding the cost, possible complications, time and postoperative patient discomfort compared to conventional laparoscopy. abstract_id: PUBMED:12225307 Transvaginal hydrolaparoscopy: a new diagnostic tool in infertility investigation. Background: To establish the value of a new technique called transvaginal hydrolaparoscopy for exploration of the tubo-ovarian structures in the management of patients with unexplained infertility. Methods: A total of 120 women with primary or secondary infertility, and without obvious pelvic pathology, were included in the study. The first 14 patients received general anesthesia and underwent transvaginal laparoscopy immediately before standard laparoscopy. The remaining 106 patients had transvaginal laparoscopy performed under local anesthesia, with standard laparoscopy performed only on those with pathology that required operative intervention. Information on pathology, complications, and successful access to the pouch of Douglas was recorded. Results: The success rate of access was 93%. Pathology was found in 29 patients, of whom 15 required operative laparoscopy. Also, out of 19 patients with adhesions, 11 (58%) had a normal bilateral dye test, and only two (11%) had bilateral occlusion of the tubes. No complications were observed. Conclusion: Transvaginal hydrolaparoscopy is a safe and well-tolerated method for investigating the tubo-ovarian structures in unexplained infertility. It is superior to hysterosalpingography for diagnosis of adhesions. abstract_id: PUBMED:9512237 Transvaginal hydrolaparoscopy as an outpatient procedure for infertility investigation. A new technique called transvaginal hydrolaparoscopy is described for the exploration of the tubo-ovarian structures in infertile patients without obvious pelvic pathology. It aims to be an acceptable alternative to diagnostic laparoscopy, a standard but not innocuous procedure which infrequently reveals pathology in the asymptomatic patient.
Transvaginal hydrolaparoscopy is performed under local anaesthesia using a small-diameter optic with the patient in the dorsal position. Cavity distension is achieved with normal saline. Transvaginal hydrolaparoscopy does not provide the familiar and panoramic view of the pelvis given by laparoscopy, but it does have several advantages. These include accurate and atraumatic inspection of adnexal structures without manipulation, with the opportunity to perform dye hydrotubation and salpingoscopy. The risks of a general anaesthetic are avoided, and there is less risk of trauma to major vessels. The high patient acceptability makes transvaginal hydrolaparoscopy suitable as an early-stage procedure in the investigation of infertility and as a repeat or second-look procedure. Minor operative procedures such as biopsy and adhesiolysis can also be performed. In patients with obvious pelvic pathology, diagnostic laparoscopy will obviously remain the procedure of choice. Transvaginal hydrolaparoscopy deserves full evaluation of its accuracy, risks and benefits before it can be accepted as a new first-line technique in gynaecological practice. abstract_id: PUBMED:10360929 Diagnostic accuracy of transvaginal hydrolaparoscopy in infertility. Objective: To determine whether the accuracy of transvaginal hydrolaparoscopy is comparable to that of standard laparoscopy for the diagnosis of infertility. Design: Prospective, comparative study. Setting: General hospital in Belgium. Patient(s): Ten infertile patients without obvious pelvic pathology. Intervention(s): Two gynecologists independently performed transvaginal hydrolaparoscopy and standard laparoscopy and reported the observations in a confidential manner to a third person. Main Outcome Measure(s): Endometriosis, adhesions, and other pelvic pathology related to infertility. Result(s): Minimal or mild endometriosis was diagnosed in seven patients and sequelae of pelvic inflammatory disease in one patient at both procedures. The interobserver agreement for tuboovarian adhesions was 95% at transvaginal hydrolaparoscopy and 74% at standard laparoscopy. Ovarian adhesions were detected in 63% at transvaginal hydrolaparoscopy and in 37% at standard laparoscopy. Conclusion(s): Transvaginal hydrolaparoscopy is comparable in accuracy to standard laparoscopy for the diagnosis of adhesions and endometriosis in infertile patients without obvious pathology. abstract_id: PUBMED:12101342 Transvaginal hydrolaparoscopy. Transvaginal hydrolaparoscopy (THL) correlates well with standard laparoscopy for evaluation of the cul-de-sac in women with unexplained infertility. Office THL, chromopertubation, and hysteroscopy appear well tolerated, safe, and suitable for most patients, and may be more cost-effective than hysterosalpingogram in the infertility work-up. Gynecologic and operative THLs are applicable for selected indications. abstract_id: PUBMED:12216426 Transvaginal hydrolaparoscopy vs. conventional laparoscopy for evaluating unexplained primary infertility in women. Objective: To verify the usefulness and reliability of transvaginal hydrolaparoscopy (THL) as a screening tool for evaluating infertility in women in comparison with conventional diagnostic laparoscopy. Study Design: Sixty consecutive women with unexplained primary infertility were prospectively enrolled into the study. After examination of the whole pelvic cavity, tubal patency was evaluated and the uterine cavity studied by hysteroscopy.
Immediately after THL, conventional laparoscopy was performed. Main outcome measures were the success rate of accessing the pouch of Douglas, rate of complete examinations, rate of complications and accuracy of THL in comparison with laparoscopy. Results: Success rate of accessing the pouch of Douglas and performing THL was 93.3%. The rate of complete evaluation of all the pelvic structures was 76.8%. In studying tubal pathology, 77.8% agreement was found between the two techniques. Diagnosis of endometriosis was correct in 55.5% of patients. Overall, THL results correlated closely with conventional laparoscopic results in 92.86% of cases, but the diagnostic accuracy of THL was 100% in cases of complete pelvic evaluation. Conclusion: THL is a feasible, reliable and safe procedure and can be considered an alternative for evaluating infertility in women. In cases of incomplete pelvic evaluation or abnormal findings, conventional laparoscopy is indicated as the second step in the evaluation. abstract_id: PUBMED:22674201 Rationale of first-line endoscopy-based fertility exploration using transvaginal hydrolaparoscopy and minihysteroscopy. The transvaginal access for exploration of tubo-ovarian function in women with unexplained infertility has been revived since transvaginal hydrolaparoscopy (THL) was introduced in 1998. One prospective double-blind trial and several reviews have validated the diagnostic value of THL in comparison with laparoscopy for the exploration of women with unexplained infertility. A review of the recent literature confirms the efficacy and safety of the technique for first-line endoscopy-based exploration of fertility. The standard policy of 1-year delay for laparoscopic investigation in unexplained infertility is challenged. In older women and particularly in women experienced in fertility awareness methods, THL and minihysteroscopy can be performed after a waiting period of 6-12 months. abstract_id: PUBMED:29699164 Application of transvaginal hydrolaparoscopy for ovarian drilling using Nd:YAG laser in infertile women with polycystic ovary syndrome. Since transvaginal hydrolaparoscopy (THL) was introduced as the first-line procedure in the early stage of the exploration of the tubo-ovarian structures in infertile women, it has been shown that THL is a less traumatic and a more suitable outpatient procedure than diagnostic laparoscopy. In the present study, a minimally invasive surgery was carried out in infertile women with polycystic ovary syndrome (PCOS) by THL. Ovarian drilling using Nd:YAG laser vaporization by THL was performed in two clomiphene citrate-resistant infertile women with PCOS. After ovarian drilling with THL, a patient recovered an ovulatory cycle. These findings suggest that ovarian drilling by THL seems to be a safe procedure in infertile women with PCOS. However, further investigations are required to evaluate the effectiveness and risks of this minimally invasive operation. (Reprod Med Biol 2003; 2: 37-40). Answer: Transvaginal hydrolaparoscopy (THL) has been evaluated as a potential alternative to standard diagnostic laparoscopy for the investigation of unexplained infertility. The studies suggest that THL is a reproducible and safe method that could potentially reduce the need for standard laparoscopy in certain cases. The success rate of access to the pouch of Douglas was high, and the procedure demonstrated a low rate of complications (PUBMED:11134833).
Moreover, THL has been shown to be superior to standard laparoscopy in detecting subtle endometriotic adhesions of the ovary (PUBMED:11334917). In terms of diagnostic accuracy, THL correlates well with standard laparoscopy for the evaluation of the pelvic cavity and tubo-ovarian structures (PUBMED:12225307). It has been found to be comparable in accuracy to standard laparoscopy for the diagnosis of adhesions and endometriosis in infertile patients without obvious pathology (PUBMED:10360929). Additionally, THL has been reported to be more cost-effective than hysterosalpingography (HSG) in the infertility work-up and is well tolerated by patients (PUBMED:12101342). However, while THL shows promise, it may not completely replace standard laparoscopy in all cases. For instance, in cases of incomplete pelvic evaluation or abnormal findings during THL, conventional laparoscopy is indicated as the second step in the evaluation (PUBMED:12216426). Furthermore, THL is particularly useful in patients without obvious pelvic pathology and can be an early stage procedure in the investigation of infertility (PUBMED:9512237). In conclusion, while THL offers several advantages and could reduce the number of standard laparoscopies needed for unexplained infertility, it does not render standard diagnostic laparoscopy obsolete. Standard laparoscopy remains an important tool, especially in cases where THL is inconclusive or when pelvic pathology is suspected (PUBMED:11134833, PUBMED:12216426).
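Several of the figures quoted in this record are standard 2x2-table statistics: the concordance kappa of 0.57 between THL and laparoscopy (PUBMED:11134833) and the 85.1% sensitivity and 56% specificity of HSG against THL chromopertubation (PUBMED:25097685). The sketch below shows how such values are derived; the 2x2 counts are hypothetical, since the abstracts report only summary percentages.

```python
# Minimal sketch of the 2x2-table statistics quoted above. All counts are
# illustrative: the abstracts report only summary figures (kappa = 0.57,
# HSG sensitivity 85.1% / specificity 56%), not the underlying tables.

def sensitivity_specificity(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity and specificity of a test against a reference standard."""
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    """Cohen's kappa for a 2x2 agreement table [[a, b], [c, d]] of two raters."""
    n = a + b + c + d
    p_observed = (a + d) / n
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical counts chosen to land near the quoted HSG figures:
sens, spec = sensitivity_specificity(tp=23, fp=11, fn=4, tn=14)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # ~85.2% / 56.0%
print(f"kappa {cohens_kappa(15, 3, 4, 22):.2f}")          # illustrative rater pair
```

A kappa in the 0.5-0.6 range, as in the first abstract, is conventionally read as moderate agreement beyond chance, which matches the answer's conclusion that THL tracks laparoscopy well without replacing it outright.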
Instruction: Improving outcomes on the ABS Certifying Examination: can monthly mock orals do it? Abstracts: abstract_id: PUBMED:26686905 The utility of mock oral examinations in preparation for the American Board of Surgery certifying examination. Background: Determine the utility of mock oral examinations in preparation for the American Board of Surgery certifying examination (ABS CE). Methods: Between 2002 and 2012, blinded data were collected on 63 general surgery residents: 4th and 5th-year mock oral examination scores, first-time pass rates on ABS CE, and an online survey. Results: Fifty-seven residents took the 4th-year mock oral examination: 30 (52.6%) passed and 27 (47.4%) failed, with first-time ABS CE pass rates 93.3% and 81.5% (P = .238). Fifty-nine residents took the 5th-year mock oral examination: 28 (47.5%) passed and 31 (52.5%) failed, with first-time ABS CE pass rates 82.1% and 93.5% (P = .240). Thirty-eight responded to the online survey; 77.1% ranked mock oral examinations as very or extremely helpful with ABS CE preparation. Conclusions: Although mock oral examinations and ABS CE passing rates do not directly correlate, residents perceive the mock oral examinations to be helpful. abstract_id: PUBMED:27663083 Sequential Participation in a Multi-Institutional Mock Oral Examination Is Associated With Improved American Board of Surgery Certifying Examination First-Time Pass Rate. Objective: We sought to determine whether sequential participation in a multi-institutional mock oral examination affected the likelihood of passing the American Board of Surgery Certifying Examination (ABSCE) on the first attempt. Design: Residents from 3 academic medical centers were able to participate in a regional mock oral examination in the fall and spring of their fourth and fifth postgraduate year from 2011 to 2014. Each candidate's highest composite score across all mock oral attempts was classified as at risk for failure, intermediate, or likely to pass. Factors including United States Medical Licensing Examination steps 1, 2, and 3, number of cases logged, American Board of Surgery In-Training Examination performance, American Board of Surgery Qualifying Examination (ABSQE) performance, number of attempts, and performance in the mock orals were assessed to determine factors predictive of passing the ABSCE. Results: A total of 128 mock oral examinations were administered to 88 (71%) of 124 eligible residents. The overall first-time pass rate for the ABSCE was 82%. There was no difference in pass rates between participants and nonparticipants. Of them, 16 (18%) residents were classified as at risk, 47 (53%) as intermediate, and 25 (29%) as likely to pass. ABSCE pass rate for each group was as follows: 36% for at risk, 84% for intermediate, and 96% for likely pass. The following 4 factors were associated with first-time passing of ABSCE on bivariate analysis: mock orals participation in postgraduate year 4 (p = 0.05), sequential participation in mock orals (p = 0.03), ABSQE performance (p = 0.01), and best performance on mock orals (p = 0.001). In multivariable logistic regression, the following 3 factors remained associated with ABSCE passing: ABSQE performance, odds ratio (OR) = 2.9 (95% CI: 1.3-6.1); mock orals best performance, OR = 1.7 (1.2-2.4); and participation in multiple mock oral examinations, OR = 1.4 (1.1-2.7). Conclusions: Performance on a multi-institutional mock oral examination can identify residents at risk for failure of the ABSCE.
Sequential participation in mock oral examinations is associated with improved ABSCE first-time pass rate. abstract_id: PUBMED:26774933 Attributes of Candidates Passing the ABS Certifying Examination on the First Attempt: Program Directors' Perspective. Background: The American Board of Surgery Certifying Examination (CE) is a pivotal event in a surgeon's career development, as it is the last challenge before achieving Board certification. First-time pass rate on the CE is one of the key metrics of surgery residency programs. The overall pass rate on the CE has declined significantly in recent years. Objectives: The goal of this study was the identification of attributes of general surgery residents that are associated with passing the CE at the first attempt. Methods: The modified Delphi process was used to survey general surgery program directors. The study was conducted in 2 rounds in the interest of the time available for the surgical education research fellowship project. Results: All 259 program directors were contacted in each round of surveys. In all, 49 (19%) responded to the first round and 54 (21%) responded to the second round of the survey. Conclusions: The characteristics of a successful resident on the CE include confidence, self-motivation, a sound knowledge base, strong performance on the Board's training examination (American Board of Surgery In-Training Examination) and mock orals, and good communication skills. Postgraduate years 4 and 5 are the most likely resident levels at which failure could be predicted. abstract_id: PUBMED:34173545 Multi-institutional collaborative mock oral (mICMO) examination for cardiothoracic surgery trainees: Results from the pilot experience. Objective: The American Board of Thoracic Surgery certifying examination is challenging for applicants. Single institutions have reported good results with a mock oral examination (MOE) for trainees. The general surgery literature has demonstrated success with in-person multi-institutional MOEs. Due to small numbers of cardiothoracic training programs and significant geographic variability, we hypothesized that a multi-institutional, collaborative, remotely administered MOE (mICMO) pairing faculty with trainees from different institutions would provide an important educational experience. Methods: mICMOs were conducted via the Zoom virtual platform across 6 institutions. Descriptive results via post-experience survey were analyzed and reported. Results: In total, 100% of trainees found mICMO useful. The majority would recommend it to a peer, and 100% of faculty examiners found mICMO useful and would participate in another examination. Conclusions: Faculty and trainees found the experience to be effective with respect to creating a high-stakes environment, educationally beneficial, and productive. These results support the continued use of mICMO and encourage expansion and collaboration with additional institutions across the country.
This study, endorsed by the Association for Program Directors in Surgery (APDS) Research Committee, aimed to better understand the national scope of current practices for general surgery MOEs. Methods: General surgery residency program directors (PDs) were invited via the APDS listserv to complete a 27-item survey about their perceptions of the importance and correlates of MOEs, how their exams are structured, implementation barriers, and recent revisions to their MOE program. Results: Of 98 PDs responding to the survey, 94% (n = 92) responded about the characteristics of their formal MOE programs. The majority required upper-level resident participation and held the exams 2 to 3 times annually; far fewer involved lower-level residents. Most programs structure their MOEs to mimic the CE format with 3 exam rooms (76%), using premade questions (66%), presenting 4 scenarios per room (59%), and using two examiners per room (85%). Most PDs (88%) believed MOEs were very important or essential for surgery trainees, which correlated with their ratings of how important MOEs are to their Clinical Competency Committee for determining resident advancement (r = 0.32, p < 0.002). Common barriers for implementing MOEs were availability of examiners and scenarios. About half indicated making recent or ongoing revisions to improve their MOEs. Many PDs indicated interest in collaborating regionally or nationally on MOE initiatives. Conclusions: MOEs were largely regarded as a highly valuable tool by PDs to prepare trainees for the general surgery CE. The majority of programs in this study provide a testing experience as similar to the CE as possible, although some variability in the structure of MOEs was identified. PDs also reported significant implementation barriers and a desire for more MOE collaboration. abstract_id: PUBMED:19059175 Improving outcomes on the ABS Certifying Examination: can monthly mock orals do it? Purpose: First-time pass rates on the American Board of Surgery Certifying Examination (ABSCE) have now become one of the standards of excellence to evaluate residency programs. Our residency program started monthly simulated and critiqued (verbal, written, and video) oral examinations (MSCE) in 2003. The current study explores the outcomes of this intervention. Methods: We evaluated ABSCE performance of 48 residents who graduated from a large academic/community program between the years 2001 and 2006 through a prospective study with historical controls. Residents were divided into 2 groups: the intervention group comprised the 2003 to 2006 classes, which underwent MSCE; the historical control group spanned the 2001 and 2002 classes, which did not undergo MSCE. Results in the ABSCE were compared between groups using the Fisher exact test. In addition, the intervention group was queried in relation to the most important aspects of the MSCE as a learning experience through a structured questionnaire. Results: A statistically significant improvement (p = 0.038) in ABSCE first-time pass rates was noted in the intervention group. Examinees unanimously asserted they had been helped by the MSCE. Improvements in clinical reasoning and promotion of self-study were the most often cited benefits of the MSCE. Conclusion: Monthly simulated and critiqued oral examinations improved the first-time pass rate of the American Board of Surgery Certifying Examination. Additional perceived benefits of this intervention included improvements in clinical reasoning and promotion of self-study.
abstract_id: PUBMED:27664877 Evaluating the effectiveness of a mock oral educational program. Background: To obtain board certification, the American Board of Surgery requires graduates of general surgery training programs to pass both the written qualifying examination (QE) and the oral certifying examination (CE). In 2015, the pass rates for the QE and CE were 80% and 77%, respectively. In the 2011-2012 academic year, the University of Wisconsin instituted a mandatory, faculty-led, monthly CE preparation educational program (CE prep) as a supplement to their existing annual mock oral examination. We hypothesized that the implementation of these sessions would improve the first-time pass rate for residents taking the ABS CE at our institution. Secondary outcomes studied were QE pass rate, correlation with American Board of Surgery In-Training Examination (ABSITE) and mock oral examination scores, cost and type of study materials used, perception of examination difficulty, and applicant preparedness. Methods: A sixteen-question survey was sent to 57 of 59 residents who attended the University of Wisconsin between the years of 2007 and 2015. Email addresses for two former residents could not be located. De-identified data for the ABSITE and first-time pass rates for the QE and CE examination were retrospectively collected and analyzed along with survey results. Statistical analysis was performed using SPSS version 22 (IBM Corp., Armonk, NY). P values < 0.05 were considered significant. Results: Survey response rate was 77.2%. Of the residents who had attempted the CE, the first-time pass rate was 76.0% (19 of 25) before the implementation of the formal CE prep and 100% (22 of 22) after (P = 0.025). Absolute ABSITE scores and mock oral annual examination grades were significantly improved after the CE prep was initiated (P values < 0.001 and 0.003, respectively); however, ABSITE percentile was not significantly different (P = 0.415). ABSITE raw score and percentile, as well as mock oral annual examination scores, were significantly associated with passing the QE (P values 0.032, 0.027, and 0.020, respectively), whereas mock oral annual examination scores alone were associated with passing the CE (P = 0.001). Survey results showed that residents perceived the CE to be easier than the annual mock oral after the institution of the CE prep course (P = 0.036); however, there was no difference in their perception of preparedness. Overall, applicants felt extremely prepared for the CE (4.70 ± 0.5, Likert scale 1-5). Conclusions: Formal educational programs instituted during residency can improve resident performance on the ABS certifying examination. The institution of a formal, faculty-led monthly CE preparation educational program at the University of Wisconsin has significantly improved the first-time pass rate for the ABS CE. Mock oral annual examination scores were also significantly improved. Furthermore, ABSITE scores correlate with QE pass rates, and mock oral annual examination scores correlate with pass rates for both QE and CE.
Setting: General surgery residency programs and national surgical conferences. Participants: General surgery residents, program directors, and faculty. Results: System users had very positive attitudes toward the online system in terms of usability and reporting functions. Conclusions: The mock oral exam management online system is a useful tool that eases the burden of managing a mock oral exam session. abstract_id: PUBMED:32994158 Virtual Surgery Oral Board Examinations in the Era of COVID-19 Pandemic. How I Do It! Introduction: Traditional in-person Mock Oral Examinations (IP-MOEs) are utilized by surgery residency programs to prepare trainees for the American Board of Surgery Certifying Exam (ABS-CE). However, the COVID-19 pandemic has led to a profound disruption of on-campus and in-person educational activities, with subsequent instantaneous revolutionization of educational systems all over the world, including a massive switch to virtual platforms. Many in-person didactics and examinations were canceled or rescheduled, including the ABS-CE. The study aims to evaluate the feasibility of Virtual MOEs (V-MOEs) as a potential alternative to in-person MOEs in residency programs. Methods: Twenty-five participants (16 general surgery residents [7 female, 9 male] and 9 faculty) in the inaugural Department of Surgery Virtual Mock Oral Examination completed an anonymous, voluntary online survey via Microsoft Forms. Faculty were given 24 questions, and residents 28, with 9 questions common to both residents and faculty. Participants were asked about the accessibility of virtual examination rooms, V-MOE effectiveness, residents' preparation for the exam, residents' stress, the diversity and number of clinical scenarios, and possible future implementation of, and barriers to, V-MOEs. Results: All participants had participated in IP-MOEs in the past. All faculty were very satisfied or satisfied with the IP-MOE, compared to 93.8% of residents. All participants were very satisfied or satisfied with the orientation and instructions before the V-MOE. Only 66.6% of faculty, compared to all residents, were satisfied with the time allocation for sessions. While 88.9% of faculty felt the V-MOE was less stressful on residents, only 68.8% of residents felt so. Additionally, 87.5% of residents said they prepared for the V-MOE similarly to the IP-MOE. As a future platform, only 22.2% of faculty compared to 43.8% of residents preferred the V-MOE over the IP-MOE. Both faculty (88.9%) and residents (81.3%) preferred immediate feedback at the end of sessions. All faculty recommended collaboration with other programs to enhance residents' preparation. Time constraints, lack of experience with the format, and availability were the top 3 barriers. Conclusion: The V-MOE is feasible, accessible, and a potential alternative to IP-MOEs at a program level for ABS-CE preparation. Given the time constraints and costs associated with IP-MOEs, it is an opportunity to collaborate with other residency programs.
Objective: To assess the value and participant satisfaction of a standardized multi-institutional MOE for general surgery residents. Participants: Thirty-three general surgery residents and 37 faculty members from 3 institutions participated in a regional MOE. Residents were examined in three 20-minute sessions. Faculty examiners were given a wide selection of prescripted exam scenarios and instructed to use standardized grading rubrics during a brief orientation on the day of the exam. All participants were surveyed on their overall experience. Results: Of 33 participating residents, 26 (79%) passed the MOE (92% of R5, 91% of R4, and 50% of R3). Response rates were 91% for residents and 57% for faculty members, respectively. Most respondents were satisfied with the overall exam experience (88%), standardized question quality (86%) and question variety (82%). A total of 92% of respondents agreed that the time, effort, and cost of the MOE was justified by its educational value to residents. Higher medical knowledge ratings assigned by faculty examiners correlated with stronger trainee performance (β = 0.48; 95% confidence interval [CI] 0.29-0.66), while patient care and interpersonal communication skill ratings were not associated with trainee performance. The standardized grading rubric achieved moderate inter-rater reliability among examiner pairs with 70.6% agreement (Kappa 0.47). Conclusions: General surgery residents and faculty perceived the standardized multi-institutional MOE to be a highly satisfactory educational experience and valuable assessment tool. Developing a repertoire of scripted exam scenarios made it feasible to recruit sufficient faculty participants, and standardizing grading rubrics allowed for a consistent exam experience with moderate inter-rater reliability. Answer: Yes, improving outcomes on the ABS Certifying Examination can be achieved through monthly mock orals. A study that evaluated the performance of 48 residents who graduated from a large academic/community program between the years 2001 and 2006 found a statistically significant improvement in ABS Certifying Examination first-time pass rates in the group that underwent monthly simulated and critiqued oral examinations (MSCE). The examinees unanimously asserted they had been helped by the MSCE, noting improvements in clinical reasoning and promotion of self-study as the most significant benefits of the MSCE (PUBMED:19059175).
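The pass-rate comparisons in this record rest on small-sample contingency tests such as the Fisher exact test. As a check, the first comparison from PUBMED:26686905 can be recomputed from counts reconstructed out of the reported percentages (93.3% of 30 gives 28/30; 81.5% of 27 gives 22/27); treat the counts as approximate reconstructions, although they do reproduce the reported P = .238.

```python
# Recomputing the 4th-year mock oral comparison (PUBMED:26686905) from
# counts reconstructed out of the reported percentages; approximate, but
# the two-sided p-value matches the reported P = .238.
from scipy.stats import fisher_exact

table = [[28, 2],   # passed mock oral: CE pass / CE fail
         [22, 5]]   # failed mock oral: CE pass / CE fail
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio {odds_ratio:.2f}, two-sided p = {p_value:.3f}")  # ~3.18, 0.238
```

With only seven CE failures across 57 residents, the test has little power, which is one reason a single-program study like this reports no direct correlation while the pooled multi-institutional data (PUBMED:27663083) do.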
Instruction: Non-obstructive prosthetic heart valve thrombosis (NOPVT): Really a benign entity? Abstracts: abstract_id: PUBMED:23994405 Treatment of obstructive thrombosed prosthetic heart valve. Obstructive thrombosed prosthetic heart valve (OTPHV) is a serious complication of heart valve replacement. There are no generally accepted criteria for management of these patients. Therefore, in September 2012, a literature survey of studies published after 1995 was performed to analyze the data regarding clinical outcomes of patients with OTPHV treated with thrombolytic agents and with surgery since 1996. The search yielded appropriate and relevant studies, which included 17 studies comprising 756 patients who had received thrombolytic therapy and 13 studies comprising 662 patients who had received surgery. The data on these 2 groups were analyzed in detail with respect to the frequency of use of diagnostic studies, baseline patient data, the rate of complete success, the outcomes and complications of the therapy received, and the limitations of the studies. We then developed a strategy for therapy of OTPHV. abstract_id: PUBMED:26113471 Non-obstructive prosthetic heart valve thrombosis (NOPVT): Really a benign entity? Aims: To assess the effectiveness of different treatment strategies in patients with non-obstructive prosthetic valve thrombosis (NOPVT) during hospitalization and long-term follow-up. Methods: NOPVT was diagnosed by transesophageal echocardiography. Resolution was defined as the disappearance or reduction of the thrombus under anticoagulation. All cases were first managed with optimization of anticoagulation. At discharge, patients received oral anticoagulation (OAC) alone or OAC and antiplatelet therapy (double treatment). Adverse events were defined as cardiovascular death, recurrence, thromboembolic events or major bleeding. Results: From 1997 to 2012, 47 patients (mean age: 65 years; women: 60%) were diagnosed with NOPVT (mitral valve: 97%). Previous poor anticoagulation control was documented in 66% of patients. Twenty-one patients (45%) were treated with unfractionated heparin (UFH), especially those with thrombus size >10 mm (19/21). Optimization of OAC was performed in the remaining patients. Treatment failed in 13 (27.6%) patients, mostly in those who received UFH (10/13), requiring surgery (53.8%) or fibrinolysis (30.7%). Forty-two patients survived and, at discharge, 44% of patients received OAC alone and 56% the double treatment. At follow-up (median 23 months; range 0.03-116 months), 59.5% of patients presented with cardiovascular events; however, no differences in outcome were observed with double treatment or OAC alone (p=0.385). Conclusions: NOPVT is a high-risk complication, not only during hospitalization but also during follow-up. Optimization of anticoagulation is efficient in most patients except in thrombi ≥10 mm treated with UFH. The double treatment does not prevent adverse events or complications during follow-up.
CT angiography also provides incremental diagnostic information about patients with suspected PHV obstruction and endocarditis, in which case it can demonstrate the anatomic substrate and extent of involvement. Additionally, information regarding the coronary arteries, cardiac dimensions, and retrosternal space may be obtained in cases in which repeat surgery is planned. This imaging essay describes the imaging spectrum of valvular and paravalvular complications of PHV at CT angiography and how the knowledge regarding the spectrum of complications can be incorporated into multimodality imaging for guiding clinical management. Keywords: Prosthetic Heart Valve Dysfunction, Prosthetic Heart Valve Thrombosis, Pannus, Paravalvular Leak, CT Angiography, Cardiac, Valves. abstract_id: PUBMED:34316860 Timely Management of Obstructive Prosthetic Valve Thrombosis. Obstructive bioprosthetic valve thrombosis is associated with hemodynamic compromise, and evidence on management with fibrinolysis is limited. Echocardiography is required to assess thrombus size and its effects on valve gradients, area, and leaflet motion. This case demonstrates the use of echocardiography-guided slow-infusion low-dose fibrinolytic therapy in a patient with obstructive bioprosthetic valve thrombosis. abstract_id: PUBMED:30883907 Prosthetic aortic valve thrombosis: To fibrinolyse or not to fibrinolyse? That is the question! Prosthetic heart valve (PHV) dysfunction is a rare but serious complication whose optimal management may be challenging and requires a multidisciplinary approach. Treatment success ultimately depends on determining the underlying mechanism of valve dysfunction by echocardiography. However, being able to establish the main etiology is not always straightforward. We present a difficult case of obstructive PHV dysfunction and discuss clinical and echocardiographic parameters to help differentiate thrombus from pannus formation. abstract_id: PUBMED:33950064 Prosthetic heart valve thrombosis in pregnancy: a case series on acute management. Rheumatic heart disease is one of the leading causes of valve dysfunction, resulting in prosthetic valve implantation. Changes in the physiology and haemodynamics of pregnancy increase the susceptibility of the prosthetic valve to thrombosis in the pregnant woman. Valve redo surgery carries a considerable risk of maternal and perinatal morbidity and mortality. Women of reproductive age should be well counselled regarding compliance with anticoagulation, contraception and pre-pregnancy planning. abstract_id: PUBMED:32880248 Transcatheter mechanical manipulation of obstructed prosthetic mitral valve in an infant. Prosthetic valve thrombosis is a serious complication of prosthetic heart valves that typically requires either surgical intervention or systemic thrombolysis. In patients with contraindications to both treatment modalities, options can be limited. We describe an alternative approach to managing prosthetic valve thrombosis in an infant presenting in extremis with pulmonary haemorrhage. Using transoesophageal echocardiography and fluoroscopic guidance, we restored function to the infant's obstructed St. Jude prosthetic mitral valve through percutaneous transcatheter manipulation of the valve's leaflets. abstract_id: PUBMED:24062899 Prosthetic heart valve obstruction: thrombolysis or surgical treatment?
Prosthetic valve thrombosis is a potentially life-threatening complication associated with high morbidity and mortality. Transthoracic and transoesophageal echocardiography play an important role in the diagnosis and provide incremental information about the optimal treatment strategy, while fluoroscopy and cardiac computed tomography may be of added value. Guidelines differ on whether surgical treatment or fibrinolysis should be the treatment of choice for the management of left-sided prosthetic valve thrombosis, and these uncertainties underline the need for further prospective randomized controlled trials. Thrombus size, the New York Heart Association functional class of the patient, the possible contraindications, the availability of each therapeutic option and the clinician's experience are important determinants for the management of prosthetic valve thrombosis. abstract_id: PUBMED:28050429 Redo Mitral Valve Replacement for Prosthetic Valve Thrombosis: Single Center Experience. Introduction: Prosthetic Valve Thrombosis (PVT) remains a significant cause of morbidity and mortality after valve replacement. Although surgical treatment is usually preferred as life-saving in cases of obstructive PVT, optimal treatment is yet to be decided. Aim: To evaluate risk factors and outcome of the patients undergoing redo mitral valve surgery for acute PVT. Materials And Methods: Between January 2012 and February 2015, 65 patients underwent redo surgery for obstructive PVT of the mitral valve in the Department of Cardiothoracic Surgery, UN Mehta Institute of Cardiology & Research Centre. Patients having acute PVT of the aortic valve or combined aortic and mitral valves were excluded. Pre-operative, intra-operative and post-operative factors affecting the outcome as well as follow-up data were measured. Results: There were a total of 65 patients: 17 males and 48 females. The most common presenting symptom was dyspnea (100%), followed by palpitation (57.88%) and fatigue (29.45%). Total mortality was 29.2% (19/65). The mortality rate was significantly higher (16/35, 46%) in patients with direct surgery (Group-2) as compared to those with failed thrombolysis (3/30, 10%) (Group-1). Mortality was also significantly higher in patients who presented in New York Heart Association (NYHA) class III/IV as compared to those who presented in NYHA class I/II (p=0.02). All surviving patients were in NYHA class I-II at follow-up, with a mean follow-up period of 24 ± 9 months. Conclusion: PVT still remains a challenging problem in the post-operative patient, with a high mortality independent of treatment modality. Though life-saving, the surgical management of this condition still carries a high risk in haemodynamically unstable patients and in NYHA class III/IV patients. abstract_id: PUBMED:30947924 ACC/AHA Versus ESC Guidelines on Prosthetic Heart Valve Management: JACC Guideline Comparison. Prosthetic heart valve interventions continue to evolve with new innovations in surgical and transcatheter technologies. We compared the recommendations from the 2017 American College of Cardiology/American Heart Association guidelines for management of patients with prosthetic heart valves with the 2017 European Society of Cardiology guidelines. The 2 documents differed regarding recommendations for follow-up imaging, the choice of biological versus mechanical prosthesis, bridging therapies, the role of aspirin, the use of fibrinolytic therapy for prosthetic valve thrombosis, and the management of paravalvular regurgitation.
This review highlights the differences between the 2 guidelines, summarizes new evidence, and offers recommendations for the management of patients with prosthetic heart valves in these areas of controversy. Answer: Non-obstructive prosthetic heart valve thrombosis (NOPVT) is not considered a benign entity. According to a study that assessed the effectiveness of different treatment strategies in patients with NOPVT, it was found that NOPVT is a high-risk complication, not only during hospitalization but also during follow-up (PUBMED:26113471). The study, which included 47 patients diagnosed with NOPVT, found that optimization of anticoagulation was efficient in most patients except in those with thrombi of 10 mm or larger treated with unfractionated heparin (UFH). Treatment failed in 27.6% of patients, requiring surgery or fibrinolysis. At follow-up, a significant 59.5% of patients experienced cardiovascular events, and no differences in outcome were observed with double treatment (oral anticoagulation and antiplatelet therapy) or oral anticoagulation alone. This indicates that NOPVT can lead to serious complications and adverse events, challenging the notion that it might be a benign condition.
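The mortality contrast in the redo-surgery series above (16/35, 46%, after direct surgery versus 3/30, 10%, after failed thrombolysis; PUBMED:28050429) is the kind of comparison usually summarized as a relative risk with a confidence interval. The sketch below uses the textbook log-scale Wald interval; the counts come from the abstract, but the method is a standard default rather than necessarily the analysis the authors performed.

```python
# Relative risk with a 95% CI on the log scale, applied to the redo-MVR
# mortality counts quoted above (16/35 vs 3/30). Standard Wald method;
# not necessarily the analysis the original authors performed.
import math

def risk_ratio_ci(a: int, n1: int, c: int, n2: int, z: float = 1.96):
    """RR of group 1 vs group 2 with a Wald CI computed on the log scale."""
    rr = (a / n1) / (c / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

rr, lo, hi = risk_ratio_ci(a=16, n1=35, c=3, n2=30)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~4.57 (1.47-14.19)
```

The interval excludes 1, consistent with the abstract's description of mortality as significantly higher after direct surgery.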
Instruction: The prevalence of transitional object use in adolescence: is there a connection between the existence of a transitional object and depressive symptoms? Abstracts: abstract_id: PUBMED:19221854 The prevalence of transitional object use in adolescence: is there a connection between the existence of a transitional object and depressive symptoms? Objective: The purpose of the study was to investigate the prevalence of the use of a transitional object (TO) in adolescence and its connection with depressive symptoms and mental distress in youth by gender. Method: The study group consisted of 1,054 adolescents (465 boys, 589 girls) from normal comprehensive schools in Turku, a Finnish town with approximately 175,000 inhabitants. The mean age of both gender groups was 14.5 years (SD 0.5); respondents came from all social classes. Background and TO information was collected with questionnaires. Depressive symptoms and mental distress were explored by the Children's Depression Inventory (CDI) and its subscales: low self-confidence, anhedonia and sadness. Results: Of all respondents, 29% (n = 285) had a TO: 37% of girls and 18% of boys. The difference between genders was statistically significant. There was also a statistically significant gender difference in the character of the TO reported: 72% of girls with a TO had a soft toy, whereas 49% of boys with a TO used hard objects, such as pens, hard toys or books. TO-users showed more depressive symptoms than non-users, but the difference was not statistically significant (P = 0.053). Significant differences were found in the CDI subscales: TO-users had more sadness than non-users, and girls using a TO showed more sadness than non-using girls. TO-using boys did not differ from non-using boys with regard to sadness or the other CDI subscales. Conclusions: The use of a TO seems to be common in adolescence. Adolescents with more depressive symptoms more often used a TO. The sadness of girls using TOs requires attention. abstract_id: PUBMED:28815713 Clinical Winnicott: Traveling a Revolutionary Road. The author contends that, contrary to the usual perception that Winnicott followed a linear progression "through pediatrics to psychoanalysis," Winnicott's vision was always a psychoanalytic one, even during his early pediatric work. His place in the development of psychoanalytic theory is highlighted, and the author discusses such key Winnicottian concepts as transitional space, the false self, and the use of the object. Winnicott's unique approach to the form and value of analytic interpretation is particularly emphasized, and his thoughts on the treatment of depression are also addressed, as well as his distinction between regression and withdrawal. Included is a summary of convergences and divergences between Winnicott's thinking and that of Bion. abstract_id: PUBMED:36480216 Prevalence and correlates of methamphetamine use in transitional age youth experiencing homelessness or housing instability in San Francisco, CA. Introduction: Substance use, including methamphetamine use, is a contributing factor in HIV acquisition and treatment. Stimulant use is linked to mental health, yet there are limited data from youth in community-based settings. Design: One hundred marginally housed or homeless transitional age youth (TAY) were recruited at Larkin Street Youth Services and completed a survey on mental health and substance use.
Methods: We conducted secondary data analysis using multivariable logistic regression models to identify the correlates of methamphetamine use among TAY. Results: The participants' mean age was 22. Of those who reported methamphetamine use in the past 3 months, 64% were gay, bisexual, or pansexual. Factors independently associated with methamphetamine use were: living with HIV (adjusted odds ratio [aOR] = 3.18, 95% CI = 1.11-9.15), depressive symptoms (aOR = 6.02, 95% CI = 1.46-24.78), symptoms of PTSD (aOR = 13.38, 95% CI = 1.59-112.73), polysubstance use in the past 3 months (aOR = 50.02, 95% CI = 9.72-257.46) and a history of injection drug use (aOR = 8.38, 95% CI = 1.87-37.53). Conclusions: Results from this study suggest a need to develop, adapt, and rapidly implement comprehensive interventions that address the combined epidemics of substance use, HIV, and mental health among TAY. Clinical Relevance: This article examines factors associated with methamphetamine use among transitional age youth (TAY) experiencing homelessness or housing instability. Several factors were associated with use, including depression, PTSD, HIV status, polysubstance use, and injection drug use. These findings highlight the need for nurses to assess for methamphetamine use among youth as well as associated mental health and physical health problems. Nurses should link TAY who are using methamphetamine to evidence-based treatment programs to address substance use and comorbid conditions. abstract_id: PUBMED:8477617 Transitional objects, pre-sleep rituals, and psychopathology. The prevalence among college-age adults (N = 184) of transitional objects and pre-sleep preparation rituals and their relationship to mental health was investigated. Two scales concerning use of transitional objects and fear of going to sleep showed statistically significant correlations with self-reports of nightmares, loneliness, depression, and previous physical and sexual abuse. Forty-six students scoring in the highest 15% of the distribution of scores for each scale were compared with students in the lowest 15% on the MAACL and SCL-90-R. High-scoring students showed significantly higher levels of depression, anxiety, hostility, and interpersonal sensitivity. abstract_id: PUBMED:3843602 Determinants of object choice in adulthood: a test case of object-relations theory. A clinical example of the effects of the absence of the father during the early childhood of a male analysand and its consequences for his later object choice is presented. This patient's biological father left the household when the patient was only a few months old, and his mother did not remarry until he was five years of age. Unconscious fantasies of the lost father, with whom the patient had no contact in reality, and longing for him organized the patient's drives and determined his self-identity as well as his later object choice. The patient developed both a positive and a negative identification with the lost father. The identification was with a fantasy object, not a real one. The case highlights the need to use a precise term in talking of the object, namely the mental representation of the object, which may or may not be built up out of experiences with the real object. abstract_id: PUBMED:34468226 The Somatic Symptom as One's Object: Applying Fairbairn's Theory of Internal Object Relations and Winnicott's Conceptualization of the Psyche-and-Soma.
The author discusses Winnicott's theory (1949/1975) of the psyche-soma and Fairbairn's (1944) theory of internal object relations, bringing them together to enrich our perspective on somatization. By focusing on how the patient cares for, attends to, experiences, and feels toward the symptom, the analyst can better understand the patient's early object-relations. This allows analyst and patient to rethink the symptom in terms of the patient's early traumas and capacity to mourn the loss of the love-object. Fairbairn's conceptualizations of the "rejecting," "alluring," and "addictive" object-relations are combined with Winnicott's understanding of the split between psyche and soma, following the ill-adaptation of the mother to the baby's earliest emotional needs. abstract_id: PUBMED:12204163 A new reading of the origins of object-relations theory. The author presents a reading of Freud's "Mourning and melancholia" in which he examines not only the ideas Freud was introducing, but, as important, the way he was thinking/writing in this watershed paper. The author demonstrates how Freud made use of his exploration of the unconscious work of mourning and of melancholia to propose and explore some of the major tenets of a revised model of the mind (which later would be termed "object-relations theory"). The principal tenets of the revised model presented in this 1917 paper include: (1) the idea that the unconscious is organised to a significant degree around stable internal object relations between paired split-off parts of the ego; (2) the notion that psychic pain may be defended against by means of the replacement of an external object relationship by an unconscious, fantasied internal object relationship; (3) the idea that pathological bonds of love mixed with hate are among the strongest ties that bind internal objects to one another in a state of mutual captivity; (4) the notion that the psychopathology of internal object relations often involves the use of omnipotent thinking to a degree that cuts off the dialogue between the unconscious internal object world and the world of actual experience with real external objects; and (5) the idea that ambivalence in relations between unconscious internal objects involves not only the conflict of love and hate, but also the conflict between the wish to continue to be alive in one's object relationships and the wish to be at one with one's dead internal objects.
This gives rise to considerable difficulties within the transference-countertransference relationship. The authors argue that to establish a feeling of autonomy in depressive patients, a secure framework within therapy has to be offered and at the same time the negative transference must be thoroughly worked through. abstract_id: PUBMED:6834235 The psychological assessment of object representation. In 1976, Blatt, Brenneis, Schimek, and Glick developed a comprehensive scale for assessing the organization and content of the "concept of the object" in Rorschach responses. Utilizing developmental theory, they developed procedures for assessing the representation of human figures on the Rorschach in terms of the degree of differentiation, articulation, and integration. This analysis of Rorschach responses has proven to be of considerable value in clinical research and has provided empirical data for the study of severe psychopathology. The present paper considers the clinical utility of this conceptual scheme. Five prototypic patients, each representing a particular diagnostic category, were selected for study on the basis of clinical case records. The object representations of the five cases are presented and analyzed in detail, and conclusions are drawn about possible configurations of object representations in different forms of psychopathology. The value of a systematic assessment of object representations for differential diagnosis as well as for the study of change in the psychotherapeutic process is considered. This approach to Rorschach interpretation represents an integration of object relations theory into Rorschach methodology. These new concepts of test assessment and interpretation offer the promise of providing a methodological framework and theoretical foundation for further innovative use of the Rorschach and other projective procedures. abstract_id: PUBMED:7323775 Total fusion with an object and its destruction. Drawing on four patients, partly narcissistic-neurotic and partly narcissistically disturbed borderline patients with severe disturbances of self- and object-representation, the author explains how total fusion tendencies may become fatal for the object if it attempts to separate even slightly. The object then turns abruptly from a "good" into a "bad" one. A fifth example describes how, in deep and especially in endogenous depression, the fusion tendency with a close object may lead to an extended suicide. The considerable danger posed to persons close to these severely narcissistically disturbed or, in depression, emptied individuals is underlined. Forensic considerations arising from the destruction of the object as a result of a total fusion tendency are discussed.
Specifically, girls using a TO showed more sadness than non-using girls, while TO-using boys did not differ significantly from non-using boys in terms of sadness or other CDI subscales. These findings suggest that there may be a connection between the existence of a TO and depressive symptoms, particularly in terms of sadness, and that this connection may be more pronounced in girls (PUBMED:19221854).
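Editorial illustration: the sex difference in transitional object use reported in this answer (37% of girls versus 18% of boys) is the kind of result normally checked with a two-proportion z-test. The Python sketch below is a minimal example of that test; the group sizes of 200 per sex are hypothetical, since the answer does not report the denominators, and only the proportions are taken from the study.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: p1 == p2
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return z, p_value

# 37% of girls vs 18% of boys (from the answer); n = 200 per sex is invented.
z, p = two_proportion_z_test(x1=74, n1=200, x2=36, n2=200)
print(f"z = {z:.2f}, two-sided p = {p:.2g}")
```

With samples of this size the difference is highly significant, which is consistent with the study's report that girls were more likely to have a TO than boys.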
Instruction: Is a High Body Mass Index Still a Risk Factor for Complications of Donor Nephrectomy? Abstracts: abstract_id: PUBMED:26093700 Is a High Body Mass Index Still a Risk Factor for Complications of Donor Nephrectomy? Background And Aim: The incidence of obesity is increasing all around the world, and Turkey is no exception. In Turkey, 80.1% of all kidney transplants performed in 2013 were living donor kidney transplants. In this study, we compare the early postoperative complications of living kidney donors with a body mass index (BMI) over 30 to those with BMIs under 30. Patients And Method: All donor nephrectomies performed at the Ege University School of Medicine Hospital between May 2013 and May 2014 were included in the study. Donors' demographics, preoperative BMI, operation time, length of hospital stay, postoperative complications, and perioperative blood creatinine levels were analyzed. Results: There were a total of 72 donors, 50 of whom had a BMI below 30 (Group 1), whereas 22 had a BMI of 30 or higher (Group 2). The median age was 47 (±12.6) and 52.2 (±8.4) for Groups 1 and 2, respectively. The median BMI was 26.1 (±2.3) for Group 1 and 31.8 (±1.5) for Group 2. There was no significant difference in operation time (P = .980) between the 2 groups. There was no difference in the length of hospitalization, with an average hospital stay of 3 days for both groups. No major complications were observed in either group. There was no difference in minor complication rates between the groups. Conclusion: High BMI donors can safely donate their kidney with no significant increase in complication rates at high-volume transplantation centers. abstract_id: PUBMED:23344469 Systematic review and meta-analysis of the relation between body mass index and short-term donor outcome of laparoscopic donor nephrectomy. In this era of organ donor shortage, live kidney donation has been proven to increase the donor pool; however, it is extremely important to make careful decisions in the selection of possible live donors. A body mass index (BMI) above 35 is generally considered a relative contraindication for donation. To determine whether this is justified, a systematic review and meta-analysis were carried out to compare the perioperative outcome of live donor nephrectomy between donors with high and low BMI. A comprehensive literature search was performed in MEDLINE, Embase, and CENTRAL (the Cochrane Library). All aspects of the Preferred Reporting Items for Systematic Reviews and Meta-analyses statement were followed. Of 14 studies reviewed, eight perioperative donor outcome measures were meta-analyzed, and, of these, five were not different between BMI categories. Three found significant differences in favor of low BMI (29.9 and less) donors, with significant mean differences in operation duration (16.9 min (confidence interval (CI) 9.1-24.8)), mean difference in rise in serum creatinine (0.05 mg/dl (CI 0.01-0.09)), and risk ratio for conversion (1.69 (CI 1.12-2.56)). Thus, a high body mass index (BMI) alone is no contraindication for live kidney donation regarding short-term outcome. abstract_id: PUBMED:28187700 Long-Term Body Mass Index Trends After Living-Donor Nephrectomy. Objectives: As the demand for kidney transplant allografts has increased, many centers are expanding the upper limit of acceptable body mass index for kidney donors. However, obesity is a risk factor for developing renal disease.
Our goal was to quantify body mass index trends in donor nephrectomy patients and to institute nutrition counseling to promote sustainable weight loss to reduce the risk of metabolic syndrome-derived renal dysfunction. Materials And Methods: Ninety patients who underwent donor nephrectomy between 2007 and 2012 consented to having height and weight data collected at multiple time points. After data collection, each patient underwent a standardized nutrition counseling session. One year later, body mass index was reassessed. Results: Preoperatively, 52% of the patients were overweight or obese. The percentage of overweight and obese patients remained stable for 2 years after surgery. However, at 3, 4, and 5 years after surgery, these rates increased to 59%, 69%, and 91%. Each patient was counseled about obesity-related comorbidities and provided information about lifestyle modification. One year later, 94% of previously overweight patients and 82% of previously obese patients had a decrease in mean body mass index from 27.2 ± 4.0 kg/m2 to 25.1 ± 3.6 kg/m2. Conclusions: Living-donor nephrectomy patients are at risk of developing obesity, similar to the adult population. Nutrition counseling may be beneficial to help normalize body mass index in patients who have become overweight or obese to potentially prevent obesity-related comorbidities. All patients were evaluated by a nutrition specialist after surgery to review our donor nephrectomy nutrition brochure. Body mass index monitoring and primary care follow-up appear to be appropriate surveillance methods. abstract_id: PUBMED:31474288 Incisional Complications and Cosmetic Evaluation After Hand-assisted Retroperitoneoscopic Donor Nephrectomy. Purpose: Hand-assisted retroperitoneoscopic (HARP) donor nephrectomy prevents major complications, but incision site complications may be more frequent with the hand-assisted approach. We evaluated long-term incisional complication rates and cosmetic outcomes after HARP donor nephrectomy in our series. Materials And Methods: A total of 609 donors who underwent nephrectomy between February 2009 and June 2016 were invited for physical examination and face-to-face interview. A total of 209 donors (35.3%) participated in the study. Sex, age, body mass index (BMI), mean follow-up period, incision-related outcomes of cosmesis, and postoperative complications were evaluated. Body image scale (BIS) and cosmetic scale (CS) (scar test) questionnaires were applied. Higher cosmesis and body image scores indicated greater satisfaction. Results: A total of 191 donors had paramedian (91.4%), and 18 donors had Pfannenstiel incision (8.6%); 121 donors were female (57.9%). The donors' mean age and BMI were 49.1 (SD, 1.8) years and 29.7 (SD, 5.1), respectively. Body mass index was significantly lower in the Pfannenstiel group (P < .001). The mean BIS score was 18.9 (SD, 1.8), and the CS questionnaire score was 19.3 (SD, 4.7). The BIS score was significantly better in donors with Pfannenstiel incisions (P < .001), but there was no significant difference in the CS score. The total rate of wound infection was 4.8%, and the rate of incisional hernia was 4.8%. Incisional hernia was more frequent in donors with a paramedian incision (5.2%), but the difference was not statistically significant. Six donors (2.9%) required rehospitalization because of incision site complications.
Conclusion: Hand-assisted retroperitoneoscopic donor nephrectomy avoids intra-abdominal complications, but the rate of incision site complications can be higher with the hand-assisted procedure. The donors were satisfied with the cosmetic outcome after HARP donor nephrectomy. Those who had a Pfannenstiel incision reported greater satisfaction according to the BIS score. abstract_id: PUBMED:22561514 The benefit of laparoscopic partial nephrectomy in high body mass index patients. Objective: The aims of the present study were to evaluate the effect of body mass index on the surgical outcomes of open partial nephrectomy and laparoscopic partial nephrectomy, and to analyze whether higher body mass index patients may derive greater benefit from laparoscopic partial nephrectomy. Methods: We reviewed 110 patients who underwent open partial nephrectomy and 47 patients who underwent laparoscopic partial nephrectomy at our institution. We analyzed the data to determine which factors were associated with prolonged operative time, increased estimated blood loss and prolonged ischemic time, and compared the result of open partial nephrectomy with that of laparoscopic partial nephrectomy. Results: A statistically significant correlation was observed between body mass index and operative time or estimated blood loss in open partial nephrectomy. Multivariate analysis also demonstrated that body mass index was an independent predictor for prolonged operative time and higher estimated blood loss in open partial nephrectomy, but not in laparoscopic partial nephrectomy. In the normal body mass index group (body mass index<25.0 kg/m2), although mean operative time in the laparoscopic partial nephrectomy group was significantly longer than that in the open partial nephrectomy group, the difference was relatively small. In the high body mass index group (body mass index≥25.0 kg/m2), the mean operative time of the two groups was not statistically different. The estimated blood loss of open partial nephrectomy was significantly higher than that of laparoscopic partial nephrectomy in both groups. In both operative procedures, tumor size was an independent predictor for prolonged ischemic time in multivariate analysis. Conclusions: Body mass index was an independent predictor for prolonged operative time and higher estimated blood loss in open partial nephrectomy but not in laparoscopic partial nephrectomy. Laparoscopic partial nephrectomy was less influenced by body mass index and had a greater benefit, especially in high body mass index patients. abstract_id: PUBMED:22495495 Expanding the donor pool: living donor nephrectomy in the elderly and the overweight. Background: Increasing demand for donor kidneys, in parallel with trends toward more elderly and obese populations, makes it important to continuously review donor pool inclusion criteria. Acceptance of elderly and obese living donors remains controversial, with a higher incidence of comorbidity and a greater risk of postoperative complications cited as reasons for caution. Drawing on our center's experience, we aim to determine whether older age and obesity are in fact associated with greater perioperative risk and longer-term complications in donors undergoing nephrectomy. Methods: Three hundred eighty-three living donor nephrectomies conducted at one of the United Kingdom's largest transplant units over the last 5 years were stratified into groups according to age and body mass index.
Perioperative endpoints and postdonation follow-up data collected at 6-to-12-monthly intervals were analyzed and compared. Results: No significant differences in operative parameters, including operative time and estimated blood loss, were reported between groups. Rates of early postoperative complications were not significantly different, although subgroup analysis showed a higher incidence of respiratory complications at the extremes of obesity (body mass index ≥ 40 kg/m²). On follow-up, renal function parameters showed significant change postnephrectomy, but between-group variation was not significant. Mortality and major complication rates were comparably low in all study groups. Conclusions: In our unit's experience, nephrectomy in selected donors who may otherwise have been precluded from participation on account of their age or weight is feasible and associated with perioperative and longer-term outcomes comparable with those of their younger, nonobese counterparts. It provides a basis for informed consent of "extended criteria" donors. abstract_id: PUBMED:25192366 Trends in Body-Mass Index After Donor Nephrectomy. The link between obesity and diabetes, hypertension, and eventual kidney dysfunction is well recognized. To evaluate trends in the body-mass index (BMI) of donor nephrectomy patients, the BMI was evaluated in 532 donors over 11 years from 2002 to 2012. Measurements were made preoperatively, at 1 year, and at the present time. The follow-up range varied from 12 months to 11 years depending on the date of donation. Of the 532 patients queried, 100 patients had long-term results. At 1 year, there was an overall decrease in BMI; however, at any time after the first year, postoperative BMI trended upward, with more than half of the patients who were overweight (BMI 25-30) at the time of donation dipping into the obese range (BMI >30). Given these results, and recognizing the attendant risks, especially with the background of a single kidney, we have initiated further lifestyle and nutrition counseling at our center and are considering a more stringent preoperative weight control policy. abstract_id: PUBMED:29402156 Donor Body Mass Index as a Risk Factor for Delayed Onset of Graft Function. Delayed graft function continues to pose a significant challenge to clinicians in the context of kidney transplantation. The objective of this retrospective, 5-year analysis is to identify the parameters of beating heart donors and those of recipients that affect the development of delayed graft function. The monitored group was composed of 152 beating heart donors and 179 recipients. Delayed graft function was identified in 32 (17%) patients. The predictor for development of delayed graft function was the body mass index of the donor (odds ratio: 1.1473; 95% confidence interval [CI]: 1.0017-1.3140; P = .0472), and the independent risk factors were donor body mass index 30 to 34.9 kg/m2 (hazard ratio [HR]: 6.0215; 95% CI: 1.4188-25.556; P = .0149), donor body mass index ≥35 kg/m2 (HR: 13.5484; 95% CI: 1.4575-125.938; P = .0220), and abuse of alcohol in the donor's history (HR: 1.779; 95% CI: 1.0679-2.964; P = .0270). abstract_id: PUBMED:18485408 Comparison of open and laparoscopic nephrectomy in obese and nonobese patients: outcomes stratified by body mass index. Purpose: Laparoscopic radical nephrectomy has been accepted as the preferred management for low stage renal masses not amenable to partial nephrectomy.
In the mid-1990s, several studies suggested that obesity should be a relative contraindication to laparoscopy. We present our surgical outcomes and complications in patients undergoing open and laparoscopic nephrectomy, stratified by body mass index. Materials And Methods: We retrospectively identified 88 patients, of whom 43 underwent open nephrectomy and 45 were treated laparoscopically. All patients were stratified by body mass index to compare multiple perioperative end points and pathological outcomes of laparoscopy. Results: Overall, our data showed that, compared with open nephrectomy, laparoscopic nephrectomy resulted in significantly lower estimated blood loss (147.95 vs 640.48 cc, p <0.0002), operative time (156.11 vs 198.95 minutes, p <0.003), and hospital stay (3.7 vs 5.9 days, p <0.004). When stratified by body mass index less than 25, 25 to 29.9 and 30 kg/m(2) or greater, there was a statistically significant difference in estimated blood loss and hospital stay that was in favor of the laparoscopic approach in each body mass index category. Operative time did not show a statistical difference in the subgroups, but all laparoscopic procedure times were shorter than open procedure times in each body mass index category. When patients with a body mass index of greater than 30 kg/m(2) were further subgrouped into 35 kg/m(2) or greater and 40 kg/m(2) or greater, there was a statistically significant difference in estimated blood loss and hospital stay that was again in favor of the laparoscopic method. Conclusions: Laparoscopic radical nephrectomy is technically more challenging as body mass index increases due to many factors, but our data show that it is feasible and safe in experienced hands. Laparoscopy appears to result in perioperative outcomes that are superior to those of open nephrectomy in this high-risk population, with a complication profile that is equivalent to that of the open method for each stratified body mass index category. abstract_id: PUBMED:28591959 Learning Curve of Retroperitoneal Laparoscopic Donor Nephrectomy and Risk Analysis of Intraoperative Complications. Objectives: To investigate the learning curve of retroperitoneal laparoscopic donor nephrectomy (LDN) and evaluate the risk factors of intraoperative complications with data from a single center. Methods: We evaluated perioperative data of 527 consecutive kidney donors who received retroperitoneal LDN between April 2009 and April 2014. The patients were divided into two groups according to the learning curve, which was determined by the operation time: group 1 (on the learning curve) and group 2 (learning curve completed). Results: The mean operation time was 88.4 ± 38.07 min. The asymptote of the surgeon's learning curve for retroperitoneal LDN was achieved at the 100th case. The operation time and the incidence of intraoperative complications in group 1 were significantly higher than those of group 2. Considering all cases, body mass index (BMI) and intraoperative complications were correlated with operative time. The incidence of intraoperative complications was 1.90%, and BMI was correlated with the incidence of intraoperative complications. When the learning curve was completed, the number of renal arteries and right-sided kidney were found to be correlated with operative time. Conclusions: Retroperitoneal LDN is a safe and effective operation method with a low incidence of complications. Technical proficiency in retroperitoneal LDN could be achieved after 100 surgeries.
Answer: The relationship between high body mass index (BMI) and the risk of complications following donor nephrectomy is nuanced. Some studies suggest that a high BMI is not an absolute contraindication for living kidney donation and does not necessarily lead to an increase in short-term complications. For instance, a study conducted at Ege University School of Medicine Hospital found no significant increase in complication rates among high BMI donors (BMI over 30) compared to those with BMIs under 30, suggesting that high BMI donors can safely donate their kidneys at high-volume transplantation centers (PUBMED:26093700). Similarly, a systematic review and meta-analysis indicated that a high BMI alone is not a contraindication for live kidney donation regarding short-term outcomes, although some differences in operation duration and risk of conversion were noted (PUBMED:23344469). However, other studies have highlighted potential concerns with high BMI donors. For example, a retrospective analysis identified donor BMI as a predictor for the development of delayed graft function, with higher BMI donors (30 to 34.9 kg/m^2 and ≥35 kg/m^2) having increased risk (PUBMED:29402156). Additionally, a study on long-term BMI trends after living-donor nephrectomy found that donors are at risk of developing obesity post-surgery, which could lead to obesity-related comorbidities (PUBMED:28187700). This suggests that while high BMI may not significantly impact immediate postoperative complications, it could have implications for long-term health outcomes. Moreover, laparoscopic partial nephrectomy in high BMI patients has been shown to have benefits, such as less influence on operative time and estimated blood loss compared to open partial nephrectomy, suggesting that laparoscopic approaches may be advantageous for high BMI patients (PUBMED:22561514). Another study comparing open and laparoscopic nephrectomy in obese and nonobese patients found that laparoscopic nephrectomy resulted in better perioperative outcomes across all BMI categories (PUBMED:18485408). In conclusion, while high BMI is not an absolute contraindication for donor nephrectomy and may not significantly increase short-term complication rates, it is associated with certain risks, such as delayed graft function and potential long-term health issues. Careful consideration and counseling, as well as the choice of surgical technique, may help mitigate these risks.
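As a hedged illustration of the delayed-graft-function finding cited above: a per-unit odds ratio compounds multiplicatively across BMI units, so the odds multiplier for a donor who is Δ BMI units heavier is OR^Δ. The Python sketch below applies the point estimate from PUBMED:29402156 (OR 1.1473 per BMI unit) to the 17% overall incidence reported in the same cohort; treating that incidence as a baseline risk is a simplification for illustration, not a reanalysis of the study data.

```python
OR_PER_BMI_UNIT = 1.1473  # odds ratio per BMI unit for delayed graft function (PUBMED:29402156)
BASELINE_RISK = 0.17      # overall incidence of delayed graft function in that cohort

def approx_risk(delta_bmi, baseline_risk=BASELINE_RISK, or_unit=OR_PER_BMI_UNIT):
    """Translate a per-unit odds ratio into an approximate risk after a BMI shift."""
    baseline_odds = baseline_risk / (1 - baseline_risk)
    shifted_odds = baseline_odds * or_unit ** delta_bmi  # the OR compounds per unit
    return shifted_odds / (1 + shifted_odds)

for delta in (0, 5, 10):
    print(f"donor BMI +{delta:>2} units: approx. risk = {approx_risk(delta):.1%}")
```

The arithmetic makes the Answer's point concrete: even a modest per-unit odds ratio implies a substantially higher predicted risk for donors at the upper end of the BMI range.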
Instruction: Changes in clinical indications for community antibiotic prescribing for children in the UK from 1996 to 2006: will the new NICE prescribing guidance on upper respiratory tract infections just be ignored? Abstracts: abstract_id: PUBMED:19066174 Changes in clinical indications for community antibiotic prescribing for children in the UK from 1996 to 2006: will the new NICE prescribing guidance on upper respiratory tract infections just be ignored? Objective: To analyse changes in clinical indications for community antibiotic prescribing for children in the UK between 1996 and 2006 and relate these findings to the new NICE guidelines for the treatment of upper respiratory tract infections in children. Study Design: Retrospective cohort study. Method: The IMS Health Mediplus database was used to obtain annual antibiotic prescribing rates and associated clinical indications in 0-18-year-old patients between 1 January 1996 and 31 December 2006 in the UK. Results: Antibiotic prescribing declined by 24% between 1996 and 2000 but increased again by 10% during 2003-2006. Respiratory tract infection was the most common indication for which an antibiotic was prescribed, followed by "abnormal signs and symptoms", ear and skin infections. Antibiotic prescriptions for respiratory tract infections have decreased by 31% (p<0.01) mainly because of reduced prescribing for lower respiratory tract infections (56% decline, p<0.001) and specific upper respiratory tract infections including tonsillitis/pharyngitis (48% decline, p<0.001) and otitis (46% decline, p<0.001). Prescribing for non-specific upper respiratory tract infection increased fourfold (p<0.001). Prescribing for "abnormal signs and symptoms" increased significantly since 2001 (40% increase, p<0.001). Conclusion: There has been a marked decrease in community antibiotic prescribing linked to lower respiratory tract infection, tonsillitis, pharyngitis and otitis. Overall prescribing is now increasing again but is associated with non-specific upper respiratory tract infection diagnoses. General practitioners may be avoiding using diagnoses where formal guidance suggests antibiotic prescribing is not indicated. The new NICE guidance on upper respiratory tract infections is at risk of being ignored. abstract_id: PUBMED:29335323 Antibiotic prescribing quality for children in primary care: an observational study. Background: Overuse and inappropriate prescribing of antibiotics is driving antibiotic resistance. GPs often prescribe antibiotics for upper respiratory tract infections (URTIs) in young children despite their marginal beneficial effects. Aim: To assess the quality of antibiotic prescribing for common infections in young children attending primary care and to investigate influencing factors. Design And Setting: An observational, descriptive analysis, including children attending primary care sites in England and Wales. Method: The Diagnosis of Urinary Tract infection in Young children study collected data on 7163 children aged <5 years, presenting to UK primary care with an acute illness (<28 days). Data were compared with the European Surveillance of Antimicrobial Consumption Network (ESAC-Net) disease-specific quality indicators to assess prescribing for URTIs, tonsillitis, and otitis media, against ESAC-Net proposed standards. Non-parametric trend tests and χ2 tests assessed trends and differences in prescribing by level of deprivation, site type, and demographics. 
Results: Prescribing rates fell within the recommendations for URTIs but exceeded the recommended limits for tonsillitis and otitis media. The proportion of children receiving the recommended antibiotic was below standards for URTIs and tonsillitis, but within the recommended limits for otitis media. Prescribing rates increased as the level of deprivation decreased for all infections (P<0.05), and increased as the age of the child increased for URTIs and tonsillitis (P<0.05). There were no other significant trends or differences. Conclusion: The quality of antibiotic prescribing in this study was mixed and highlights the scope for future improvements. There is a need to assess further the quality of disease-specific antibiotic prescribing in UK primary care settings using data representative of routine clinical practice. abstract_id: PUBMED:34348560 Encouraging rational antibiotic prescribing behaviour in primary care - prescribing practice among children aged 0-4 years 2016-2018: an observational study. Objective: To study antibiotic prescriptions among 0- to 4-year-old children before and after implementing a quality project on prudent prescribing of antibiotics in primary healthcare in the capital region of Iceland. Design: An observational, descriptive, retrospective study using quantitative methodology. Setting: Primary healthcare in the Reykjavik area with a total population of approximately 220,000. Subjects: A total of 6420 children 0-4 years of age presenting at the primary healthcare centres in the metropolitan area over three years from 2016 to 2018. Main Outcome Measures: Reduction of antibiotic prescriptions and change in antibiotic profile. Data on antibiotic prescriptions for children 0-4 years of age were obtained from the medical records. Out-of-hours prescriptions were not included in the database. Results: The number of prescriptions during the study period ranged from 263.6 to 289.6 prescriptions/1000 inhabitants/year. A reduction of 9% in the total number of prescriptions between 2017 and 2018 was observed. More than half of all prescriptions were for otitis media, followed by pneumonia and skin infections. Amoxicillin accounted for over half of all prescriptions, increasing between 2016 and 2018 by 51.3%. During this period, the prescribing of co-amoxiclav and macrolides decreased by 52.3% and 40.7%, respectively. These changes were significant in all cases, p < 0.0001. Conclusion: The results show an overall decrease in antibiotic prescribing concurrent with a change in the choice of antibiotics prescribed and in line with the recommendations presented in the prescribing guidelines implemented by the Primary Healthcare of the Capital Area, and consistent with the project's goals. Key points: A substantial proportion of antibiotic prescribing can be considered inappropriate, and the antibiotic prescription rate in Iceland is the highest among the Nordic countries. After implementing guidance on the treatment of common infections together with feedback on antibiotic prescribing, a decrease in the total number of prescriptions accompanied by a shift in the antibiotic profile was observed. abstract_id: PUBMED:29610624 Dyspnea in Children as a Symptom of Acute Respiratory Tract Infections and Antibiotic Prescribing. Background: Dyspnea is an unpleasant feeling of breathing difficulty, shortness of breath and inability to satisfy the hunger for air.
The role of family physicians is to be prepared to recognise dyspnea as a symptom of acute respiratory infections (ARI), to perform triage and management of children with acute dyspnea, and to provide continuous education of parents. In the treatment of acute dyspnea, treating dyspnea as a symptom is more important than prescribing antibiotics (AB). Nowadays, an increasingly large number of children are unnecessarily hospitalised and given antibiotics because of noncompliance with treatment and pressure from parents. According to the guidelines, only a small percentage of children with ARI should be treated with AB; the rate of antibiotic prescription should be around 15-20% or lower. Aim: To assess doctors' AB prescribing for ARI and dyspnea in children in primary care practices in Macedonia, we used data from a national project on antibiotic prescribing for acute respiratory tract infections, which was run in November 2014 as part of the E-quality program. Methods: Eighty-six general practitioners from Macedonia took part in it. A group of 3026 children, 0-5 years of age, with symptoms of dyspnea was analysed. We found an antibiotic prescription rate of 54.6%: of the 3026 children with dyspnea, 1519 received AB, 852 of which were prescribed by a specialist. Results: The children were mostly diagnosed with upper ARI (57.7%). The most frequently used AB was amoxicillin + clavulanic acid. We concluded that there was increased and inappropriate prescribing of antibiotics for children with ARI presenting with dyspnea in Macedonia. Conclusion: Parents' perceptions and attitudes do not correlate with the severity of the clinical picture in children, and lack of use of evidence-based medicine (EBM), insecurity, fear, and fear of losing patients affect doctors' antibiotic prescribing. abstract_id: PUBMED:25949616 Antibiotic prescribing for upper respiratory tract infections in children: how can we improve? Upper respiratory tract infection (URTI) in children is one of the most common problems that general practitioners (GPs) see. Although complications from URTIs are rare, and antibiotics offer little or no benefit in uncomplicated cases, antibiotic prescribing has increased in recent years following a decline in the late 1990s. This article explores possible reasons for the increase, weighs the evidence on withholding antibiotics and asks how GPs will interpret recent National Institute for Health and Clinical Excellence (NICE) guidelines when dealing with URTI in children. We review some of the latest approaches to help implement antibiotic prescribing guidelines and suggest some practical solutions to help busy GPs. abstract_id: PUBMED:26324495 Clinical predictors of antibiotic prescribing for acutely ill children in primary care: an observational study. Background: Antibiotic overuse and inappropriate prescribing drive antibiotic resistance. Children account for a high proportion of antibiotics prescribed in primary care. Aim: To determine the predictors of antibiotic prescription in young children presenting to UK general practices with acute illness. Design And Setting: Prospective observational study in general practices in Wales. Method: A total of 999 children were recruited from 13 practices between March 2008 and July 2010. Multilevel, multivariable logistic regression analysis was performed to determine predictors of antibiotic prescribing. Results: Oral antibiotics were prescribed to 261 children (26.1%).
Respiratory infections were responsible for 77.4% of antibiotic prescriptions. The multivariable model included 719 children. Children were more likely to be prescribed antibiotics if they were older (odds ratio [OR] 1.3; 95% confidence interval [CI] = 1.1 to 1.7); presented with poor sleep (OR 2.7; 95% CI = 1.5 to 5.0); had abnormal ear (OR 6.5; 95% CI = 2.5 to 17.2), throat (OR 2.2; 95% CI = 1.1 to 4.5) or chest examination (OR 13.6; 95% CI = 5.8 to 32.2); were diagnosed with lower respiratory tract infection (OR 9.5; 95% CI = 3.7 to 25.5), tonsillitis/sore throat (OR 119.3; 95% CI = 28.2 to 504.6), ear infection (OR 26.5; 95% CI = 7.4 to 95.7) or urinary tract infection (OR 12.7; 95% CI = 4.4 to 36.5); or if the responsible clinician perceived the child to be moderately to severely unwell (OR 4.0; 95% CI = 1.4 to 11.4). The area under the receiver operating characteristic curve was 0.9371. Conclusion: Respiratory infections were responsible for over three-quarters of antibiotic prescriptions. Diagnoses of tonsillitis, sore throat, or ear infection were associated most with antibiotic prescribing. Diagnosis seemed to be more important than abnormal examination findings in predicting antibiotic prescribing, although these were correlated. abstract_id: PUBMED:34627625 Impact of the childhood influenza vaccine programme on antibiotic prescribing rates in primary care in England. Vaccines are a key part of the global strategy to tackle antimicrobial resistance (AMR) since prevention of infection should reduce antibiotic use. England commenced national rollout of a live attenuated influenza vaccine (LAIV) programme for children aged 2-3 years, together with a series of geographically discrete pilot areas for primary school age children, in 2013, extending to older children in subsequent seasons. We investigated vaccine programme impact on community antibiotic prescribing rates. Antibiotic prescribing incidence rates for respiratory (RTI) and urinary tract infections (UTI; controls) were calculated at general practice (GP) level by age category (children<=10 years/adults) and season for LAIV pilot and non-pilot areas between 2013/14 and 2015/16. To estimate the LAIV (primary school age children only) intervention effect, a random effects model was fitted. A multivariable random-effects Poisson regression investigated the association of antibiotic prescribing rates in children with LAIV uptake (2-3-year-olds only) at GP practice level. RTI antibiotic prescribing rates for children <=10 years and adults showed clear seasonal trends and were lower in LAIV-pilot and non-pilot areas after the introduction of the LAIV programme in 2013. The reductions for RTI prescriptions (children) were similar (within 3%) in all areas, which coincided with the start of the UK AMR strategy. Antibiotic prescribing was significantly (p < 0.0001) related to LAIV uptake in 2-3-year-olds, with antibiotic prescribing reduced by 2.7% (95% CI: 2.1% to 3.4%) for every 10% increase in uptake. We found no evidence that the LAIV programme for primary school age children resulted in reductions in RTI antibiotic prescribing; however, we detected a significant inverse association between increased vaccine uptake in pre-school age children and antibiotic prescribing at GP level. The temporal association of reduced RTI and UTI antibiotic prescribing with the launch of the UK's AMR Strategy in 2013 highlights the importance of a multifaceted approach to tackle AMR.
abstract_id: PUBMED:36065724 Relevance of the Consensus Principles for Appropriate Antibiotic Prescribing in 2022. Background: In the late 1990s, as a response to rising antimicrobial resistance (AMR), an independent multinational, interdisciplinary group was formed specifically targeting primary care antibiotic prescribing for community-acquired respiratory tract infections (CA-RTIs). The group comprised senior clinicians from Canada, Israel, Spain, Sweden, UK and USA. The group's objectives were to provide recommendations for antibiotic stewardship in the community because, whilst it was widely accepted that inappropriate antibiotic use was contributing to AMR, it remained difficult to change prescribing behaviour. The group aimed to identify principles underlying appropriate antibiotic prescribing and guideline formulation to reduce morbidity from CA-RTIs, limit therapeutic failure and, importantly, curb AMR emergence. The group published a report in 2002, which has become known as the Consensus Principles. Objectives: (i) To consider the relevance of the Consensus Principles in 2022 by reviewing current global approaches to rising AMR. A wide range of factors, such as antibiotic overuse, most recently seen in COVID-19 patients, are still driving rising AMR even though there has been a high-level international response to the AMR threat; and (ii) as an introduction to this Supplement, which reports the findings of analyses of how AMR is being addressed in nine disparate countries (Brazil, India, Kuwait, Mexico, Pakistan, Russia, Saudi Arabia, Türkiye and Vietnam). Understanding how these initiatives are being pursued in different countries helps identify areas where more information is needed. Conclusions: Adherence to the Consensus Principles remains as important now as it was in 2002. Achieving appropriate antibiotic prescribing is a vital objective in order that the right patient receives the right antibiotics at the right time to ensure optimal clinical outcomes while at the same time helping to limit further increases in AMR. abstract_id: PUBMED:35870657 Assessment of antibiotic prescribing patterns at dental and primary health care clinics according to WHO Access, Watch, Reserve (AWaRe) classification. Background: The 2019 WHO Access, Watch, Reserve (AWaRe) antibiotic classification framework aims to prevent irrational prescribing of antibiotics used to treat widespread infections. This study explored antibiotic prescribing patterns for appropriate indications by family physicians and general dentists in primary health care practices. Methods: A retrospective review of patients' electronic medical records was conducted over 6 months, from May 1, 2020, to November 30, 2020. The data were collected from 24 general family medicine and dental practices within the North West Armed Forces in Tabuk city. Antibiotic prescribing for systemic use (J01) was assessed by the number of prescriptions and the number of defined daily doses (DDDs) and then analyzed according to the AWaRe classification. The prescribing of antibiotics for appropriate indications was assessed by comparing the prescription pattern with the recently published and relevant clinical guidelines. Multivariate logistic regression analysis was used to predict the association between the prescribing of AWaRe categories and some demographic and disease-related factors. Results: In total, 752 prescriptions of antibiotics were collected.
Watch-group antibiotics, such as second-generation cephalosporins and macrolides, were prescribed more often than Access-group antibiotics, accounting for 51.1% versus 48.9% of prescriptions and 52.2% versus 47.8% of DDDs. The percentages of Watch-group antibiotics for children and adults were 66.7% and 42.9%, respectively. Adherence to prescribing guidelines was poor for children (27.2%) and adults (64%). Being a child (adjusted odds ratio [OR]: 2.89; 95% confidence interval [CI] = 1.46-5.78), diagnosis with acute respiratory tract infection (adjusted OR, 2.62; 95% CI = 1.03-6.69), and urinary tract infection (adjusted OR, 4.69; 95% CI = 2.09-10.56) were associated with higher odds of prescription of Watch-group antibiotics. Conclusions: Higher prescribing of Watch-group antibiotics and poor adherence to antibiotic guidelines were observed, especially for children. The findings of this study identified targets for further improvement and interventions needed to develop better antibiotic-prescribing practices. abstract_id: PUBMED:38334431 An assessment of the index of rational drug prescribing for severe acute respiratory infections among hospitalised children in Northern Nigeria: a retrospective study. Background: This study evaluated drug use patterns among hospitalized children with severe acute respiratory infection (SARI) in Nigeria. Research Design And Methods: A retrospective assessment of prescribed medicines for children aged 13 years and below who were admitted and treated for SARI from 1 January 2016 to 31 December 2018 was conducted. The WHO prescribing indicators and the Index of Rational Drug Prescribing were used to evaluate prescriptions. Results: A total of 259 patients were included, mostly diagnosed with bronchopneumonia (56%). A summary of WHO core prescribing indicators showed that the average number of drugs per encounter was 3.9, that 82.1% of medicines were prescribed by generic name, and that 99.7% of encounters included at least one antibiotic. The percentage of drugs prescribed from the Essential Medicine List for children was 79%. The most frequently prescribed pharmacological class of medicines was antibiotics (41.4%). Cephalosporins (40.0%), aminoglycosides (34.1%), and penicillins (21.5%) were the most commonly prescribed antibiotic classes. Gentamicin (34.1%) and cefuroxime (21.5%) were the most commonly prescribed antibiotics. Conclusions: Drug prescribing for hospitalized children with SARI was suboptimal, especially with regard to polypharmacy, antibiotics, and injection use. Interventions to promote rational use of medicines, including antimicrobial stewardship interventions, are recommended. Answer: The study examining changes in clinical indications for community antibiotic prescribing for children in the UK between 1996 and 2006 raised concerns that the new NICE prescribing guidance on upper respiratory tract infections (URTIs) might be ignored. The study found that while antibiotic prescribing declined by 24% between 1996 and 2000, it increased again by 10% during 2003-2006. Respiratory tract infections were the most common indication for antibiotic prescriptions, followed by "abnormal signs and symptoms," ear, and skin infections. Notably, antibiotic prescriptions for respiratory tract infections decreased by 31%, mainly due to reduced prescribing for lower respiratory tract infections and specific URTIs such as tonsillitis/pharyngitis and otitis. However, prescribing for non-specific URTIs increased fourfold.
The study concluded that general practitioners may be avoiding using diagnoses where formal guidance suggests antibiotic prescribing is not indicated, which could put the new NICE guidance at risk of being ignored (PUBMED:19066174).
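The trend figures in this answer reduce to simple percent-change arithmetic once annual prescribing rates are tabulated. The Python sketch below is illustrative only: the annual rates per 1000 children are invented (the abstract reports only the percentage changes, not the underlying IMS Health Mediplus rates), and they are chosen so that the computed changes match the reported -24% and +10%.

```python
# Hypothetical annual prescribing rates per 1000 children; invented values
# chosen to reproduce the percentage changes reported in PUBMED:19066174.
rates = {1996: 820, 2000: 623, 2003: 600, 2006: 660}

def percent_change(rate_from, rate_to):
    return (rate_to - rate_from) / rate_from * 100

print(f"1996 -> 2000: {percent_change(rates[1996], rates[2000]):+.0f}%")  # about -24%
print(f"2003 -> 2006: {percent_change(rates[2003], rates[2006]):+.0f}%")  # about +10%
```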
Instruction: Does antrochoanal polyp present with epistaxis? Abstracts: abstract_id: PUBMED:36452723 Management of Angiomatous Antrochoanal Polyp: Our Experience. Antrochoanal polyps are the most common type of choanal polyp. Angiomatous polyp is a rare subtype of Antrochoanal polyp that might be misdiagnosed as a malignant lesion due to its clinical features. A retrospective observational study was conducted in a tertiary care hospital in patients who were diagnosed with Angiomatous Antrochoanal polyp and who underwent functional endoscopic sinus surgery from 2017 to 2020. We analyzed the clinical symptoms, radiological findings, and pathological features of all patients diagnosed with Angiomatous Antrochoanal polyp. Unilateral nasal obstruction, rhinorrhea, and epistaxis were the common symptoms. Antrochoanal polyps all originated from the maxillary sinus and extended into the nasal cavity with or without involving the nasopharynx. An expansile mass with surrounding bony destruction is typical on computed tomography imaging but not specific for Angiomatous Antrochoanal polyp. Magnetic resonance imaging revealed high signal intensity on T1-weighted images and a hypo-intense rim on T2-weighted images. Computed tomography and magnetic resonance together might give rise to a more accurate diagnosis of Angiomatous Antrochoanal polyp. Incisional biopsy does help if the clinician suspects a malignant lesion. Complete removal is the treatment of choice for Angiomatous Antrochoanal polyp. abstract_id: PUBMED:19641644 Characteristics of antrochoanal polyps in the pediatric age group. Objectives: To evaluate and compare the clinical and the pathological characteristics of antrochoanal polyps (ACPS) in adults and children. Materials And Methods: Medical records of 35 patients (19 children, 16 adults) operated upon for ACPS between 1995 and 2005 at an academic tertiary center were reviewed retrospectively. Demographic characteristics, clinical presentation, surgical management, histological findings and recurrence rate were compared. Results: Of the 35 patients, 19 (54%) were children (mean age, 12.6 years) and 16 (46%) were adults (mean age, 31.4 years). Nasal obstruction was the most common presenting symptom in both groups. Snoring and/or obstructive sleep apnea was significantly more common among the pediatric age group than the adult group (P = .001). Epistaxis was also found to be more common among the pediatric age group (P = .027), while sinusitis was noted to be significantly more common among the adult group (P = .019). Transnasal endoscopic removal of ACPS was performed in 12 (63.1%) children and 11 (68.7%) adults. A combined open/endoscopic approach was required in 36.9% of children and 31.3% of adults. On histologic examination, allergic ACPS (the mucosal surface is respiratory epithelium, no mucus glands, abundant eosinophils) was more common than inflammatory ACPS (the mucosal surface is respiratory epithelium, no mucus glands, abundant neutrophils) in children (2.8:1) as compared to adults (0.8:1) (P = .045). All of our patients were followed with endoscopic examination for a period ranging from 9 to 42 months (mean, 24 months). Recurrence of ACPS was identified in 2 children and 1 adult. Conclusion: Antrochoanal polyps are a rare clinical entity. Children have unique clinical and pathological features as compared to adults. Endoscopic excision is safe and effective in the pediatric age group and has the capability to ensure complete removal and a lower recurrence rate.
abstract_id: PUBMED:20003601 Does antrochoanal polyp present with epistaxis? Objective: To compare the gross and microscopic appearance of antrochoanal polyps associated with recurrent epistaxis with those with a more typical presentation. Design: Prospective, controlled study. Methods: All patients underwent clinical and endoscopic examination, computed tomography scanning, and examination under anaesthesia, in order to detect the gross diagnostic criteria for antrochoanal polyp. Histological findings on light microscopy were compared for polyps presenting with epistaxis versus those without. The number of predominant inflammatory cells in the corium was determined in both groups and statistically compared using the Student t-test. Results: Recurrent epistaxis was a presenting symptom in 10/84 (11.9 per cent) patients with gross diagnostic criteria for antrochoanal polyp. Grossly, these patients' polyps had a reddish, vascular surface in parts. Histologically, these polyps showed a highly vascular stroma with multiple dilated blood vessels, the typical appearance of an angiomatous antrochoanal polyp. Thrombi at different stages of development were detected, with no infarcts. The remaining cases (88.1 per cent) had no history of epistaxis; histologically, these patients' polyps showed an oedematous connective tissue core with few inflammatory cells. Plasma cells were predominant in the angiomatous polyps, being significantly more prevalent than in the ordinary antrochoanal polyps (p < 0.001). Conclusions: It would appear that only angiomatous antrochoanal polyps present with epistaxis. Detection of the characteristic gross appearance of these polyps may help avoid unwanted surgery. Histopathological analysis confirms the diagnosis. A significantly increased number of plasma cells may be the underlying cause of the histological changes seen in angiomatous antrochoanal polyps. abstract_id: PUBMED:31297484 Origin of polyps and accompanying sinonasal pathologies in patients with antrochoanal polyp: Analysis of 22 patients. Objective: Solitary, benign soft tissue masses originating at the maxillary sinus and extending to the nasopharynx are called antrochoanal polyps (ACP). The aim of the study was to determine the origins of ACP, the accompanying sinonasal pathologies, and the effectiveness of endoscopic sinus surgery in patients with ACP. Methods: Twenty-two patients (13 men, 9 women; age range: 6-50; average age: 28.3±13.3) who were treated for ACP using endoscopic sinus surgery between January 2014 and September 2017 were included in the study. The patients' age, sex, symptoms at presentation to the clinic, sinonasal pathologies accompanying the ACP, and adhesion site of the ACP inside the maxillary sinus were retrospectively examined. Results: The most frequently encountered symptom in patients was nasal congestion (95.4%). This was followed by snoring and sleeping with an open mouth (31.8%), nasal discharge (22.7%), headache (18.1%), feeling of a foreign body in the throat (9.1%), and epistaxis (4.5%). When the adhesion site of the ACP in the maxillary sinus was checked, it was most frequently located on the medial wall (27.2%), followed by the posterior (18.1%) and lateral walls (13.6%). One patient (4.54%) developed recurrence. Conclusion: Although the origin of ACP cannot be detected in the paranasal sinus tomography scan, accompanying sinonasal pathologies should be preoperatively identified.
Development of angled endoscopes and angled instruments enabling easy removal of cystic lesions renders endoscopic sinus surgery sufficient for the treatment of ACP. abstract_id: PUBMED:2230563 Epistaxis as an unusual presentation of an antrochoanal polyp. Antrochoanal polyps rarely present with severe epistaxis. We present the case of a young male patient who presented in such a manner, thus leading to the suspicion of an angiofibroma as the underlying pathology. The radiological investigations, treatment and pathological features are discussed. abstract_id: PUBMED:23311182 Antrochoanal polyps: report of 94 cases and review of the literature. Objective: The aim of this study was to evaluate the management of antrochoanal polyps (ACPs) with functional endoscopic sinus surgery and to review the literature. Methods: We retrospectively studied the clinical presentations and operative records of 94 patients with ACPs treated surgically in the period of September 1999 to March 2008 (about 9 years). Age, gender, side, clinical symptoms, duration of follow up, associated rhinological findings, forms of surgical treatment, postoperative complications and recurrence rate were reviewed. Results: Records of 94 patients [mean (±SD) age of 26.5 ± 12.8 years; 55 males (58.5%) and 39 females (41.5%)] were studied. Postoperative follow-up duration ranged from 8 to 68 months. The most common clinical symptoms were nasal obstruction (100%) and rhinorrhoea (57.4%). The origin and site of the pedicle in the maxillary sinus was most often the medial wall (32 patients). The recurrence rate in our study was 5.3% (5 cases) at 10 months postoperatively. For the revision surgery, endoscopic resection was combined with the Caldwell-Luc procedure. Conclusion: The antrochoanal polyp occurs predominantly in children and young adults of both sexes. Endoscopic sinus surgery is an effective, safe and reliable method with major advantages for the treatment of antrochoanal polyps. abstract_id: PUBMED:23119890 A clinical study of nasopharyngeal masses. Out of 36511 patients attending the ENT OPD of R. G. Kar Medical College & Hospital, Kolkata, in one year, 0.08% of patients were found to have a nasopharyngeal mass. 30 cases of nasopharyngeal mass were studied. The peak incidence of nasopharyngeal mass was in the age group between 11 and 20 years (40%). The incidence in males (73%) was higher than in females (27%). The maximum incidence was found among students (74%). The commonest nasopharyngeal mass was antrochoanal polyp (30%). The next most common masses were adenoids (23%), juvenile angiofibroma (20%), and nasopharyngeal carcinoma (13%). Nasal obstruction was the main presenting symptom (83%), followed by epistaxis (40%). abstract_id: PUBMED:18094812 Endoscopic polypectomy with middle meatal antrostomy for antrochoanal polyp treatment. Antrochoanal polyp (ACP) or Killian polyp is a benign nonatopic lesion of the maxillary sinus. Patients usually present with nasal obstruction. Many surgical options for the treatment of ACPs have been suggested to minimize postoperative recurrence. The endoscopic nasal approach is a surgical option for maxillary intrasinusal resection of the polyp's implantation site through the maxillary ostium or middle meatal antrostomy, with lower morbidity compared with other surgical approaches. Aim: To evaluate the recurrence rate of endoscopic antrochoanal polypectomy with middle meatotomy in the treatment of ACP.
Materials And Methods: Twenty-nine patients, diagnosed on the basis of history, physical examination, computed tomography, and histological findings and treated between 1997 and 2004, were evaluated in a retrospective study. The surgical approach was endoscopic polypectomy with middle meatotomy. Results: Twenty-nine patients with ACP were included in this study: 17 (58.6%) were female and 12 (41.4%) male, with an age range of 7-75 years (average, 27.55 years). The main symptoms were nasal obstruction in 24 (82%), oral breathing in 11 (37.9%), snoring in nine (31%), rhinorrhea in five (17%), epistaxis in two (6.9%), headache in two (6.9%), and postnasal drip in one (3.4%). An association with atopy was found in nine (31%). The mean follow-up period was 17 months (3-63 months). Only two patients (6.9%) presented with recurrence. Conclusion: The rate of recurrence obtained in our study is no different from literature data, even when compared with former and gold standard procedures. abstract_id: PUBMED:14719510 Antrochoanal polyps: clinical presentation and the role of powered endoscopic polypectomy. Antrochoanal polyps are a rare clinical entity. In this review of patients treated between January 1996 and September 2002, there were 18 cases of antrochoanal polyps. The mean age of patients was 20 years. Nasal obstruction was the most common symptom (17 cases, 94%), followed by rhinorrhoea (44%), epistaxis (33%), postnasal drip (28%), and snoring (22%). Chronic sinusitis was the most common associated rhinological finding (50%). Various surgical approaches were used: endoscopic polypectomy and middle meatal antrostomy in seven patients (38.9%), endoscopic polypectomy and inferior meatal antrostomy in three patients (16.7%) and Caldwell-Luc surgery in one patient (5.6%). No complications were noted in patients treated with powered instrumentation, including the three patients in whom combined transcanine approaches were used. We concluded that powered endoscopic polypectomy was safe and effective. It allowed a more complete dissection and may diminish the chance of recurrence. abstract_id: PUBMED:26981490 Killian's polyp mimicking malignant tumor. Killian polyp is predominantly found in children, and any sinonasal tumor in the elderly presenting with epistaxis and pain usually indicates malignant growth until proved otherwise. We present an unusual case of Killian polyp in an elderly patient that behaved as a malignant tumor. This case report reminded us that paranasal sinuses are still dark hollow mysterious cavities, and we should exercise the utmost clinical acumen in managing such cases. Answer: Yes, antrochoanal polyps can present with epistaxis, although it is not the most common symptom. Recurrent epistaxis was a presenting symptom in 10 out of 84 patients (11.9%) with antrochoanal polyps in one study, where the polyps with epistaxis showed a highly vascular stroma with multiple dilated blood vessels, typical of an angiomatous antrochoanal polyp (PUBMED:20003601). Another study found epistaxis to be more common among the pediatric age group with antrochoanal polyps (PUBMED:19641644). Additionally, a case of a young male patient with an antrochoanal polyp presenting with severe epistaxis has been reported, leading to an initial suspicion of an angiofibroma (PUBMED:2230563). However, it is important to note that while epistaxis can be a symptom, nasal obstruction is the most common presenting symptom of antrochoanal polyps (PUBMED:23311182).
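A brief note on precision: the headline 11.9% figure in this answer comes from 10 of 84 patients, so its confidence interval is fairly wide. The Python sketch below computes a 95% Wilson score interval for that proportion; the counts are from PUBMED:20003601, while the interval itself is not reported in the abstract and is shown here only as an illustration.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

low, high = wilson_ci(10, 84)  # 10 of 84 polyps presented with recurrent epistaxis
print(f"11.9% (95% Wilson CI {low:.1%} to {high:.1%})")  # roughly 6.6% to 20.5%
```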
Instruction: GCKR gene functional variants in type 2 diabetes and metabolic syndrome: do the rare variants associate with increased carotid intima-media thickness? Abstracts: abstract_id: PUBMED:21114848 GCKR gene functional variants in type 2 diabetes and metabolic syndrome: do the rare variants associate with increased carotid intima-media thickness? Background: Recent studies revealed that glucokinase regulatory protein (GCKR) variants (rs780094 and rs1260326) are associated with serum triglycerides and plasma glucose levels. Here we primarily analyzed the association of these two variants with the lipid profile and plasma glucose levels in Hungarian subjects with type 2 diabetes mellitus and metabolic syndrome, and also correlated the genotypes with the carotid intima-media thickness records. Methods: A total of 321 type 2 diabetic patients, 455 metabolic syndrome patients, and 172 healthy controls were genotyped by PCR-RFLP. Results: Both GCKR variants were found to associate with serum triglycerides and with fasting plasma glucose. However, no significant association with the development of type 2 diabetes mellitus or metabolic syndrome was observed. Analyzing the records of the patients, a positive association between the prevalence of the homozygous GCKR functional variants and carotid intima-media thickness was found in the metabolic syndrome patients. Conclusions: Our results support that the rs780094 and rs1260326 functional variants of the GCKR gene are inversely associated with serum triglycerides and fasting plasma glucose levels, as has already been reported for diabetic and metabolic syndrome patients in other populations. Besides this positive replication, as a novel feature, our preliminary findings also suggest a cardiovascular risk role of GCKR minor allele carriage based on the carotid intima-media thickness association. abstract_id: PUBMED:35999587 Identification of genetic variants related to metabolic syndrome by next-generation sequencing. Background: Metabolic syndrome (MetS) is a cluster of conditions associated with glucose intolerance, hypertension, abdominal obesity, dyslipidemia, and insulin resistance that increase the risk of cardiovascular diseases (CVD) and type 2 diabetes (T2D). Since MetS is known to be a complex condition with a substantial genetic component, it is important to identify genetic variants for each clinical characteristic of MetS. Methods: We performed targeted next-generation sequencing (NGS) to identify genetic variants related to obesity, blood glucose, triacylglycerol (TG), and high-density lipoprotein (HDL)-cholesterol level, and hypertension in 48 subjects with MetS and in 48 healthy subjects. Results: NGS analysis revealed that 26 of 48 subjects (54.2%) with MetS had putative non-synonymous variants related to the clinical features of MetS. Of the subjects with MetS, 8 (16.7%) had variants in 4 genes (COL6A2, FTO, SPARC, and MTHFR) related to central obesity, 17 (35.4%) had variants in 6 genes (APOB, SLC2A2, LPA, ABCG5, ABCG8, and GCKR) related to hyperglycemia, 3 (6.3%) had variants in 4 genes (APOA1, APOC2, APOA4, and LMF1) related to hypertriglyceridemia, 8 (16.7%) had variants in 4 genes (ABCA1, CETP, SCARB1, and LDLR) related to low HDL-cholesterolemia, and 5 (10.4%) had variants in ADD1 related to hypertension. Conclusions: Our findings may contribute to broadening the genetic spectrum of risk variants related to the development of MetS.
abstract_id: PUBMED:18439548 Loci related to metabolic-syndrome pathways including LEPR, HNF1A, IL6R, and GCKR associate with plasma C-reactive protein: the Women's Genome Health Study. Although elevated levels of C-reactive protein (CRP) independently predict increased risk of development of metabolic syndrome, diabetes, myocardial infarction, and stroke, comprehensive analysis of the influence of genetic variation on CRP is not available. To address this issue, we performed a genome-wide association study among 6345 apparently healthy women in which we evaluated 336,108 SNPs as potential determinants of plasma CRP concentration. Overall, seven loci that associate with plasma CRP at levels achieving genome-wide statistical significance were found (range of p values for lead SNPs within the seven loci: 1.9 × 10⁻⁸ to 6.2 × 10⁻²⁸). Two of these loci (GCKR and HNF1A) are suspected or known to be associated with maturity-onset diabetes of the young, one is a gene-desert region on 12q23.2, and the remaining four loci are in or near the leptin receptor protein gene, the apolipoprotein E gene, the interleukin-6 receptor protein gene, or the CRP gene itself. The protein products of six of these seven loci are directly involved in metabolic syndrome, insulin resistance, beta cell function, weight homeostasis, and/or premature atherothrombosis. Thus, common variation in several genes involved in metabolic and inflammatory regulation has significant effects on CRP levels, consistent with CRP's identification as a useful biomarker of risk for incident vascular disease and diabetes. abstract_id: PUBMED:36412629 Cardiovascular Risk Factors in Children and Adolescents with Type 1 Diabetes Mellitus: The Role of Insulin Resistance and Associated Genetic Variants. Introduction: Type 1 diabetes (T1D) is associated with an increased risk of cardiovascular disease. Insulin resistance is an important cardiovascular risk factor (CVRF) in subjects with T1D as well, but the influence of the genetic predisposition to insulin resistance on cardiovascular risk is still unknown in T1D. We aimed to determine whether a genetic score composed of six variants, previously associated with insulin resistance and type 2 diabetes (T2D) risk, associates with insulin sensitivity and known CVRFs in children and adolescents with T1D. Materials And Methods: 330 children and adolescents (174 males; mean age 15.7 ± 3.5 years) with T1D were genotyped for the following genetic variants: rs1801278 (IRS1), rs1044498 (ENPP1), rs2295490 (TRIB3), rs1801282 (PPARG), rs780094 (GCKR), and rs35767 (IGF1). An additive genetic risk score (GRS) and cardiovascular risk score (CVRS) were calculated. Anthropometric, glycemic control, insulin sensitivity, blood pressure, and biochemical parameters were assessed. Multivariate regression between evaluated phenotypes and GRS was performed. Results: We found a significant association between the GRS and estimated insulin sensitivity (β = -0.027 [-0.040 to -0.013], R² = 0.86, p ≤ 0.001), diastolic blood pressure (β = 0.68 [0.08-1.27], R² = 0.20, p = 0.026), triglycerides (β = 4.26 [1.74-6.77], R² = 0.13, p = 0.001), waist-to-height ratio (β = 0.003 [0.001-0.006], R² = 0.75, p = 0.010), non-HDL-cholesterol (β = 3.63 [1.39-5.87], R² = 0.12, p = 0.002), and CVRS (β = 0.063 [0.008-0.118], R² = 0.19, p = 0.025), independent of age, sex, BMI, pubertal stage, diabetes duration, glycated hemoglobin, type of treatment, and total insulin requirement.
The addition of the GRS to established clinical risk factors significantly improved the discriminatory capability of the regression model for predicting subjects with more CVRFs (C-statistic 0.89 [95% CI: 0.84-0.95] versus 0.83 [0.73-0.93]; p = 0.037). Conclusions: Insulin resistance and T2D risk-associated genetic variants influence insulin sensitivity and known CVRFs in children and adolescents with T1D. abstract_id: PUBMED:30176313 Gene variants in AKT1, GCKR and SOCS3 are differentially associated with metabolic traits in Mexican Amerindians and Mestizos. Amerindian ancestry appears to be a risk factor for metabolic diseases (MetD), making Mexicans an ideal population to better understand the genetic architecture of metabolic health. In this study, we determined the association of genetic variants previously reported for metabolic entities in two Mexican populations, including the largest sample of Amerindians reported to date. We investigated the association of eight single-nucleotide polymorphisms (SNPs) in the AKT1, GCKR, and SOCS3 genes with different metabolic traits in 1923 Mexican Amerindians (MAs) belonging to 57 ethnic groups, and 855 Mestizos (MEZs). The allele frequency of 7/8 SNPs showed significant differences between MAs and MEZs. Interestingly, some alleles were monomorphic in particular ethnic groups, and highly frequent in others. With the exception of GCKR rs1260326T, as expected, all SNP frequencies in the MEZ population had intermediate values between its two main ancestral populations (MAs and Iberian populations in Spain [IBS]). We detected ethnic differences in linkage disequilibrium patterns and haplotype structure between MAs and MEZs, possibly due to the high genetic heterogeneity in these populations. Remarkably, AKT1 was associated with hypertension in MEZs, but not in MAs. GCKR was associated with protection against type 2 diabetes (T2D) in MAs, and with hypertriglyceridemia and protection against low HDL cholesterol (HDL-C) levels in MEZs. The CAT haplotype in SOCS3 was associated with metabolic syndrome (MetS) in MEZs, and correlated with protection against high blood pressure (HBP) and risk for high waist circumference and T2D in MAs. Our results show differential genetic associations with metabolic traits between MAs and MEZs, possibly due to the differences in genetic structure between these Mexican populations. abstract_id: PUBMED:21386085 A bivariate genome-wide approach to metabolic syndrome: STAMPEED consortium. OBJECTIVE The metabolic syndrome (MetS) is defined as concomitant disorders of lipid and glucose metabolism, central obesity, and high blood pressure, with an increased risk of type 2 diabetes and cardiovascular disease. This study tests whether common genetic variants with pleiotropic effects account for some of the correlated architecture among five metabolic phenotypes that define MetS. RESEARCH DESIGN AND METHODS Seven studies of the STAMPEED consortium, comprising 22,161 participants of European ancestry, underwent genome-wide association analyses of metabolic traits using a panel of ∼2.5 million imputed single nucleotide polymorphisms (SNPs). Phenotypes were defined by the National Cholesterol Education Program (NCEP) criteria for MetS in pairwise combinations. Individuals exceeding the NCEP thresholds for both traits of a pair were considered affected. RESULTS Twenty-nine common variants were associated with MetS or a pair of traits.
Variants in the genes LPL, CETP, APOA5 (and its cluster), GCKR (and its cluster), LIPC, TRIB1, LOC100128354/MTNR1B, ABCB11, and LOC100129150 were further tested for their association with individual qualitative and quantitative traits. None of the 16 top SNPs (one per gene) associated simultaneously with more than two individual traits. Of these, 11 variants showed nominal associations with MetS per se. The effects of the 16 top SNPs on the quantitative traits were relatively small, together explaining ∼9% of the variance in triglycerides, 5.8% of high-density lipoprotein cholesterol, 3.6% of fasting glucose, and 1.4% of systolic blood pressure. CONCLUSIONS Qualitative and quantitative pleiotropic tests on pairs of traits indicate that a small portion of the covariation in these traits can be explained by the reported common genetic variants. abstract_id: PUBMED:31796261 Contribution of genetic, biochemical and environmental factors on insulin resistance and obesity in Mexican young adults. Overweight/obesity, dyslipidemias, hypertension and hyperglycemia are strongly related to non-communicable diseases (NCD), in which genetic and environmental factors interact with each other. The Mexican population exhibits a genetic disposition to metabolic syndrome and type 2 diabetes, as well as many forms of dyslipidemia. This study aimed to determine the association between biochemical, genetic and environmental factors in the development of metabolic syndrome (MS), obesity and insulin resistance (IR) in Mexican young adults. Young women and men (n=6750; mean age 19.3±2.3 years) participated in a health promotion program at the Autonomous University of Querétaro, México (SU-Salud program). A sub-sample of 665 participants was taken for the determination of the single nucleotide polymorphisms (SNPs) rs964184 (APOAV), rs9282541 (ABCA1) and rs1260326 (GCKR), using the QuantStudio 12K Flex Real-Time PCR System. For the multivariate analysis, a multiple logistic regression was performed. A prevalence of 22% of overweight and 7% of obesity was determined. The main metabolic risk factors were low levels of HDL-C (30%), IR (19%), and a high level of triglycerides (15%). The main factors associated with IR were body fat percentage and triglycerides; the SNP for the ABCA1 gene was related to MS, obesity and low HDL-C; the SNP for the GCKR gene was related to high fasting glycemia, while the APOAV SNP was related to MS, hypertriglyceridemia and low HDL-C. Our findings show that the Mexican genetic predisposition to NCD affects young adults, who can suffer MS, obesity and IR. Public health strategies must focus on prevention actions from an early age. abstract_id: PUBMED:18853134 The search for putative unifying genetic factors for components of the metabolic syndrome. Aims/hypothesis: The metabolic syndrome is a cluster of factors contributing to increased risk of cardiovascular disease and type 2 diabetes, but unifying mechanisms have not been identified. Our aim was to study whether common variations in 17 genes previously associated with type 2 diabetes or components of the metabolic syndrome, and variants in nine genes with inconsistent association with at least two components of the metabolic syndrome, would also predict future development of components of the metabolic syndrome, individually or in combination. Methods: Genetic variants were studied in a large prospective study of 16,143 non-diabetic individuals (mean follow-up time 23 years) from the Malmö Preventive Project.
In this study, development of at least three of obesity (BMI ≥30 kg/m²), dyslipidaemia (triacylglycerol ≥1.7 mmol/l and/or lipid-lowering treatment), hypertension (blood pressure ≥140/90 mmHg and/or antihypertensive medication) and hyperglycaemia (fasting plasma glucose ≥5.6 mmol/l and/or known diabetes) was defined as development of the metabolic syndrome. The risk of developing at least three components of the metabolic syndrome or the individual components was calculated by logistic regression adjusted for age at baseline, follow-up time and sex. Results: Polymorphisms in TCF7L2 (rs7903146, OR 1.10, 95% CI 1.04-1.17, p = 0.00097), FTO (rs9939609, OR 1.08, 95% CI 1.02-1.14, p = 0.0065), WFS1 (rs10010131, OR 1.07, 95% CI 1.02-1.13, p = 0.0078) and IGF2BP2 (rs4402960, OR 1.07, 95% CI 1.01-1.13, p = 0.021) predicted the development of at least three components of the metabolic syndrome in both univariate and multivariate analysis; in the case of TCF7L2, WFS1 and IGF2BP2 this was due to their association with hyperglycaemia (p < 0.00001, p = 0.0033 and p = 0.027, respectively), and for FTO it was due to its association with obesity (p = 0.004). A polymorphism in the GCKR gene predicted dyslipidaemia (rs1260326, OR 1.15, 95% CI 1.09-1.22, p < 0.00001) but not the metabolic syndrome. None of the studied polymorphisms was associated with more than two components of the metabolic syndrome. A composite genotype score of the 17 polymorphisms associated with type 2 diabetes predicted the development of at least three components of the metabolic syndrome (OR 1.04, p < 0.00001) and the development of hyperglycaemia (OR 1.06, p < 0.00001). Carriers of ≥19 risk alleles had 51 and 72% increased risk of developing at least three components of the metabolic syndrome and hyperglycaemia, respectively, compared with carriers of ≤12 risk alleles (p < 0.00001 for both). Conclusions/interpretation: Polymorphisms in susceptibility genes for type 2 diabetes (TCF7L2, WFS1, IGF2BP2) and obesity (FTO) predispose to the metabolic syndrome by increasing the risk of one specific component of the metabolic syndrome. The findings argue against a unifying genetic component for the metabolic syndrome. abstract_id: PUBMED:30945694 Genetics of nonalcoholic fatty liver disease in Asian populations. Nonalcoholic fatty liver disease (NAFLD) is characterized by the accumulation of fat in the liver without any history of chronic alcohol consumption. It encompasses a wide spectrum of diseases that range from simple steatosis to nonalcoholic steatohepatitis. NAFLD is strongly associated with obesity, insulin resistance/type 2 diabetes mellitus and the metabolic syndrome. NAFLD is a complex disorder; environmental and genetic factors interact with NAFLD manifestation and determine its progression. In this review, an attempt was made to provide current information on the genetic variants of NAFLD in Asian populations. A literature search was performed using the PubMed, Medline and Google Scholar databases. Candidate gene, validation and genomewide association studies (GWASs) were included in this review. A total of 41 studies fulfilled the inclusion criteria, of which 12 candidate gene studies focussed exclusively on the PNPLA3 gene, and 17 other studies on other important candidate genes such as NCAN/CILP2, PPARG, AGTR1, FABP1, APOC3, etc. reported significant associations with NAFLD. Eight validation studies identified associations of variants in PNPLA3, LYPLAL1, TM6SF2, ADIPOR2, STAT3, GCKR, SAMM50, etc. with NAFLD.
Thus, so far, four GWASs have been conducted in Asian populations, reporting the PNPLA3, SAMM50, PARVB and GATAD2A genes as significantly associated with NAFLD. Findings indicate that the PNPLA3, APOC3, PPARG, NCAN and GCKR genes emerge as the important biological markers associated with NAFLD. abstract_id: PUBMED:22553379 Hyperglycemia and a common variant of GCKR are associated with the levels of eight amino acids in 9,369 Finnish men. We investigated the association of glycemia and 43 genetic risk variants for hyperglycemia/type 2 diabetes with amino acid levels in the population-based Metabolic Syndrome in Men (METSIM) Study, including 9,369 nondiabetic or newly diagnosed type 2 diabetic Finnish men. Plasma levels of eight amino acids were measured with proton nuclear magnetic resonance spectroscopy. Increasing fasting and 2-h plasma glucose levels were associated with increasing levels of several amino acids and decreasing levels of histidine and glutamine. Alanine, leucine, isoleucine, tyrosine, and glutamine predicted incident type 2 diabetes in a 4.7-year follow-up of the METSIM Study, and their effects were largely mediated by insulin resistance (except for glutamine). We also found significant correlations between insulin sensitivity (Matsuda insulin sensitivity index) and mRNA expression of genes regulating amino acid degradation in 200 subcutaneous adipose tissue samples. Only 1 of 43 risk single nucleotide polymorphisms for type 2 diabetes or hyperglycemia, the glucose-increasing major C allele of rs780094 of GCKR, was significantly associated with decreased levels of alanine and isoleucine and elevated levels of glutamine. In conclusion, the levels of branched-chain and aromatic amino acids and alanine increased, and the levels of glutamine and histidine decreased, with increasing glycemia, reflecting, at least in part, insulin resistance. Only one single nucleotide polymorphism regulating hyperglycemia was significantly associated with amino acid levels. Answer: The study identified in abstract PUBMED:21114848 investigated the association of two GCKR gene variants (rs780094 and rs1260326) with serum triglycerides, fasting plasma glucose levels, type 2 diabetes mellitus (T2DM), metabolic syndrome (MetS), and carotid intima-media thickness (CIMT). The results indicated that both GCKR variants were associated with serum triglycerides and fasting plasma glucose levels. However, no significant association was observed with the development of T2DM and MetS. Notably, the study found a positive association between the prevalence of the GCKR homozygous functional variants and CIMT in MetS patients. This suggests that GCKR minor allele carriage may play a role in cardiovascular risk based on the CIMT association. Therefore, the study supports the idea that functional variants of the GCKR gene are inversely associated with serum triglycerides and fasting plasma glucose levels and preliminarily suggests a cardiovascular risk role based on the association with CIMT in MetS patients.
Instruction: Should warfarin or aspirin be stopped prior to prostate biopsy? Abstracts: abstract_id: PUBMED:22959852 Should warfarin or aspirin be stopped prior to prostate biopsy? An analysis of bleeding complications related to increasing sample number regimes. Aim: To determine whether patients undergoing transrectal ultrasound (TRUS)-guided prostate biopsy with increased sampling numbers are more likely to experience bleeding complications and whether warfarin or low-dose aspirin are independent risk factors. Materials And Methods: 930 consecutive patients with suspected prostatic cancer were followed up after biopsy. Warfarin/low-dose aspirin was not stopped prior to the procedure. An eight- to 10-sample regime TRUS-guided prostate biopsy was performed, and patients were offered a questionnaire to complete 10 days after the procedure, to determine any immediate or delayed bleeding complications. Results: 902 patients returned completed questionnaires. 579 (64.2%) underwent eight core biopsies, 47 (5.2%) underwent nine, and 276 (30.6%) underwent 10. 68 were taking warfarin [mean international normalized ratio (INR) = 2.5], 216 were taking low-dose aspirin, one was taking both, and 617 were taking neither. 27.9% of those on warfarin and 33.8% of those on aspirin experienced haematuria. 37% of those on no blood-thinning medication experienced haematuria. 13.2% of those on warfarin and 14.4% of those on aspirin experienced rectal bleeding. 11.5% of those on no blood-thinning medication experienced rectal bleeding. 7.4% of those on warfarin and 12% of those on aspirin experienced haematospermia. 13.8% of those on neither experienced haematospermia. Regression analysis showed a significant association between increasing sampling number and occurrence of all bleeding complication types. There was no significant association between minor bleeding complications and warfarin use; however, there was a significant association between minor bleeding complications and low-dose aspirin use. There were no severe bleeding complications. Conclusion: There is an increased risk of bleeding complications following TRUS-guided prostate biopsy with increased sampling numbers, but these are minor. There is also an increased risk with low-dose aspirin use; however, there is no increased risk of bleeding complications with warfarin use. These results suggest that up to 10 cores during prostate biopsy remains acceptable, safe practice and that cessation of warfarin and low-dose aspirin is usually not necessary. abstract_id: PUBMED:10505996 Management of patients treated with aspirin or warfarin and evaluation of haemostasis prior to prostatic biopsy: a survey of current practice amongst radiologists and urologists. Aim: To document current practice concerning the management of patients taking aspirin or warfarin and the evaluation of haemostatic function prior to prostatic biopsy. Method: A postal survey was performed with typed questionnaires being sent to 275 urology and 275 radiology centres. Results: A high proportion of radiology departments in particular (83%) had protocols in place concerning the management of aspirin or warfarin prior to prostatic biopsy. A significant proportion of both radiologists and urologists have postponed biopsies due to patients unexpectedly taking these medications. Few of the respondents reported the use of pre-biopsy screening blood tests.
Fifty-two percent of radiologists and 27% of urologists terminated aspirin prior to prostatic biopsy, although the urologists stopped aspirin for a longer time period. Ninety-five percent of radiologists and 84% of urologists terminated warfarin prior to prostatic biopsy, although again the urologists stopped warfarin at an earlier stage. Most of those respondents who stopped warfarin prior to biopsy also checked the INR. The urologists generally stated a higher threshold (INR) which would be considered too high to proceed. Conclusion: There are wide variations in practice both within and between the radiology and urology groups. This is unsurprising, since there is conflicting advice in the relevant literature. abstract_id: PUBMED:15767103 Biopsy of the prostate guided by transrectal ultrasound: relation between warfarin use and incidence of bleeding complications. Aim: To determine the relation between warfarin use and the frequency of bleeding complications after biopsy of the prostate guided by transrectal ultrasound (TRUS). Methods: Overall, 1022 consecutive patients with suspected prostatic disease were followed after biopsy. Warfarin and aspirin use was determined on the day of the procedure. A TRUS-guided biopsy was performed and patients were offered a questionnaire to complete 10 days after the procedure, to determine any immediate or delayed bleeding complications. Follow-up telephone calls were made to those who had not replied within the stipulated period. Results: Of the 1000 patients who replied, 49 were receiving warfarin, 220 were receiving aspirin and 731 were not receiving any anticoagulant drugs. Of the 49 subjects reporting current use of warfarin, 18 (36.7%) experienced haematuria, compared with 440 (60.2%) of the patients receiving no anticoagulant drugs who reported haematuria. This was statistically significant (p = 0.001). Of the group receiving warfarin, 4 (8.2%) experienced haematospermia, whereas 153 (21%) of the group receiving no anticoagulant medication reported haematospermia. This difference was also statistically significant (p = 0.030). Rectal bleeding was experienced by 7 (14.3%) of the group receiving warfarin compared with 95 (13%) in the group without anticoagulant medication, but this was not statistically significant (p = 0.80). We also demonstrated that there was no statistically significant association between the severity of the bleeding complications and medication with warfarin. Conclusion: None of the group receiving warfarin experienced clinically important bleeding complications. Our results suggest that the frequency and severity of bleeding complications were no worse in the warfarin group than in the control group and that discontinuing anticoagulation medication before prostate biopsy may be unnecessary. abstract_id: PUBMED:22536465 Discontinuation of anticoagulant or antiplatelet therapy for transrectal ultrasound-guided prostate biopsies: a single-center experience. Purpose: Historically, it was thought that hemorrhagic complications were increased with transrectal ultrasound-guided prostate biopsies (TRUS biopsy) of patients receiving anticoagulation/antiplatelet therapy. However, the current literature supports the continuation of anticoagulation/antiplatelet therapy without additional morbidity. We assessed our experience regarding the continuation of anticoagulation/antiplatelet therapy during TRUS biopsy.
Materials And Methods: A total of 91 and 98 patients were included in the anticoagulation/antiplatelet (group I) and control (group II) groups, respectively. Group I subgroups consisted of patients on monotherapy or dual therapy of aspirin, warfarin, clopidogrel, or low molecular weight heparin. The TRUS biopsy technique was standardized to 12 cores from the peripheral zones. Patients completed a questionnaire over the 7 days following TRUS biopsy. The questionnaire was designed to assess the presence of hematuria, rectal bleeding, and hematospermia. Development of rectal pain, fever, and emergency hospital admissions following TRUS biopsy were also recorded. Results: The patients' mean age was 65 years (range, 52 to 74 years) and 63.5 years (range, 54 to 74 years) in groups I and II, respectively. The overall incidence of hematuria was 46% in group I compared with 63% in group II (p=0.018). The incidence of hematospermia was 6% and 10% in groups I and II, respectively. The incidence of rectal bleeding was similar in group I (40%) and group II (39%). Statistical analysis was conducted using the Fisher exact test. Conclusions: There were fewer hematuria episodes in anticoagulation/antiplatelet patients. This study suggests that it is not necessary to discontinue anticoagulation/antiplatelet treatment before TRUS biopsy. abstract_id: PUBMED:15999118 Effect of short-term finasteride therapy on peroperative bleeding in patients who were candidates for transurethral resection of the prostate (TUR-P): a randomized controlled study. In the present study we aimed to demonstrate the efficacy of short-term pretreatment with finasteride in patients undergoing transurethral resection of the prostate (TUR-P). For this purpose 40 patients with BPH, who were candidates for TUR-P, were randomized into two groups. The first group (n=20) received 5 mg finasteride/day for 4 weeks prior to surgery and the second group (n=20) remained as the control. Patients who underwent prior prostate or urethral surgery and had a diagnosis of prostate cancer or chronic renal failure, patients who received finasteride, aspirin, coumadin or similar anticoagulant drugs prior to surgery, and patients who had capsule perforations or open sinuses during the surgery were excluded from the study. All patients had a normal digital rectal examination and PSA values less than 4 ng/ml. The results showed no statistically significant difference between the finasteride and control groups regarding age, IPSS, PSA, prostate volume, preoperative serum hemoglobin and hematocrit values, mean operating time, or the volume of irrigating fluid used. The total amount of bleeding and bleeding per gram of resected tissue were significantly lower in the finasteride group regardless of prostate volume. Furthermore, the decrease in the hemoglobin and hematocrit values was higher in the control group. In conclusion, four weeks of finasteride pretreatment provided a significant decrease in peroperative bleeding regardless of prostate volume, without any major side effects. abstract_id: PUBMED:24766557 Safety of gastrointestinal endoscopic biopsy in patients taking antithrombotics. Background And Aim: Current Japanese gastrointestinal (GI) endoscopic guidelines permit endoscopic biopsy without cessation of antiplatelet agents and warfarin in patients with a therapeutic range of prothrombin time-international normalized ratio (PT-INR) levels, although the evidence levels are low.
We evaluated the safety of endoscopic biopsy in patients currently taking antithrombotics. Methods: Consecutive patients receiving antithrombotics who underwent GI endoscopy from August 2012 to August 2013 were enrolled. Adverse events and endoscopic hemostasis after biopsy were evaluated. PT-INR level was measured in patients taking warfarin the day before endoscopy. Results: Among 7939 patients undergoing endoscopy, 1034 patients (13.0%, 706 men and 328 women, average age 72.8 years) were receiving antithrombotics. Antithrombotics included aspirin (44.8%), warfarin (34.7%), thienopyridine (16.1%), cilostazol (10.3%), dabigatran (4.8%), etc. PT-INR levels in patients taking warfarin were >3.0 in 13 patients (4.3%), between 2.5 and 3.0 in 18 patients (6.0%), and <2.5 in 269 patients (89.7%). Two hundred and six patients received endoscopic biopsy while taking aspirin (51.2%), warfarin (22.8%), and thienopyridine (13.6%). Endoscopic hemostasis was required in three patients after endoscopic biopsy (spraying thrombin in two patients, spraying thrombin and clipping in one patient). There were no major complications. The incidence of endoscopic hemostasis after biopsy in patients without antithrombotic cessation was not significantly different from that in the controls not taking antithrombotics (1.5% vs 0.98%, P = 0.51). Conclusion: Endoscopic biopsy did not increase the bleeding risk despite not stopping antithrombotics prior to biopsy, even among patients taking warfarin whose PT-INR was within the therapeutic range. abstract_id: PUBMED:12786808 Bone marrow biopsy morbidity and mortality. A postal survey of adverse events associated with bone marrow biopsy (aspiration biopsy with or without trephine biopsy) was carried out among British Society of Haematology members between 1995 and 2001. A total of 26 adverse events, including one death directly attributable to the procedure, were reported among an estimated 54 890 biopsies. The most frequent and most serious adverse event was haemorrhage, reported in 14 patients, necessitating blood transfusion in six patients and leading to the single death. The potential risk factors most often associated with haemorrhage were a diagnosis of a myeloproliferative disorder, aspirin therapy, or both. Other potential risk factors were warfarin therapy, disseminated intravascular coagulation and obesity. abstract_id: PUBMED:31528383 When to stop anticoagulation, anti-platelet aggregates, and non-steroidal anti-inflammatories (NSAIDs) prior to spine surgery. Background: Based upon a select review of the literature, in my opinion, spine surgeons, not just our medical/cardiological colleagues, need to know when to stop anticoagulant, anti-platelet aggregate, and non-steroidal anti-inflammatory (NSAID) medications prior to spine surgery to avoid perioperative bleeding complications. Methods: Typically, medical/cardiological consultants, who "clear our patients," are not as aware as we are of the increased risks of perioperative bleeding if anticoagulants, anti-platelet agents, and NSAIDs are not stopped in a timely fashion prior to spine surgery (e.g. excessive intraoperative hemorrhage, and postoperative seromas, hematomas, and wound dehiscence). Results: Different medications need to be discontinued at varying intervals prior to spinal operations. The anticoagulants include: Warfarin (stop at least 5 days preoperatively), and Xa inhibitors (Eliquis (Apixaban: stop for 2 days) and Xarelto (Rivaroxaban: stop for 3 days)); note that data presently vary.
The anti-platelet aggregates include: Aspirin/Clopidogrel (stop >7-10 days preoperatively). The multiple NSAIDs should be stopped for varying intervals ranging from 1-10 days prior to spine surgery, and increase bleeding risks when combined with any of the anticoagulants or anti-platelet aggregates. NSAIDs (generic/commercial names) should be stopped preoperatively for at least: 1 day for Diclofenac (Voltaren), Ibuprofen (Advil, Motrin), and Ketorolac (Toradol); 2 days for Etodolac (Lodine) and Indomethacin (Indocin); 4 days for Meloxicam (Mobic) and Naproxen (Aleve, Naprosyn, Anaprox); 4 days for Nabumetone (Relafen); 6 days for Oxaprozin (Daypro); and 10 days for Piroxicam (Feldene). Conclusions: Spine surgeons need to know when anti-platelet, anticoagulant, and NSAID therapies should be stopped prior to spine surgery to avoid perioperative bleeding complications. abstract_id: PUBMED:17951940 Outpatient percutaneous renal biopsy in adult patients. To study the safety and efficacy of performing percutaneous renal biopsy in the outpatient department compared to the traditional inpatient policy, we studied 44 consecutive patients with proteinuria and other urinary sediment abnormalities, at King Fahd Hospital of the University, Al-Khobar, Saudi Arabia, during the period from September 2004 to August 2006. The patients were divided into two groups: group I, in whom kidney biopsy was performed and followed by 1-day hospital admission; and group II, in whom renal biopsy was performed in the outpatient department and followed by a 6-hour observation period and then by regular outpatient visits. All biopsies were performed with the use of real-time ultrasound and an automated biopsy needle. Patients with a history of a bleeding diathesis or abnormal coagulation profile and those receiving warfarin, heparin, aspirin, or nonsteroidal anti-inflammatory drugs were excluded from the study. Only minor biopsy-related complications, such as gross hematuria and perinephric hematoma that resolved without the need for blood transfusion or surgical intervention, occurred in three (13.6%) patients in group I and in two (9.1%) patients in group II. The complications were apparent within 6 hours in all but one patient (97.7%). Overall, hematuria was identified in 52% of patients at ≤2 hours, 85% at ≤4 hours, and 97.7% at ≤6 hours. The 24-hour hematocrit levels were not significantly different between the study groups. One (4.5%) patient from group II had a small perinephric hematoma, which was detected by ultrasound examination at 24 hours but not at 6 hours after biopsy; it resolved spontaneously without intervention. We conclude that in selected patients, same-day discharge 6 hours after renal biopsy may be offered safely without increased risk of complications. abstract_id: PUBMED:18806164 Anticoagulation and bleeding risk after core needle biopsy. Objective: The objective of our study was to compare the bleeding complication rates after core needle biopsy in patients receiving anticoagulation therapy and those who were not, to assess the safety of performing core needle breast biopsy in anticoagulated patients. Materials And Methods: Core needle biopsy was performed at 1,144 sites in 1,055 women from August 2004 to May 2007. A retrospective study of these cases was performed.
The patient group was composed of 200 women (220 biopsy sites) who were taking anticoagulant therapy daily (180 patients, aspirin; 16 patients, warfarin; and four patients, Excedrin), and the control group was composed of 855 women (924 biopsy sites) who were not receiving daily anticoagulant therapy. Any adverse reactions after core needle biopsy, including the presence and size of bruises or lumps (hematomas), were recorded. Results: There was a statistically significant difference (p = 0.035) in the percentage of bruises between patients receiving anticoagulation therapy and those who were not. Bruising occurred in 68 of the 200 (34%) women in the patient group (anticoagulated), whereas bruising occurred in 227 of the 855 (26.5%) women in the control group (nonanticoagulated). The differences were not statistically significant for hematoma formation (p = 0.274) or bruising with hematoma formation (p = 0.413). Hematoma occurred in 12 of the 200 (6%) anticoagulated patients versus 36 of the 855 (4.2%) patients in the control group. Patients reporting a bruise and lump (hematoma) numbered 10 of 200 (5%) for the anticoagulated group and 32 of 855 (3.7%) for the control group. Conclusion: No patients undergoing core needle biopsy reported any clinically important complications. The results of this study confirm that performing core needle biopsy in patients on anticoagulation therapy is safe. Answer: Based on the provided abstracts, it appears that stopping warfarin or aspirin prior to prostate biopsy may not be necessary. Several studies have analyzed the risk of bleeding complications in patients undergoing transrectal ultrasound (TRUS)-guided prostate biopsies while on anticoagulant or antiplatelet therapy. One study with 930 consecutive patients found no significant association between minor bleeding complications and warfarin use, although there was a significant association with low-dose aspirin use. However, the study concluded that there is no increased risk of severe bleeding complications with warfarin use and suggested that cessation of warfarin and low-dose aspirin is usually not necessary (PUBMED:22959852). Another study involving 1022 patients also suggested that the frequency and severity of bleeding complications were no worse in the warfarin group than in the control group, indicating that discontinuing anticoagulation medication before prostate biopsy may be unnecessary (PUBMED:15767103). A single-center experience with 91 patients on anticoagulation/antiplatelet therapy and 98 control patients showed fewer episodes of hematuria in the anticoagulation/antiplatelet group, suggesting that discontinuation of these treatments before TRUS biopsy is not necessary (PUBMED:22536465). However, it is important to note that practices vary widely, and some practitioners may choose to stop these medications prior to biopsy. A survey showed that a significant proportion of radiologists and urologists terminated warfarin prior to prostatic biopsy, with most checking the INR before proceeding (PUBMED:10505996). In conclusion, the evidence from the provided abstracts suggests that warfarin and aspirin may not need to be stopped prior to prostate biopsy, as the risk of bleeding complications is not significantly increased. However, individual clinical judgment and patient-specific factors should be considered when making this decision.
Instruction: Redefining the target early during treatment. Can we visualize regional differences within the target volume using sequential diffusion weighted MRI? Abstracts: abstract_id: PUBMED:24231234 Redefining the target early during treatment. Can we visualize regional differences within the target volume using sequential diffusion weighted MRI? Purpose: In head and neck cancer, diffusion weighted MRI (DWI) can predict response early during treatment. Treatment-induced changes and DWI-specific artifacts hinder an accurate registration between apparent diffusion coefficient (ADC) maps. The aim of the study was to develop a registration tool which calculates and visualizes regional changes in ADC. Methods: Twenty patients with stage IV HNC treated with primary radiotherapy received an MRI including DWI before and early during treatment. Markers were manually placed at anatomical landmarks on the different modalities at both time points. A registration method, consisting of a fully automatic rigid and nonrigid registration and two semi-automatic thin-plate spline (TPS) warps, was developed and applied to the image sets. After each registration step the mean registration errors were calculated and ΔADC was compared between good and poor responders. Results: Adding the TPS warps significantly reduced the registration error (6.3 ± 6.2 vs 3.2 ± 3.3 mm, p < 0.001). After the marker-based registration, the median ΔADC in poor responders was significantly lower than in good responders (7% vs. 21%; p < 0.001). Conclusions: This registration method allowed for a significant reduction of the mean registration error. Furthermore, the voxel-wise calculation of the ΔADC early during radiotherapy allowed for a visualization of the regional differences of ΔADC within the tumor. abstract_id: PUBMED:30223802 EXclusion of non-Involved uterus from the Target Volume (EXIT-trial): an individualized treatment for locally advanced cervical cancer using modern radiotherapy and imaging techniques. Background: Definitive chemoradiotherapy is standard of care in locally advanced cervical cancer (LACC). Both toxicity and local relapse remain major concerns in this treatment. We hypothesize that a magnetic resonance imaging (MRI) based redefining of the radiotherapeutic target volume will lead to a reduction of acute and late toxicity. In our center, chemoradiotherapy followed by hysterectomy was implemented successfully in the past. This enables us to assess the safety of reducing the target volume but also to explore the biological effects of chemoradiation on the resected hysterectomy specimen. Methods: The EXIT-trial is a phase II, single arm study aimed at LACC patients. This study evaluates whether an MRI-based exclusion of the non-tumor-bearing parts of the uterus from the target volume results in the absence of tumor in the parts of the uterus in the hysterectomy specimen that did not receive high-dose irradiation. Secondary endpoints include a dosimetric comparison of normal-tissue dose between the study treatment plans and plans treating the whole uterus at high doses; acute and chronic toxicity; overall survival; and local relapse- and progression-free survival. In the translational part of the study, we will evaluate the hypothesis that the baseline apparent diffusion coefficient (ADC) values of diffusion weighted MRI and their evolution 2 weeks after the start of CRT, for the whole tumor as well as for intra-tumoral regions, are prognostic for residual tumor on the hysterectomy specimen.
Discussion: Although MRI is already used to guide target delineation in brachytherapy, the EXIT-trial is the first to use this information to guide target delineation in external beam radiotherapy. Early therapy resistance prediction using DW-MRI opens a window for early treatment adaptation or further dose-escalation on tumors/intratumoral regions at risk for treatment failure. Trial Registration: Belgian Registration: B670201526181 (prospectively registered, 26/11/2015); ClinicalTrials.gov Identifier: NCT03542942 (retrospectively registered, 17/5/2018). abstract_id: PUBMED:23508528 Prediction of selective serotonin reuptake inhibitor response using diffusion-weighted MRI. Pre-treatment differences in serotonergic binding between those who remit to antidepressant treatment and those who do not have been found using Positron Emission Tomography (PET). To investigate these differences, an exploratory study was performed using a second imaging modality, diffusion-weighted MRI (DW-MRI). Eighteen antidepressant-free subjects with Major Depressive Disorder received a 25-direction DW-MRI scan prior to 8 weeks of selective serotonin reuptake inhibitor treatment. Probabilistic tractography was performed between the midbrain/raphe and two target regions implicated in depression pathophysiology (amygdala and hippocampus). Average fractional anisotropy (FA) within the derived tracts was compared between SSRI remitters and non-remitters, and the correlation between pre-treatment FA values and SSRI treatment outcome was assessed. Results indicate that average FA in DW-MRI-derived tracts to the right amygdala was significantly lower in non-remitters (0.55 ± 0.04) than remitters (0.61 ± 0.04, p < 0.01). In addition, there was a significant correlation between average FA in tracts to the right amygdala and SSRI treatment response. These relationships were found at a trend level when using the left amygdala as a tractography target. No significant differences were observed when using the hippocampus as the target. These regional differences, consistent with previous PET findings, suggest that the integrity and/or number of white matter fibers terminating in the right amygdala may be compromised in SSRI non-remitters. Further, this study points to the benefits of multimodal imaging and suggests that DW-MRI may provide a pre-treatment signature of SSRI depression remission at 8 weeks. abstract_id: PUBMED:26134438 Assessment of Treatment Response With Diffusion-Weighted MRI and Dynamic Contrast-Enhanced MRI in Patients With Early-Stage Breast Cancer Treated With Single-Dose Preoperative Radiotherapy: Initial Results. Single-dose preoperative stereotactic body radiotherapy is a novel radiotherapy technique for early-stage breast cancer, and the treatment response pattern of this technique needs to be investigated on a quantitative basis. In this work, dynamic contrast-enhanced magnetic resonance imaging and diffusion-weighted magnetic resonance imaging were used to study the treatment response pattern in a unique cohort of patients with early-stage breast cancer treated with preoperative radiation. Fifteen qualified female patients received single-dose preoperative radiotherapy with 1 of the 3 prescription doses: 15 Gy, 18 Gy, and 21 Gy. Magnetic resonance imaging scans including both diffusion-weighted magnetic resonance imaging and dynamic contrast-enhanced magnetic resonance imaging were acquired before radiotherapy for planning and after radiotherapy but before surgical resection.
In diffusion-weighted magnetic resonance imaging, the regional averaged apparent diffusion coefficient was calculated. In dynamic contrast-enhanced magnetic resonance imaging, the quantitative parameters Ktrans and ve were evaluated using the standard Tofts model based on the average contrast agent concentration within the region of interest, and the semiquantitative initial area under the concentration curve (iAUC6min) was also recorded. These parameters' relative changes after radiotherapy were calculated for the gross tumor volume, clinical target volume, and planning target volume. The initial results showed that after radiotherapy, the initial area under the concentration curve significantly increased in the planning target volume (P < .006) and clinical target volume (P < .006), and ve significantly increased in the planning target volume (P < .05) and clinical target volume (P < .05). Statistical studies suggested that linear correlations between treatment dose and the observed parameter changes exist in most examined tests; among these tests, the correlations between treatment dose and the change in the gross tumor volume regional averaged apparent diffusion coefficient (P < .012) and between treatment dose and planning target volume Ktrans (P < .029) were found to be statistically significant. Although it is still preliminary, this pilot study may be useful to provide insights for future works. abstract_id: PUBMED:28927030 Implementation of diffusion-weighted magnetic resonance imaging in target delineation of central lung cancer accompanied with atelectasis in precision radiotherapy. Radiotherapy, particularly the target delineation of cancer based on scanned images, plays a key role in the planning of cancer treatment. Recently, diffusion-weighted magnetic resonance imaging (DW-MRI) has emerged as a prospective superior procedure compared with intensified computed tomography (CT) and positron emission tomography (PET) in the target delineation of cancer. However, the implementation of DW-MRI in lung cancer, the leading cause of cancer-associated mortality worldwide, has not been extensively evaluated. In the present study, the gross target volumes of lung cancer masses delineated using the DW-MRI, CT and PET procedures were compared in a pairwise manner in a group of 27 lung cancer patients accompanied by atelectasis of various degrees. The data showed that, compared with the CT and PET procedures, DW-MRI provided more precise delineation of lung cancer while exhibiting higher reproducibility. Together with the fact that it is non-invasive and cost-effective, these data demonstrate the great application potential of the DW-MRI procedure in cancer precision radiotherapy. abstract_id: PUBMED:33458363 Evaluation of diffusion weighted imaging for tumor delineation in head-and-neck radiotherapy by comparison with automatically segmented 18F-fluorodeoxyglucose positron emission tomography. Background And Purpose: Diffusion weighted (DW) MRI may facilitate target volume delineation for head-and-neck (HN) radiation treatment planning. In this study we assessed the use of a dedicated, geometrically accurate, DW-MRI sequence for target volume delineation. The delineations were compared with semi-automatic segmentations on 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) images and evaluated for interobserver variation. Methods And Materials: Fifteen HN cancer patients underwent both DW-MRI and FDG-PET for RT treatment planning.
Target delineation on DW-MRI was performed by three observers, while for PET a semi-automatic segmentation was performed using a Gaussian mixture model. For interobserver variation and intermodality variation, volumes, overlap metrics and Hausdorff distances were calculated from the delineations. Results: The median volumes delineated by the three observers on DW-MRI were 10.8, 10.5 and 9.0 cm³, respectively, and were larger than the median PET volume (8.0 cm³). The median conformity index of DW-MRI for interobserver variation was 0.73 (range 0.38-0.80). Compared to PET, the delineations on DW-MRI by the three observers showed a median Dice similarity coefficient of 0.71, 0.69 and 0.72, respectively. The mean Hausdorff distance was small, with median (range) distances between PET and DW-MRI of 2.3 (1.5-6.8), 2.5 (1.6-6.9) and 2.0 (1.35-7.6) mm, respectively. Over all patients, the median 95th percentile distances were 6.0 (3.0-13.4), 6.6 (4.0-24.0) and 5.3 (3.4-26.0) mm. Conclusion: Using a dedicated DW-MRI sequence, target volumes could be defined with good interobserver agreement and a good overlap with PET. Target volume delineation using DW-MRI is promising in head-and-neck radiotherapy; combined with other modalities, it can lead to more precise target volume delineation. abstract_id: PUBMED:35817048 Feasibility study of clinical target volume definition for soft-tissue sarcoma using muscle fiber orientations derived from diffusion tensor imaging. Objective. Soft-tissue sarcoma spreads preferentially along muscle fibers. We explore the utility of deriving muscle fiber orientations from diffusion tensor MRI (DT-MRI) for defining the boundary of the clinical target volume (CTV) in muscle tissue. Approach. We recruited eight healthy volunteers to acquire MR images of the left and right thigh. The imaging session consisted of (a) two MRI spin-echo-based scans, T1- and T2-weighted; (b) a diffusion weighted (DW) spin-echo-based scan using an echo planar acquisition with fat suppression. The thigh muscles were auto-segmented using a convolutional neural network. DT-MRI data were used as a geometry encoding input to solve the anisotropic Eikonal equation with the Hamiltonian Fast-Marching method. The isosurfaces of the solution modeled the CTV boundary. Main results. The auto-segmented muscles of the thigh agreed with the manually delineated ones, with Dice scores ranging from 0.8 to 0.94 for different muscles. To validate our method of deriving muscle fiber orientations, we compared anisotropy of the isosurfaces across muscles with different anatomical orientations within a thigh, between muscles in the left and right thighs of each subject, and between different subjects. The fiber orientations were identified reproducibly across all comparisons. We identified two controlling parameters, the distance from the gross tumor volume to the isosurface and the eigenvalues ratio, to tailor the proposed CTV to the satisfaction of the clinician. Significance. Our feasibility study with healthy volunteers shows the promise of using muscle fiber orientations derived from DW MRI data for automated generation of an anisotropic CTV boundary in soft tissue sarcoma. Our contribution is significant as it serves as a proof of principle for combining DT-MRI information with tumor spread modeling, in contrast to using moderately informative 2D CT planes for the CTV delineation. Such improvements will positively impact cancer centers with a small volume of sarcoma patients.
abstract_id: PUBMED:37586375 Consistency of muscle fiber directionality in the human thigh derived from diffusion-weighted MRI. Objective. Diffusion-weighted MR imaging (DW-MRI) is known to quantify muscle fiber directionality and thus may be useful for radiotherapy target definition in sarcomas. Here, we investigate the variability of tissue anisotropy derived from the diffusion tensor (DT) in the human thigh to establish the baseline parameters and protocols for DW-MRI acquisition for future studies in sarcoma patients. Approach. We recruited ten healthy volunteers to acquire diffusion-weighted MR images of the left and right thigh. DW-MRI data were used to reconstruct DT eigenvectors within each individual thigh muscle. Deviations of the principal eigenvector from its mean were calculated for different experimental conditions. Main results. Within the majority of muscles in most subjects, the mode of the histogram of the angular deviation of the principal eigenvector of the water DT from its muscle-averaged value did not exceed 20°. On average for all subjects, the mode ranged from 15° to 24°. Deviations much larger than 20° were observed in muscles far from the RF coil, including cases with significant amounts of subcutaneous fat and muscle deformation under its own weight. Significance. Our study is a robust characterization of angular deviations of muscle fiber directionality in the thigh as determined by DW-MRI. We show that an appropriate choice of experimental conditions reduces the variability of the observed directionality. Precise determination of tissue directionality will enable reproducible models of microscopic tumor spread, with future application in defining the clinical target volume for soft tissue sarcoma. abstract_id: PUBMED:37674171 Impact of MRI on target volume definition in head and neck cancer patients. Background: Target volume definition for curative radiochemotherapy in head and neck cancer is crucial since the predominant recurrence pattern is local. Additional diagnostic imaging like MRI is increasingly used, yet it is usually hampered by different patient positioning compared to radiotherapy. In this study, we investigated the impact of diagnostic MRI in treatment position for target volume delineation. Methods: We prospectively analyzed patients who were suitable and agreed to undergo an MRI in treatment position with immobilization devices prior to radiotherapy planning from 2017 to 2019. Target volume delineation for the primary tumor was first performed using all available information except for the MRI and subsequently with additional consideration of the co-registered MRI. The derived volumes were compared by subjective visual judgment and by quantitative mathematical methods. Results: Sixteen patients were included and underwent the planning CT, MRI and subsequent definitive radiochemotherapy. In 69% of the patients, there were visually relevant changes to the gross tumor volume (GTV) by use of the MRI. In 44%, the GTV_MRI would not have been covered completely by the planning target volume (PTV) of the CT-only contour. Yet, median Hausdorff and DSI values did not reflect these differences. The 3-year local control rate was 94%. Conclusions: Adding a diagnostic MRI in RT treatment position is feasible and results in relevant changes in target volumes in the majority of patients. abstract_id: PUBMED:27466529 Target Volume Delineation Based on Diffusion-Weighted Magnetic Resonance Imaging for Locally Advanced Head and Neck Cancer.
Aim: To compare gross tumor volume (GTV) definition in locally advanced head and neck squamous cell carcinoma (LAHNSCC) using diffusion-weighted magnetic resonance imaging (DW-MRI) and computed tomography (CT) with intravenous contrast. Patients And Methods: Patients with LAHNSCC were imaged with CT and DW-MRI before treatment. GTV was delineated in both CT and DW-MRI images by two investigators. CT and MRI images were co-registered and volume data were extracted for statistical analysis. Results: In general, DW-MRI volumes [based on the apparent diffusion coefficient (ADC)] were smaller than CT-based volumes. For all patients, GTV delineation based on pre-treatment DW-MRI was significantly smaller than that based on the CT scan (CT-GTV) (p=0.0078). The mean difference (95% limits of agreement) between the two investigators was -0.37 cm³ for CT-GTV and 0.17 cm³ for ADC-GTV measurements, respectively. Conclusion: DW-MRI radiotherapy GTVs are smaller than CT-based targets, with less interobserver variability. Further validation of these preliminary results is necessary in a much larger patient group. Answer: Yes, regional differences within the target volume can be visualized using sequential diffusion-weighted MRI (DW-MRI). In head and neck cancer, DW-MRI has been shown to predict response early during treatment. However, treatment-induced changes and DW-MRI-specific artifacts can hinder accurate registration between apparent diffusion coefficient (ADC) maps. A study developed a registration tool that calculates and visualizes regional changes in ADC, allowing for the visualization of regional differences of ΔADC within the tumor (PUBMED:24231234). Moreover, DW-MRI has been used in various studies to redefine the target volume during treatment. For instance, in the EXIT-trial for locally advanced cervical cancer, MRI-based exclusion of non-tumor-bearing parts of the uterus from the target volume was evaluated, with the hypothesis that baseline ADC values and their evolution could be prognostic for residual tumor (PUBMED:30223802). Similarly, DW-MRI has been used to predict selective serotonin reuptake inhibitor response in major depressive disorder, indicating its potential to provide a pre-treatment signature of treatment remission (PUBMED:23508528). In early-stage breast cancer treated with single-dose preoperative radiotherapy, DW-MRI and dynamic contrast-enhanced MRI were used to assess treatment response, showing that changes in ADC could be quantitatively analyzed (PUBMED:26134438). DW-MRI has also been implemented for target delineation in central lung cancer with atelectasis, demonstrating higher precision and reproducibility compared to CT and PET (PUBMED:28927030). Furthermore, a dedicated DW-MRI sequence was evaluated for target volume delineation in head-and-neck radiotherapy, showing good interobserver agreement and overlap with PET, suggesting that DW-MRI can lead to more precise target volume delineation (PUBMED:33458363). In soft-tissue sarcoma, muscle fiber orientations derived from diffusion tensor imaging (DT-MRI) were explored for defining the clinical target volume (CTV), showing promise for automated generation of an anisotropic CTV boundary (PUBMED:35817048). Lastly, the consistency of muscle fiber directionality derived from DW-MRI was investigated for potential use in radiotherapy target definition in sarcomas, establishing baseline parameters for DW-MRI acquisition (PUBMED:37586375).
Instruction: Does end-stage kidney failure influence hepatitis C progression in hemodialysis patients? Abstracts: abstract_id: PUBMED:12630013 Does end-stage kidney failure influence hepatitis C progression in hemodialysis patients? Background/aims: The influence of end-stage kidney failure on the progression of liver disease in patients infected with hepatitis C virus and treated with hemodialysis is still controversial. Methodology: Liver histology of 154 hepatitis C virus infected non-uremic patients was compared with liver histology of 13 hepatitis C virus infected uremic patients treated with hemodialysis. Results: In either group of altogether 167 patients, no normal liver histology was found. Each patient had at least low-grade lobular and/or portal inflammation. However, statistically significant differences were observed between hepatitis C virus infected uremic and non-uremic patients in the extent of intralobular changes, portal inflammation, and degree of fibrosis. Conclusions: Non-uremic hepatitis C virus infected patients appear to have more active and progressive liver disease than hepatitis C virus infected patients on hemodialysis. Regular follow-up of uremic patients, with its earlier detection of hepatitis C virus infection, as well as the recently described uremia-associated impaired immunoreactivity and increased levels of hepatocyte growth factor, might be implicated in a more favorable course of hepatitis C virus infection in uremic patients. In addition, due to the absence of normal liver histology in either group of hepatitis C virus infected patients, we propose liver biopsy to be mandatory in all these patients, provided that no contraindications exist clinically. abstract_id: PUBMED:29069858 Hepatocellular carcinoma in hemodialysis patients. We investigated the rates and predictors of mortality in hepatocellular carcinoma (HCC) patients who were or were not undergoing long-term hemodialysis. The participants in this retrospective observational study were 1298 HCC patients (60.0 ± 12.1 years old, 72% male), of whom 172 were undergoing hemodialysis and 1126 were not. HCC patients on hemodialysis exhibited a higher hepatitis C virus carrier rate (49.4% versus 39.3%, P = 0.012), lower hepatitis B virus carrier rate (37.2% versus 58.3%, P < 0.001) and lower hepatitis B or C virus carrier rate (77.9% versus 89.3%, P < 0.001) than those not on hemodialysis. Serum alkaline phosphatase levels were higher in the hemodialysis than non-hemodialysis group (162.8 ± 141.1 U/L versus 124.6 ± 102.5 U/L, P < 0.001). By the end of the analysis, 32.0% of HCC patients on hemodialysis and 28.0% of those not on hemodialysis had died. Kaplan-Meier analysis confirmed that cumulative survival was poorer in HCC patients on hemodialysis (P = 0.004). In a multivariate Cox regression model, hemodialysis (P < 0.001), older age (P < 0.001) and advanced tumor stages (P < 0.001) were found to be risk factors for mortality. HCC patients on hemodialysis had a 2.036-fold greater chance of death than HCC patients not on hemodialysis. Prospective studies with longer follow-ups and larger samples are warranted. abstract_id: PUBMED:18650408 Hepatitis C is less aggressive in hemodialysis patients than in nonuremic patients. Background And Objectives: The severity of liver disease among hepatitis C patients on hemodialysis is controversial.
The aim of this study was to compare the clinical, biochemical, and liver histologic characteristics of hepatitis C virus (HCV) in hemodialysis patients and in those with normal renal function. Design, Setting, Participants, & Measurements: A case-control study was carried out with 36 HCV patients on hemodialysis and 37 HCV patients with normal renal function matched for gender, age at infection, and estimated time of infection. Results: HCV patients on hemodialysis had lower levels of alanine aminotransferase and lower viral load. Hepatic fibrosis was significantly higher in the patients with normal renal function (73%) than in hemodialysis patients (47.2%, P < 0.025); the same was observed for inflammatory activity (control group 59.5% versus hemodialysis patients 27.7%, P = 0.003). In addition, the risk of tissue inflammation was four times lower in hemodialysis patients (odds ratio = 0.23, P < 0.004), and severe inflammatory activity on biopsy was the only independent risk factor for fibrosis (P < 0.001). Conclusions: The lower biochemical and inflammatory activities observed in hemodialysis patients suggest that hemodialysis and uremia may have a protective role against progression of the disease caused by HCV. abstract_id: PUBMED:31720170 Treatment Outcomes for Patients Undergoing Hemodialysis with Chronic Hepatitis C on the Sofosbuvir and Daclatasvir Regimen. Background Hepatitis C (HCV) infection is the most commonly acquired infection for patients on hemodialysis and is associated with significant morbidity and disease progression. Direct-acting antivirals (DAAs) have revolutionized the management of HCV. However, limited data exist regarding their efficacy in end-stage renal disease (ESRD), especially for patients on dialysis in South Asia. Aims To evaluate the treatment outcomes of patients undergoing hemodialysis with chronic hepatitis C (CHC) on the sofosbuvir (SOF) and daclatasvir (DAC) regimen. Materials and methods All patients who were 18 years or older, diagnosed cases of chronic kidney disease (stage V), and undergoing maintenance hemodialysis were enrolled in this study. Active HCV infection was demonstrated by polymerase chain reaction (PCR) HCV ribonucleic acid (RNA) (qualitative). All patients were then treated with a double regimen of SOF (400 mg once daily) and DAC (60 mg once daily) taken per oral for 12 weeks. Response to treatment was assessed at four, 12, and 52 weeks. Results A total of 31 out of 80 patients were enrolled in the study over two years. The prevalence of HCV in hemodialysis patients was 38.75%. Sustained virological response (SVR) was achieved by 27 (87.09%) patients at one year. Four (12.90%) patients had a relapse of HCV. There was no deterioration of hepatic status in any of the patients. Overall survival at one year was 93.54%. Conclusion HCV is highly prevalent in patients undergoing hemodialysis. Prompt treatment with SOF and DAC demonstrates a good response, with negligible side effects. abstract_id: PUBMED:34820059 Prevalence of occult hepatitis C virus infection in hemodialysis patients. Background: Viral infections can be transmitted to patients via hemodialysis. A newly defined form of this infection, with no increase in liver enzymes, negative serum HCV-PCR, and presence of the virus in the liver and peripheral blood mononuclear cells (PBMCs), is called occult hepatitis C virus (HCV) infection (OCI). We decided to examine the prevalence of occult hepatitis C infection in hemodialysis cases.
Methods: The current research is a cross-sectional study on patients with end-stage renal disease (ESRD) who were treated at three hemodialysis centers in Mazandaran province in Iran during 2012-2014. In this study of 356 patients who were undergoing hemodialysis, 54 patients were excluded due to positive HCV Ab, and the remaining 302 patients were enrolled. Testing of all serum samples for HCV-RNA detection in plasma and PBMCs was done by real-time polymerase chain reaction (real-time PCR). Results: There was a significant association between the duration of dialysis and the prevalence of occult HCV infection (P=0.017). Eight (2.65%) patients were positive for HBsAg and had OCI, but none of them was overtly infected with both hepatitis C and B. Also, among the total number of patients, nine tested positive for HCV RT-PCR in PBMCs; one of them was also positive for serum HCV RNA PCR and was excluded from the study. Conclusion: The results showed that eight patients had an OCI. No association was found between age or sex and OCI, but there was a significant relationship between the duration of dialysis and the prevalence of OCI. abstract_id: PUBMED:24976839 Risk factors of HCV seroconversion in hemodialysis patients in Tabriz, Iran. Background: Hepatitis C virus (HCV) infection is a significant health concern in patients with end-stage renal disease under dialysis. Epidemiological studies have reported a prevalence rate of 5.5-55.9% for this condition in Iran. Objectives: We evaluated the risk factors for HCV infection and seroconversion in hemodialysis patients. Patients And Methods: A retrospective analysis was performed on 455 hemodialysis patients from each of the five dialysis units in Tabriz, northwest Iran. Possible risk factors for HCV infection and seroconversion were evaluated. Results: A total of 37 patients were HCV positive (8.1% of the study population) and seroconversion occurred in 18 of them during the dialysis treatment (3.95% of the study population). History of renal transplantation (44.4%, P < 0.0001), surgical intervention (except for renal transplantation and AV fistula placement) (94.4%, P = 0.03), and mean duration of dialysis (106.06 ± 55.519, P < 0.0001) had strong statistically significant associations with seroconversion. Conclusions: The current study indicates increased risk for HCV infection in patients under dialysis and its relation with the mean duration of hemodialysis, history of renal transplantation and surgical intervention. Considering the immune deficiency in these patients, intensive education of both patients and medical staff will be beneficial. abstract_id: PUBMED:32509002 Infection with hepatitis C virus in hemodialysis patients: An overview of the diagnosis and prevention rules within a hemodialysis center (Review). The increase in the number of patients with chronic kidney disease (CKD) calls for improved management of these patients. In stage 5 CKD, when the initiation of renal replacement therapy (RRT) becomes necessary, there is an increase in the infection risk of the patients, and immunological tests for hepatitis C virus (HCV) detection turn positive at an alarmingly higher rate compared to the general population. With the introduction into clinical practice of diagnostic tests, the increased prevalence of HCV among CKD patients has been known since the 1990s. Also, the negative impacts of HCV infection on CKD evolution as well as the unfavorable evolution of grafts received by HCV-infected patients are known.
Chronic hemodialysis patients are a category of patients whose risk of HCV infection is substantial. Currently, a multitude of factors underlie the transmission of HCV infection within hemodialysis centers. HCV infection has a distinct impact on patients with end-stage renal disease (ESRD), in whom comorbidities are significant sources of mortality and morbidity. It was proven that post-transplantation problems were prevented and mortality was reduced in patients who were diagnosed with HCV and in whom the infection was treated before kidney transplantation (KT). Consequently, early detection of the infection and the application of specific treatment have a considerable impact on patient outcomes. Another important component of the management of HCV infection in chronic hemodialysis patients is the prevention of infection transmission by applying specific methods. abstract_id: PUBMED:26952005 Pharmacokinetics, safety and efficacy of a full dose sofosbuvir-based regimen given daily in hemodialysis patients with chronic hepatitis C. Background & Aims: Hepatitis C virus (HCV) infection is an independent risk factor for chronic kidney disease and leads to faster liver disease progression in patients requiring hemodialysis than in those with normal renal function. Little is known about the use of a sofosbuvir-containing regimen for infected patients on hemodialysis. We aimed to describe the pharmacokinetics, safety and efficacy of sofosbuvir in 2 dosing regimens and associated antiviral agents in HCV-infected patients requiring hemodialysis. Methods: A multicenter, prospective and observational study of patients receiving sofosbuvir, 400 mg once daily (n=7) or 3 times a week (n=5), after hemodialysis with simeprevir, daclatasvir, ledipasvir or ribavirin was conducted. Drug plasma concentrations were determined by liquid chromatography-tandem mass spectrometry before and after a 4 h hemodialysis and 1.5 h after last drug intake at the end of hemodialysis. Results: Plasma concentrations of sofosbuvir or its inactive metabolite sofosbuvir-007 did not accumulate with either regimen between hemodialysis sessions or throughout the treatment course. Sofosbuvir-007 extraction ratio (52%) was consistent with historical data. In one patient receiving the once daily regimen, sofosbuvir-007 half-life was slightly higher (38 h) than for patients with normal renal function receiving a full dose. Hemodialysis did not remove any other associated anti-HCV agents. Clinical and biological tolerance was good for all patients. Two relapses occurred with the 3 times a week regimen and none with the once-daily regimen. Conclusions: A regimen including sofosbuvir, 400 mg once daily, could be proposed for HCV-infected patients requiring hemodialysis and should be associated with close clinical, biological, cardiovascular, and therapeutic drug monitoring. Lay Summary: Hepatitis C Virus (HCV) infection in hemodialysis patients is prevalent and aggressive. Effective anti-HCV treatment in these patients may stabilize their renal disease. However, sofosbuvir, the cornerstone of most anti-HCV-containing regimens, should not be administered to these patients until more data are available. In this pharmacokinetic study, sofosbuvir full dose (400 mg once daily) administered every day with another direct antiviral agent did not accumulate in hemodialysis patients and was safe and effective. abstract_id: PUBMED:25937865 Hepatitis C infection in hemodialysis patients: A review.
Hepatitis C virus (HCV)-related liver disease is a significant cause of morbidity and mortality in patients with end-stage renal disease (ESRD) who are treated with dialysis or kidney transplantation (KT). The survival rate for HCV-infected renal transplant recipients is better than that for HCV-infected hemodialysis patients on transplant waiting lists. Early diagnosis and treatment of HCV infection prior to KT prevents complications post-transplantation and reduces mortality. In addition to screening for anti-HCV antibodies and detecting HCV RNA, percutaneous liver biopsy is particularly valuable for assessing the stage of liver damage in HCV-infected patients, because the stage of fibrosis is important in determining optimal treatment for HCV. Studies have demonstrated that conventional interferon (IFN) monotherapy and pegylated IFN monotherapy offer similar efficacy and safety in HCV-infected hemodialysis patients. Sustained viral responses (SVRs) with these monotherapies have ranged from approximately 30% to 40%. Limited reports support the use of IFN and ribavirin combination therapy as antiviral treatment for ESRD patients or patients on hemodialysis. Ribavirin can be started at a low dose with careful monitoring for side effects. Patients who show SVR after treatment are strong candidates for KT. It is also generally accepted that ESRD patients with decompensated cirrhosis and portal hypertension should be referred to the liver transplant team for consideration of combined liver-KT. abstract_id: PUBMED:21051906 Hemodialysis of patients with HCV infection: isolation has a definite role. HCV infection in patients on hemodialysis varies from 10 to 60% in various centers. There is controversy regarding the isolation of patients with HCV infection during hemodialysis. Guidelines developed by the Centers for Disease Control and Prevention do not suggest isolation of these patients. In this opposing view, evidence is presented to support a definite role for isolation of HCV-infected patients during hemodialysis. Answer: The influence of end-stage kidney failure on the progression of hepatitis C in hemodialysis patients is a subject of debate. Some studies suggest that hepatitis C virus (HCV) infection may have a more favorable course in uremic patients on hemodialysis compared to non-uremic patients. For instance, a study comparing liver histology between HCV-infected uremic patients on hemodialysis and non-uremic patients found that non-uremic patients had more active and progressive liver disease. This suggests that uremia-associated impaired immunoreactivity and increased levels of hepatocyte growth factor might play a role in slowing the progression of HCV infection in uremic patients (PUBMED:12630013). Another study found that HCV patients on hemodialysis had lower levels of alanine aminotransferase and lower viral load, with significantly less hepatic fibrosis and inflammatory activity compared to patients with normal renal function. This indicates that hemodialysis and uremia may have a protective role against the progression of HCV (PUBMED:18650408). However, the situation is complex, as hemodialysis patients with HCV also face a higher risk of mortality, especially when they develop hepatocellular carcinoma (HCC). Hemodialysis was identified as a risk factor for mortality in HCC patients, with those on hemodialysis having a greater chance of death than those not on hemodialysis (PUBMED:29069858).
In terms of treatment outcomes, the use of direct-acting antivirals (DAAs) such as sofosbuvir and daclatasvir has been shown to be effective in hemodialysis patients with chronic hepatitis C, achieving a high rate of sustained virological response (PUBMED:31720170). Overall, while there is evidence to suggest that hemodialysis may influence the progression of HCV infection, the relationship is complex and may be affected by various factors, including the presence of comorbidities like HCC and the use of antiviral treatments. Further research is warranted to fully understand the impact of end-stage kidney failure on HCV progression in hemodialysis patients.
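A note on interpreting the hazard ratios cited above (for example, the 2.036-fold mortality risk associated with hemodialysis in PUBMED:29069858): in a multivariate Cox regression, the hazard ratio for a covariate is the exponentiated regression coefficient, with the other covariates held fixed. A minimal sketch of the standard model, using generic symbols rather than study data:
\[ h(t \mid x) = h_0(t)\,\exp(\beta_1 x_1 + \dots + \beta_p x_p), \qquad \mathrm{HR}_j = e^{\beta_j} \]
Under this reading, an HR of 2.036 for hemodialysis means exp(β) ≈ 2.036, i.e., a roughly doubled instantaneous risk of death at any given time, adjusted for the other covariates in the model.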
Instruction: Does motor block related to long-acting brachial plexus block cause patient dissatisfaction after minor wrist and hand surgery? Abstracts: abstract_id: PUBMED:22864520 Does motor block related to long-acting brachial plexus block cause patient dissatisfaction after minor wrist and hand surgery? A randomized observer-blinded trial. Background: Patient dissatisfaction has been previously associated with motor block in shoulder surgery patients receiving brachial plexus block. For elective minor wrist and hand surgery, we tested whether a regional block accelerating the early return of upper extremity motor function would improve patient satisfaction compared with a long-acting proximal brachial plexus block. Methods: A total of 177 patients having elective 'minor' wrist and hand surgery under awake regional block randomly received adrenalized infraclavicular lidocaine 2% 10 ml + ropivacaine 0.75% 20 ml ('long acting', n=90), or adrenalized infraclavicular lidocaine 1.5% 30 ml + long-acting distal median, radial, and ulnar nerve blocks selected according to the anticipated area of postoperative pain ('short acting', n=87). A blinded observer questioned patients on day 1 for numerically rated (0-10) subjective outcomes. Results: With 95% power, there was no evidence for a 1-point satisfaction shift in the short acting group: satisfaction was similarly high for both groups [median (inter-quartile range)=10 (8-10) vs 10 (8-10), P=0.71], and also demonstrated strong evidence for equivalence [mean difference (95% confidence interval)=-0.18 (-0.70 to 0.35)]. There was no difference between the groups for weakness- or numbness-related dissatisfaction (low for both groups), or for numerically rated pain or time to first pain. Surgical anaesthesia success was similar between the groups (short acting, 97% vs 93%, P=0.50), although more patients in the short acting group had surgery initiated in ≤25 min (P=0.03). Conclusions: Patient satisfaction is not improved after elective minor wrist and hand surgery with a regional block accelerating the early return of motor function. For this surgery, motor block related to a long-acting brachial plexus block does not appear to cause patient dissatisfaction. Clinical Trial Registration number: ACTRN12610000749000, https://www.anzctr.org.au/registry/trial_review.aspx?ID=335931. abstract_id: PUBMED:37758499 A Comparison of Distal Nerve Blocks and Brachial Plexus Blocks in Terms of Block Success, Block-related Characteristics, Time to Discharge and Patient Satisfaction. Background: Ultrasound-guided brachial plexus blocks (BPB) are used as an anaesthetic method in hand and wrist surgeries. The radial, median and ulnar nerves can also be selectively blocked. The objective of this retrospective cohort study was to compare distal nerve blocks (DNB) and BPB in terms of block success, block-related characteristics, time to discharge and patient satisfaction. Methods: The data of patients who underwent elective hand and wrist surgery under regional anaesthetic procedures between 01.01.2022 and 01.09.2022 were analysed. Standard multimodal analgesia was performed in all groups, in addition to either ultrasound-guided BPB or DNB.
Demographic characteristics, American Society of Anesthesiologists (ASA) classes, presence of additional diseases, block and volume applied, block-related data such as block performance time, onset time, initial analgesia times, perioperative additional anaesthetic or analgesic requirement, surgery times, types of surgery, discharge times and the presence of additional complaints were recorded. In addition, the anaesthesia quality score was determined using a 5-point Likert scale. Results: There was no difference between the groups in demographic data. Average surgical time was similar between the DNB and BPB groups (46.0 ± 8.92 vs. 59.95 ± 22.04 min, p < 0.05). Block onset time was significantly lower in the DNB group (26.4 ± 2.73 vs. 32.17 ± 2.94 min, p < 0.001). When patients discharged before and after 12 hours were compared, 11/22 of the patients in the DNB group and 4/23 of the patients in the BPB group were discharged in the first 12 hours (p < 0.05). Patient satisfaction scores were similar between groups (14.22 ± 0.86 vs. 13.65 ± 1.11, p > 0.05). Conclusions: In surgeries of the hand and wrist, the radial, median and ulnar nerves can be blocked individually or in combination as an alternative to proximal BPB. Level of Evidence: Level III (Therapeutic). abstract_id: PUBMED:16437465 Single, double or multiple injection techniques for axillary brachial plexus block for hand, wrist or forearm surgery. Background: Regional anaesthesia comprising axillary block of the brachial plexus is a common anaesthetic technique for distal upper extremity surgery. Objectives: To compare the relative effects of anaesthetic techniques using either single, double or multiple injections for axillary block of the brachial plexus for distal upper extremity surgery. Search Strategy: We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, as well as reference lists of trials. We contacted trial authors and the medical industry. Date of last search: August 2004. Selection Criteria: We included randomized controlled trials that compared double with single injection techniques, multiple with single injection techniques, or multiple with double injection techniques for axillary block in adults undergoing surgery of the distal upper extremity. Data Collection And Analysis: We performed independent study selection, quality assessment and data extraction. We undertook meta-analysis, including exploratory analyses according to the method of nerve location and definition of primary anaesthesia failure. Main Results: The 12 included trials involved a total of 981 participants who received regional anaesthesia for hand, wrist, forearm or elbow surgery. Trial design and conduct were generally adequate although several trials failed to monitor longer-term effects and to provide sufficient description of their study populations. Substantial heterogeneity precluded the pooling of data for primary anaesthesia failure from the five trials comparing double versus single injections. However, double injections were significantly more effective than single injections in the three trials where electrolocation was used throughout (relative risk (RR) 0.31, 95% confidence interval (CI) 0.31 to 0.74). Five trials compared multiple with single injections. These showed a statistically significant decrease in primary anaesthesia failure (RR 0.24, 95% CI 0.13 to 0.46) and incomplete motor block (RR 0.61, 95% CI 0.39 to 0.96) in the multiple injection group. Six trials compared multiple with double injections.
These showed a statistically significant decrease in primary anaesthesia failure (RR 0.23, 95% CI 0.14 to 0.38) and incomplete motor block (RR 0.55, 95% CI 0.36 to 0.85) in the multiple injection group. Generally, none of the differences between the two groups in secondary analgesia failure, complications and patient discomfort was statistically significant in any of the three comparisons. The time for block performance was significantly shorter for single and double injections compared with multiple injections, but the requirement for supplementary blocks in these groups tended to increase the time to readiness for surgery. Authors' Conclusions: This review provided some evidence that multiple injection techniques using nerve stimulation for axillary plexus block provide more effective anaesthesia than either double or single injection techniques. However, there was insufficient evidence for other outcomes, including safety. abstract_id: PUBMED:21735395 Single, double or multiple-injection techniques for axillary brachial plexus block for hand, wrist or forearm surgery in adults. Background: Regional anaesthesia comprising axillary block of the brachial plexus is a common anaesthetic technique for distal upper limb surgery. This is an update of a review first published in 2006. Objectives: To compare the relative effects of single, double or multiple injections for axillary block of the brachial plexus for distal upper limb surgery. Search Strategy: We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library), MEDLINE, EMBASE and reference lists of trials. We contacted trial authors. The date of the last search was March 2011 (updated from March 2005). Selection Criteria: We included randomized controlled trials that compared double with single-injection techniques, multiple with single-injection techniques, or multiple with double-injection techniques for axillary block in adults undergoing surgery of the distal upper limb. We excluded trials using ultrasound-guided techniques. Data Collection And Analysis: We performed independent study selection, risk of bias assessment and data extraction. We undertook meta-analysis. Main Results: The 20 included trials involved a total of 2098 participants who received regional anaesthesia for hand, wrist, forearm or elbow surgery. The trial design and conduct were generally adequate although several trials failed to monitor longer-term effects. Eight trials comparing double versus single injections showed a statistically significant decrease in primary anaesthesia failure (RR 0.51, 95% CI 0.30 to 0.85). Subgroup analysis by method of nerve location showed that the effect size was greater when neurostimulation was used rather than the transarterial technique. Seven trials comparing multiple with single injections showed a statistically significant decrease in primary anaesthesia failure (RR 0.28, 95% CI 0.16 to 0.48) and of incomplete motor block (RR 0.61, 95% CI 0.39 to 0.96) in the multiple injection group. Eleven trials comparing multiple with double injections showed a statistically significant decrease in primary anaesthesia failure (RR 0.28, 95% CI 0.20 to 0.40) and of incomplete motor block (RR 0.55, 95% CI 0.36 to 0.85) in the multiple injection group. Tourniquet pain was significantly reduced with multiple injections compared with double injections (RR 0.53, 95% CI 0.33 to 0.84).
Otherwise, there were no statistically significant differences between groups in any of the three comparisons in secondary analgesia failure, complications and patient discomfort. The time for block performance was significantly shorter for single and double injections compared with multiple injections. Authors' Conclusions: This review provides evidence that multiple injection techniques using nerve stimulation for axillary plexus block produce more effective anaesthesia than either double or single injection techniques. However, there was insufficient evidence for a significant difference in other outcomes, including safety. abstract_id: PUBMED:35658725 A Comparative Cost Analysis of Local Anesthesia versus Brachial Plexus Block for Complex Hand Surgery. Background: Local anesthesia has been shown to be safe and cost-effective for elective hand surgery procedures performed outside of the operating room. The economic benefits of local anesthesia compared to regional anesthesia for hand surgeries performed in the operating room involving repair of tendons, nerves, arteries, or bones are unclear. This study aimed to compare costs pertinent to hand surgeries performed in the main operating room under local anesthesia (LA) or brachial plexus (BP) block. Methods: We performed a cross-sectional study on the first 70 randomized patients from a prospective controlled trial of anesthesia modalities for hand surgery. The primary objective was to determine the mean anesthesia-related cost, and the secondary objectives were to analyze block performance time, block onset time, duration of anesthesia, duration of surgery, and time in the recovery room. Results: The mean anesthesia-related cost of performing hand surgery under LA as a wrist and/or digital block was $236 ± 30, compared to $435 ± 43 for BP, a difference of $199 per case. The mean block performance time was shorter for LA (1.3 minutes) versus BP (7.0 minutes). The mean anesthesia-related time was longer in BP (30.7 ± 16 minutes) compared to LA (17.7 ± 6.7 minutes), and consequently the total anesthesia time was longer in BP. Conclusions: We demonstrated that local anesthesia compared to brachial plexus block achieved substantial cost savings in complex hand surgeries by decreasing major expenses. In an era of cost-consciousness, the use of LA represents an important modality for health systems to optimize patient flow and increase cost-effectiveness. abstract_id: PUBMED:19292372 Pneumothorax following infraclavicular brachial plexus block for hand surgery. The infraclavicular brachial plexus block is a routinely used anesthesia technique for orthopedic hand, wrist, and arm surgeries. Although the pulmonary anatomy surrounding the brachial plexus would suggest a theoretical potential for pneumothorax development during infraclavicular brachial plexus blockade, this complication is rarely reported in the literature. We report two occurrences of pneumothorax development following routine infraclavicular brachial plexus blockade performed by technically experienced physicians at an academic training institution. Both patients were healthy adult males with a normal body habitus undergoing elective upper extremity surgical intervention. They presented to the emergency department on postoperative days 1 and 2 with shortness of breath and chest pain.
Despite different clinical management strategies for each patient, including conservative physical activity with incentive spirometry compared to thoracic decompression, both patients' pneumothoraces resolved with no apparent sequelae. This article describes the clinical benefits of infraclavicular brachial plexus blockade versus general anesthesia during elective outpatient surgeries. The development of symptomatic pneumothoraces in 2 healthy adult patients stresses the importance of careful patient assessment pre- and postoperatively. Possible preventative strategies to avoid pneumothorax complications during infraclavicular brachial plexus blockade are discussed. This case report indicates there is a risk of iatrogenic injury during infraclavicular brachial plexus blockade. This information could be valuable in determining anesthesia modalities used for orthopedic upper extremity surgeries in patient populations with problematic follow-up or limited access to health care. abstract_id: PUBMED:16911996 Efficacy and safety of axillary brachial plexus block for operations on the hand. An axillary brachial plexus nerve block by a transarterial approach is commonly used to achieve regional anaesthesia for hand surgery. We designed a retrospective study to evaluate efficacy and safety of the technique for acute and elective operations. Anaesthetic records of 189 of all 5520 patients (1996-2000) who had axillary brachial plexus blocks for hand surgery were reviewed, and results compared with complications recorded in the anaesthetic register and in the hand surgery records. Successful axillary block was achieved in 5128/5520 (93%) of patients, according to anaesthetic charts, and primarily in 157/189 (83%), and after supplementation in 171/189 (90%), according to anaesthetic records. Four patients had a toxic drug reaction or axillary haematoma with a transient neurological deficit. Medical complications were recorded in the anaesthetic register in less than 0.7% of axillary brachial plexus procedures. Axillary brachial plexus block by a transarterial approach is effective and safe in hand surgery. abstract_id: PUBMED:27560643 Traction injury of the brachial plexus confused with nerve injury due to interscalene brachial block: A case report. Introduction: Shoulder surgery is often performed with the patient in the so-called "beach-chair position" with elevation of the upper part of the body. The anesthetic procedure can be general anesthesia and/or regional block, usually an interscalene brachial plexus block. We present a case of brachial plexus palsy with a possible mechanism of traction based on the electromyographic and clinical findings, although a possible contribution of nerve block cannot be excluded. Presentation Of The Case: We present a case of a 62-year-old female who suffered a shoulder fracture-dislocation. Open reduction and internal fixation were performed in the so-called "beach-chair" position, under combined general-regional anesthesia. In the postoperative period complete motor brachial plexus palsy appeared, with neuropathic pain. Conservative treatment included analgesic drugs, neuromodulators, B-vitamin complex and physiotherapy. Spontaneous recovery appeared at 11 months. Discussion: In shoulder surgery, there may be complications related to both the anesthetic technique and patient positioning/surgical maneuvers. Regional block often acts as a confusing factor when neurologic damage appears after surgery.
Intraoperative maneuvers may cause traction of the brachial plexus, and this may be favored by the fixed position of the head in the operating-table headrest in the beach-chair position. Conclusion: When postoperative brachial plexus palsy appears, the nerve block is a confounding factor to which the orthopedic surgeon tends to attribute the palsy. The beach-chair position may predispose to brachial plexus traction injury. The head and neck position should be regularly checked during long procedures, as intraoperative maneuvers may cause traction of the brachial plexus. abstract_id: PUBMED:21288970 A low approach to interscalene brachial plexus block results in more distal spread of sensory-motor coverage compared to the conventional approach. A low approach to the interscalene block (LISB) deposits local anesthetic farther caudad on the brachial plexus compared with the conventional interscalene block (ISB). We compared the efficacy of LISB and ISB in achieving anesthesia of the distal extremity in 254 patients having upper extremity surgery. The most frequently elicited motor response was the deltoid for ISB and wrist for LISB. There was significantly greater sensory-motor block of regions below the elbow with the LISB compared with ISB (P < 0.001 for both sensory and motor coverage). Our data indicate that LISB results in a higher incidence of distal elicited motor response and greater sensory-motor blockade of the wrist and hand. abstract_id: PUBMED:32548143 Clinical application of ultrasound-guided selective proximal and distal brachial plexus block in rapid rehabilitation surgery for hand trauma. Background: Anesthesia plays an essential role in successful surgical procedures for hand trauma. Compared with general anesthesia, brachial plexus block offers many benefits for the upper extremity. Specifically, ultrasound-guided selective proximal and distal brachial plexus block may overcome the issues of incomplete block and failure of anesthesia in such circumstances. The present study assessed the efficacy of ultrasound-guided selective proximal and distal brachial plexus block in clinical practice. Aim: To explore the efficacy of ultrasound-guided selective proximal and distal brachial plexus block in rapid recovery surgery for hand trauma. Methods: A total of 68 patients with traumatic hand injuries treated at our hospital from January 2018 to June 2019 were selected. They were divided into an observation group and a control group with 34 patients in each group. The observation group underwent selective brachial plexus block guided by ultrasound and the control group underwent conventional brachial plexus block. The analgesic efficacy, intraoperative complications, wound healing time, and length of hospital stay were compared between the two groups under the different anesthesia techniques. Results: The percentage of cases with analgesia graded as "excellent or very good" was higher in the observation group than in the control group (P < 0.05). Moreover, the incidence of intraoperative complications was lower and the wound healing time and length of hospital stay were shorter in the observation group than in the control group (P < 0.05). Conclusion: Ultrasound-guided selective proximal and distal brachial plexus block has remarkable analgesic efficacy in patients with traumatic hand injuries. It can reduce the incidence of intraoperative complications, promote wound healing, and shorten the length of hospital stay.
Answer: No, motor block related to long-acting brachial plexus block does not appear to cause patient dissatisfaction after minor wrist and hand surgery. A randomized observer-blinded trial with 177 patients having elective 'minor' wrist and hand surgery under awake regional block found that patient satisfaction was similarly high for both groups receiving long-acting and short-acting regional blocks. There was no evidence for a 1-point satisfaction shift in the short-acting group, and there was no difference between the groups for weakness- or numbness-related dissatisfaction, or for numerically rated pain or time to first pain. Surgical anesthesia success was similar between the groups, although more patients in the short-acting group had surgery initiated in ≤25 min. The study concluded that patient satisfaction is not improved with a regional block accelerating the early return of motor function for minor wrist and hand surgery (PUBMED:22864520).
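A note on the equivalence statistics reported in PUBMED:22864520: equivalence on a 0-10 satisfaction scale is typically concluded when the two-sided 95% confidence interval of the mean between-group difference lies entirely within a pre-specified margin, here taken as 1 point. The reported interval satisfies that criterion, as sketched below (δ denotes the assumed margin; the numerical values are those quoted in the abstract):
\[ -\delta < -0.70 \le \bar{d} = -0.18 \le 0.35 < \delta, \qquad \delta = 1 \]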
Instruction: Is Aortic Valve Repair Reproducible? Abstracts: abstract_id: PUBMED:30557939 An intraoperative test device for aortic valve repair. Objective: Aortic valve repair is currently in transition from surgical improvisation to a reproducible operation and an option for many patients with aortic regurgitation. Our research efforts at improving reproducibility include development of methods for intraoperatively testing and visualizing the valve in its diastolic state. Methods: We developed a device that can be intraoperatively secured in the transected aorta allowing the aortic root to be pressurized and the closed valve to be inspected endoscopically. Our device includes a chamber that can be pressurized with crystalloid solution and ports for introduction of an endoscope and measuring gauges. We show use of the device in explanted porcine hearts to visualize the aortic valve and to measure leaflet coaptation height in normal valves and in valves that have undergone valve repair procedures. Results: The procedure of introducing and securing the device in the aorta, pressurizing the valve, and endoscopically visualizing the closed valve is done in less than 1 minute. The device easily and reversibly attaches to the aortic root and allows direct inspection of the aortic valve under conditions that mimic diastole. It enables the surgeon to intraoperatively study the valve immediately before repair to determine mechanisms of incompetence and immediately after the repair to assess competence. We also show its use in measuring valve leaflet coaptation height in the diastolic state. Conclusions: This device enables more relevant prerepair valve assessment and also enables a test of postrepair valve competence under physiological pressures. abstract_id: PUBMED:26798709 Can the Results of Aortic Valve Repair Equal the Results of a Biologic Aortic Valve Replacement? Aortic valve replacement (AVR) has been the default procedure for the surgical management of aortic valve disease, with repair techniques heterogeneously and infrequently used. However, surgical aortic valve repair has evolved with improved techniques. Yet many questions remain regarding the ideal techniques and real-world applicability and effectiveness of valve repair. The AORTA Great Debate highlighted and discussed the controversies regarding the surgical management of aortic valve disease. abstract_id: PUBMED:31084444 Predictors of Aortic Valve Repair Failure. Aortic valve repair is the preferred approach for the treatment of severe aortic insufficiency (AI), as it allows patients to keep their native aortic valve, thus substantially reducing the risk of prosthesis-related complications. Several studies have documented excellent long-term outcomes of aortic valve repair. The major complication of this operation is AI recurrence, with ensuing need for reoperation. The surgical experience accumulated over the last two decades has allowed for better understanding of the mechanisms of recurrent AI after aortic valve repair. Herein, we review the current state of knowledge on predictors of aortic valve repair failure. These include unaddressed annular dilation, residual cusp prolapse or retraction, commissural orientation, and use of patch material. This enhanced understanding has led to the development of increasingly refined techniques and improved patient outcomes.
Continued follow-up and detailed data collection at the time of surgery, together with three-dimensional echo imaging, will allow further improvements in aortic valve repair. abstract_id: PUBMED:36351612 Successful Repair of the Quadricuspid Aortic Valve. Quadricuspid aortic valve (QAV) is rare. Aortic valve replacement is the most frequent treatment for those patients with severe regurgitation or stenosis of the QAV. However, as our comprehension of the aortic valve grows, repairing a dysfunctional QAV becomes increasingly feasible. We present a case of successful repair of the QAV using the surgical strategy of bicuspidization. abstract_id: PUBMED:27440158 Long-term results of external aortic ring annuloplasty for aortic valve repair. Objectives: An untreated dilated aortic annulus is a major risk factor for failure of aortic valve-sparing operations or repair of either bicuspid or tricuspid valve. Aortic annuloplasty efficiently reduces the annulus and increases the coaptation height, thus protecting the repair. This study analyses long-term results of 232 consecutive patients operated on with a standardized and physiological approach to aortic valve repair according to each phenotype of the dystrophic ascending aorta. Subvalvular aortic annuloplasty was systematically added using an external aortic ring to reduce annulus diameter when ≥25 mm. Methods: Data were collected into the multicentric international AVIATOR registry (Aortic Valve repair InternATiOnal Registry): 149 patients with root aneurysm underwent remodelling with an external ring; 21 patients with tubular aortic aneurysm underwent supracoronary grafts with an external open ring and 62 patients with isolated aortic insufficiency (AI) underwent double sub- and/or supravalvular external open ring annuloplasty. Preoperative AI ≥ Grade III was present in 58.6% (133), and the valve was bicuspid in 37.9% (88). Results: Cusp repair was performed in 75.4% (175) of patients. The 30-day operative mortality rate was 1.4% (3). The mean follow-up was 40.1 ± 37.8 months (0-145.5). The actuarial survival rate at 7 years was 89.9%. The rate of freedom from reoperation at 7 years was similar across phenotypes, being 90.5% for root aneurysms, 100% for tubular aortic aneurysms and 97.5% for isolated AI, with no difference between the bicuspid and tricuspid valve. The rates of freedom from AI ≥ Grade 2 and from AI ≥ Grade 3 at 7 years were, respectively, 76.0 and 93.1% for root aneurysms, 92.9 and 100% for tubular aortic aneurysms and 57.3 and 82.2% for isolated AI. Eyeballing repair achieved suboptimal valve competency when compared with systematic cusp effective height assessment, which tended to improve the rate of freedom from reoperation from 85.8 ± 5.5% to 98.9 ± 1.1% and the rate of freedom from AI ≥ Grade 3 from 89.8 ± 4.9% to 100%. For isolated AI, an additional sinotubular junction ring (double sub- and supravalvular annuloplasty) tended to reduce recurrent AI when compared with single subvalvular annuloplasty. Conclusions: External aortic ring annuloplasty provides a reproducible technique for aortic valve repair with satisfactory long-term results for each ascending aorta phenotype with bicuspid or tricuspid valve. Longer follow-up is ongoing with the AVIATOR registry. abstract_id: PUBMED:38204640 Aortic valve cusp repair does not affect durability of modified aortic valve reimplantation for tricuspid aortic valves. Objective: During aortic valve reimplantation, cusp repair may be needed to produce a competent valve.
We investigated whether the need for aortic valve cusp repair affects aortic valve reimplantation durability. Methods: Patients with tricuspid aortic valves who underwent aortic valve reimplantation from January 2002 to January 2020 at a single center were retrospectively analyzed. Propensity matching was used to compare outcomes between patients who did and did not require aortic valve cusp repair. Results: Cusp repair was performed in 181 of 756 patients (24%). Patients who required cusp repair were more often male, were older, had more aortic valve regurgitation, and less often had connective tissue disease. Patients who underwent cusp repair had longer aortic clamp time (124 ± 43 minutes vs 107 ± 36 minutes, P = .001). In-hospital outcomes were similar between groups, with no operative deaths. A total of 98.3% of patients with cusp repair and 99.3% of patients without cusp repair had mild or less aortic regurgitation at discharge. The median follow-up was 3.9 and 3.2 years for the cusp repair and no cusp repair groups, respectively. At 10 years, estimated prevalence of moderate or more aortic regurgitation was 12% for patients with cusp repair and 7.0% for patients without cusp repair (P = .30). Mean aortic valve gradients were 6.2 mm Hg and 8.0 mm Hg, respectively (P = .01). Ten-year freedom from reoperation was 99% versus 99% (P = .64) in the matched cohort and 97% versus 97%, respectively (P = .30), in the unmatched cohort. Survival at 10 years was 98% after cusp repair and 93% without cusp repair (P = .05). Conclusions: Aortic valve reimplantation for patients with tricuspid aortic valves has excellent long-term results. Need for aortic valve cusp repair does not affect long-term outcomes and should not deter surgeons from performing valve-sparing surgery. abstract_id: PUBMED:30871890 Aortic Valve Repair: Where Are We Now? The advent of aortic valve repair alongside the well-established technique of valve replacement changed the landscape of cardiac surgery, as well as the lives of patients suffering from valvular disease. Repair represents a novel option in those unfit for replacement and avoids the burden of lifelong anticoagulation in younger patients. Despite this, the associated risk of persistence of aortic insufficiency, and therefore reoperation, with valvular repair renders clinical decision making between the two techniques difficult. Unlike the burden of evidence supporting mitral valve repair over replacement, the debate surrounding aortic valve surgery continues. This article aims to explore the development of operative techniques underlying aortic valve repair as well as summarising existing research into short- and long-term patient outcomes in both aortic valve repair and replacement. abstract_id: PUBMED:31705828 Bicuspid aortic valve repair with hemi-remodeling technique and external ring annuloplasty. Background: Current guidelines recommend root replacement when the diameter of the sinuses of Valsalva exceeds 45 mm, particularly for bicuspid valves. However, in cases of tubular aortic aneurysm with moderate root dilatation (40-45 mm diameter), the approach is still debated because of the increased risk of coronary reimplantation. We present a modified hemi-remodeling aortic repair technique that includes the replacement of the noncoronary sinus, ascending aorta, and valve repair with external ring annuloplasty in patients with bicuspid aortic valve (BAV) and moderately dilated aortic root.
Methods: Between 2003 and 2017, 18 patients presenting with left-right BAV and an aortic root diameter of 42.3 ± 3.3 mm underwent hemi-root and ascending aorta replacement and aortic valve repair with external annuloplasty. Results: Postoperatively, 16 (88.9%) had no aortic insufficiency (AI) and 2 (11.1%) had grade I AI; no patients had grade III or grade IV AI. Overall survival and freedom from grade II AI at 4 years and freedom from aortic valve-related reoperation were 100%. Conclusion: The standardized modified hemi-remodeling technique we present is a safe and reproducible procedure, with satisfactory durability at follow-up. This technique represents an interesting alternative to full valve sparing root replacement, as it avoids the operative risk of coronary reimplantation, allows a shorter cross-clamping time, and provides better exposure of the valve for a symmetrical repair, placing the commissures at 180°. abstract_id: PUBMED:36949861 Hybrid Thoracic Aortic Repair and Aortic Valve-In-Valve Replacement for Chronic Type A Dissection. We describe a case of hybrid total thoracic aortic repair and valve-in-valve transcatheter aortic valve replacement in a high-risk patient with complicated chronic type A dissection and severe prosthetic aortic stenosis. The patient underwent a three-stage procedure including aortic arch debranching, thoracic endovascular aortic repair of the ascending aorta, aortic arch and descending thoracic aorta, and direct aortic valve-in-valve replacement using a transcatheter heart valve. The details of the procedures are described with a discussion of the challenges and the decision-making process. abstract_id: PUBMED:33367683 Comparison of bicuspid and tricuspid aortic valve repair. Objectives: The aim of this study was to compare the outcomes of tricuspid aortic valve (TAV) and bicuspid aortic valve (BAV) repair. Methods: We assessed mortality, freedom from reoperation and the rate of aortic valve regurgitation recurrence. Mortality in both groups was compared with expected survival, and risk factors for reoperation were identified. Results: From January 2010 to April 2020, a total of 368 elective aortic valve repair procedures were performed, including 223 (60.6%) in patients with TAV.
The intraoperative test device described in PUBMED:30557939 allows for the pressurization of the aortic root and endoscopic inspection of the closed valve, enabling surgeons to study the valve before and after repair to assess competence and determine mechanisms of incompetence. This contributes to the reproducibility of the procedure by providing a more relevant prerepair valve assessment and a test of postrepair valve competence under physiological pressures. Moreover, the use of external aortic ring annuloplasty, as reported in PUBMED:27440158, provides a standardized and physiological approach to aortic valve repair, which has shown satisfactory long-term results and contributes to the reproducibility of the procedure for different phenotypes of the dystrophic ascending aorta. The development of increasingly refined techniques, as mentioned in PUBMED:31084444, has led to improved patient outcomes and a better understanding of the predictors of aortic valve repair failure, which also enhances the reproducibility of successful repairs. Additionally, the modified hemi-remodeling technique and external ring annuloplasty for bicuspid aortic valve repair, as described in PUBMED:31705828, represent a safe and reproducible procedure with satisfactory durability at follow-up. Overall, these advancements suggest that aortic valve repair is becoming more reproducible, with standardized techniques and devices that aid in the assessment and repair process, leading to improved outcomes and a more predictable procedure (PUBMED:30557939, PUBMED:27440158, PUBMED:31084444, PUBMED:31705828).
Instruction: Does specific immunotherapy injection cause an increase in bronchial reactivity? Abstracts: abstract_id: PUBMED:16316871 Does specific immunotherapy injection cause an increase in bronchial reactivity? Background: Several well-controlled studies have proven the clinical benefit of specific immunotherapy (SIT) for seasonal allergic rhinitis (AR). However, whether subcutaneous SIT injection could cause a transient increase in bronchial reactivity (BR) remains unknown. Objective: To investigate whether subcutaneous SIT injection, either during or outside the pollen season, could cause an increase in BR in children with pollen allergy. Methods: Twenty-two children (mean age 13.6 ± 0.7 years) with AR who were receiving maintenance SIT for 15 months were included in the study. Pre-injection BR of the patients was evaluated with a methacholine provocation test immediately before a maintenance dose of SIT during the peak pollen season and outside the season. The post-injection test was administered 24 hours after SIT injection. Results: There was no difference in FEV1 measures recorded during [98(93-109)%] and outside [102(96-111)%] the pollen season. There was no significant difference between pre- [64(7-64) mg/mL] and post-allergen injection [32(7.5-64) mg/mL] BR outside the pollen season (p = 0.9). A trend towards improvement following allergen injection [64(5.4-64)] as compared to pre-allergen injection [14.6(3.5-64)] was shown during the pollen season (p = 0.053). Although PC20 measures in the pollen season were lower than outside the season, the difference was not significant. The percentage of the patients with bronchial hyperreactivity was 62% during and 43% outside the season. Conclusion: SIT injections both during and outside the pollen season cause no increase in BR in children with AR. This calls into question the necessity of empirical dose reduction during the pollen season. abstract_id: PUBMED:25276155 Long-term intense exposure to grass pollen can mask positive effects of allergenic immunotherapy on non-specific bronchial hyperresponsiveness. Introduction: There are many potential factors that can modulate bronchial reactivity, including exposure to allergens, viral infections, and medications. The aim of this study was to analyze the effect of grass pollination intensity on bronchial reactivity in seasonal allergic rhinitis (SAR) patients subjected to subcutaneous allergenic immunotherapy (SCIT). Material And Methods: This study, performed between 2005 and 2008, included 41 patients with confirmed sensitivity to grass pollens and predominating symptoms of SAR, randomly assigned to desensitization by pre-seasonal or maintenance SCIT. Bronchial provocation challenge with histamine was performed before the onset of immunotherapy, and repeated three times after each pollen season covered by this study. Bronchial reactivity was analyzed with regard to grass pollination intensity in 2005-2008 (air concentration of grass pollen grains, seasonal number of days when air concentration of grass pollen reached at least 20 or 50 grains per 1 m³). Results: After 3 years of SCIT, a significant decrease in bronchial responsiveness was observed in the analyzed group as confirmed by an increase in PC20 FEV1 histamine values (p = 0.001). An inverse tendency was observed after 2 years of SCIT, however. This second year of SCIT corresponded to the 2007 season, when a significantly higher number of days with at least 50 grains of pollen per 1 m³ of air was recorded.
Conclusions: Fluctuations in pollination intensity observed during consecutive years of immunotherapy can influence bronchial reactivity in patients subjected to SCIT (ISRCTN register: ISRCTN 86562422). abstract_id: PUBMED:11144047 The effect of specific immunotherapy on bronchial hyperreactivity in patients with bronchial asthma The aim of the paper was to answer the question of whether specific immunotherapy affected bronchial hyperreactivity in asthmatic patients. The authors used their own results of specific immunotherapy studies and those found in the literature. From these data the following conclusions were formulated: 1. Specific immunotherapy decreases specific bronchial hyperreactivity. The improvement is seen both in the early and late asthmatic reactions. 2. No significant effects of specific immunotherapy on nonspecific bronchial hyperreactivity were found. 3. In order to find an unequivocal answer to the question above, multicenter studies on a representative group of patients based on the same protocol and lasting for at least 5 years are necessary. abstract_id: PUBMED:23578317 Bronchial allergen provocation: a useful method to assess the efficacy of specific immunotherapy in children. Background: The clinical efficacy of subcutaneous allergen-specific immunotherapy (SCIT) varies between patients. New preparations are under development, and an objective tool with which to evaluate their efficacies in individual patients has become necessary. Our primary research question is whether bronchial allergen provocation (BAP) can be used to assess the efficacy of SCIT. Methods: In 42 house dust mite (HDM) allergic children (average age: 8.6 yr) with asthma, we analysed the clinical and objective improvements of a standardised HDM allergoid. All patients underwent two BAPs, one before SCIT and another 1 yr after SCIT. Fourteen patients who were recommended but chose not to undergo SCIT represented the control group. The total and specific IgE were analysed before SCIT; in addition, after SCIT, specific IgG and IgG4 were analysed. Results: After SCIT, the patients' allergen-specific bronchial hyper-reactivity (BHR) was significantly improved; specifically, their PD(20) FEV(1) was 34.4 AU before and 63.3 AU after SCIT (p < 0.01). The PD(20) FEV(1) of the control group remained unchanged. Although BHR improved significantly in the treatment group, we were able to differentiate between the responders (n = 17, 60.7%) and non-responders (n = 11, no improvement in BAP). The patients in both groups stated that SCIT had led to a subjective improvement in their symptoms, in contrast to the untreated control group, but only the responders required less medication after SCIT (p < 0.01). Conclusions: After 1 yr of SCIT against HDM, 60.7% of the patients observed in this study exhibited significant improvements, as defined by BAP. However, BAP was also able to identify the non-responders to treatment. Thus, BAP is a useful and objective method of estimating the effectiveness of SCIT and is not influenced by a placebo effect. abstract_id: PUBMED:7581057 Course of non-specific bronchial reactivity to histamine after bronchospasm induced by allergen challenge in patients with bronchial asthma Bronchial hyperresponsiveness is the characteristic feature of bronchial asthma. Inhalation of allergen can cause a dual asthmatic response--early and late reactions. Thirteen patients with mild and moderate asthma underwent bronchial allergen challenge.
Non-specific bronchial reactivity was measured before provocation and again at 90 minutes and on the 2nd, 7th and 14th days after provocation. Peak Expiratory Flow was measured every hour for 12 hours after challenge to search for a late phase reaction. Increased bronchial reactivity was detected as early as 90 minutes after challenge and was still observed on the 2nd and 7th, but not on the 14th, day after allergen provocation. No relationship was found between the appearance of a late asthmatic reaction and increased bronchial reactivity. abstract_id: PUBMED:3813194 Hypotonic and isotonic aerosols increase bronchial reactivity in basenji-greyhound dogs. Because basenji-greyhound dogs have greater bronchial reactivity to a range of inhaled stimuli than mongrel dogs do, and because bronchial hyperreactivity to nonspecific stimuli is characteristic of asthma, we asked whether basenji-greyhound dogs have greater bronchial reactivity to hypotonic and isotonic aerosols than mongrel dogs do. We assessed bronchial reactivity by measuring both the total pulmonary resistance and the bronchial response to an acetylcholine aerosol, before and after delivery of hypotonic and isotonic aerosols. Bronchial reactivity as measured by a change in total pulmonary resistance increased 9-fold after delivery of hypotonic and 5-fold after delivery of isotonic aerosols in 5 anesthetized basenji-greyhound dogs, but not in 3 similarly challenged mongrel dogs (p less than 0.01). Bronchial reactivity as measured by an increased bronchial response to acetylcholine aerosol increased 3-fold in basenji-greyhound dogs but not in mongrel dogs. Thus, hypotonic and isotonic aerosols increase bronchial reactivity in basenji-greyhound dogs. We also asked whether vagal or nonvagal pathways are involved in the increase in total pulmonary resistance induced by a hypotonic aerosol. Both vagal and nonvagal pathways appear to be involved, since blockade of the vagal pathway by intravenously administered atropine only partially inhibited the bronchoconstriction induced by a hypotonic aerosol (54%). Disodium cromoglycate, which inhibits vagal and nonvagal pathways, partially inhibited the bronchoconstriction (57%), but even in combination with atropine, did not completely inhibit it (68%). Our observations in basenji-greyhound dogs are similar to results in asthmatic subjects, suggesting that basenji-greyhound dogs are useful experimental animals in which to study the mechanisms by which hypotonic and isotonic aerosols increase bronchial reactivity. abstract_id: PUBMED:11293488 The effect of immunotherapy on nonspecific bronchial hyperresponsiveness in bronchial asthma and allergic rhinitis. Allergen injection therapy may improve nonallergic bronchial hyperresponsiveness, but results at the moment are less than convincing. The present study was conducted to evaluate the effect of immunotherapy on the degree of nonspecific bronchial hyperresponsiveness in patients with allergic bronchial asthma (BA) and/or allergic rhinitis (AR). Methacholine challenge bronchial provocation test, allergic skin test, serum IgE and peripheral blood eosinophil counts were performed before and after 12 months or more of immunotherapy. The improved group, as determined by a shift of at least two doubling concentrations of methacholine, was 75% of AR (n=16), 41.7% of BA (n=24) and 53.8% of BA+AR (n=13).
The geometric mean of the methacholine provocative concentration (PC20) changed from 3.40 to 14.36 mg/ml (P <0.05) in AR, from 0.73 to 1.04 mg/ml in BA (not significant), and from 1.43 to 5.07 mg/ml (P <0.05) in BA+AR. In conclusion, nonspecific bronchial hyperresponsiveness was improved by immunotherapy in three quarters of the allergic rhinitis cases and in about half of the allergic bronchial asthma patients, which suggests that immunotherapy might be helpful in preventing the development of bronchial hyperresponsiveness in allergic rhinitis patients, and that it does not improve bronchial hyperresponsiveness in about half of allergic bronchial asthma patients. abstract_id: PUBMED:4014783 Non-allergic bronchial hyperreactivity in asthmatic children decreases with age and increases with mite immunotherapy. Bronchial reactivity to histamine increased 2-fold or greater in six of seven mite-sensitive asthmatic children after mite immunotherapy for 1 year. Bronchial reactivity decreased 2-fold or greater in seven of the eight subjects who received no injections (P = .008). abstract_id: PUBMED:7872988 The effect of immunotherapy on bronchial hyperresponsiveness in asthmatic children. Bronchial hyperresponsiveness (BHR) to methacholine was evaluated in 47 asthmatic children before and after allergen-specific immunotherapy (IT) by using the forced oscillation method. Eighty-seven percent (13/16) of BHR-negative patients had a good clinical response after 1-year immunotherapy while there were only 45% (14/31) in the BHR-positive asthmatic children (p < 0.02). In the BHR-positive group, the relationship between clinical response and the change of nonspecific bronchial sensitivity was further analyzed. In those with a good clinical response (IT responders), the tolerance dose of methacholine was significantly increased from 0.78 +/- 0.71 to 4.11 +/- 4.65 mg/ml (p < 0.05), and bronchial sensitivity increased from 1.14 +/- 1.42 U to 7.55 +/- 9.55 U (p < 0.02). In those with no clinical improvement (IT non-responders), there were no significant changes in either methacholine tolerance dose or bronchial sensitivity. With respect to other parameters, such as Grs, PD35, and SGrs, the differences between before and after immunotherapy were similar in both the IT responders and IT non-responders. These results suggest that asthmatic children with different bronchial sensitivity had different responses to immunotherapy and that the clinical improvement after immunotherapy is significantly related to the improvement of bronchial hyperresponsiveness. abstract_id: PUBMED:10086204 Non-specific nasal and bronchial reactivity are not correlated in non-asthmatic subjects occupationally exposed to irritants and in healthy subjects. Background: Non-specific nasal and bronchial reactivity are frequently correlated in disease (rhinitis and asthma). It is not known whether such a correlation exists in subjects exposed to irritants and in healthy subjects. In order to test the hypothesis that a correlation between non-specific nasal and bronchial reactivity exists in non-asthmatic subjects, two groups of subjects were studied: 110 workers occupationally exposed to respiratory irritants, and 86 non-exposed healthy controls. Methods: Allergy, non-specific nasal, and non-specific bronchial reactivity were tested, and smoking habits were categorized in each subject.
Results: Respiratory irritants cause a substantial increase in nasal and bronchial reactivity when compared with the group of healthy, non-exposed subjects (33.6% nasal hyperreactors and 20.0% bronchial hyperreactors vs. 4.7% nasal hyperreactors and 2.3% bronchial hyperreactors, respectively). However, occupational exposure to respiratory irritants does not induce the correlation between non-specific nasal and bronchial reactivity that is frequently found in asthmatic and rhinitic subjects. Conclusions: We found no correlation between non-specific nasal and bronchial reactivity either in subjects occupationally exposed to respiratory irritants or in the group of healthy subjects. This lack of correlation in both studied groups seems to be a feature of non-diseased airways. Smoking as an additional factor does not increase nasal and bronchial reactivity either in workers exposed to irritants or in healthy subjects. Smoking also does not strengthen the correlation between upper and lower airways' reactivity in either group. Answer: Specific immunotherapy (SIT) injection does not appear to cause an increase in bronchial reactivity (BR). A study investigating the effects of subcutaneous SIT injection on children with pollen allergy found no increase in BR following SIT injections either during or outside the pollen season. The study concluded that SIT injections do not cause an increase in BR, which suggests that empirical dose reduction during the pollen season may not be necessary (PUBMED:16316871). Additionally, another study indicated that long-term intense exposure to grass pollen can mask the positive effects of allergenic immunotherapy on non-specific bronchial hyperresponsiveness, and that fluctuations in pollination intensity during consecutive years of immunotherapy can influence BR in patients subjected to SCIT (PUBMED:25276155). Furthermore, other research has shown that specific immunotherapy decreases specific bronchial hyperreactivity in asthmatic patients, in both the early and late asthmatic reactions, although no significant effects on non-specific bronchial hyperreactivity were found (PUBMED:11144047). These findings suggest that SIT does not exacerbate BR and may even have beneficial effects on specific bronchial hyperreactivity in some patients.
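The "improvement" criterion used in PUBMED:11293488 (a shift of at least two doubling concentrations of methacholine) and the geometric-mean PC20 summaries quoted above are simple log2-scale calculations. The following minimal Python sketch is purely illustrative: the 3.40 to 14.36 mg/ml change is the group-level figure quoted from the abstract (the criterion itself was applied per patient), the list passed to the geometric mean is invented, and the helper names are our own.

```python
import math

def doubling_shift(pc20_before, pc20_after):
    """Number of doubling concentrations the PC20 shifted (log2 of the ratio)."""
    return math.log2(pc20_after / pc20_before)

def improved(pc20_before, pc20_after, min_doublings=2.0):
    """'Improved' per the criterion: PC20 rose by at least two doubling concentrations."""
    return doubling_shift(pc20_before, pc20_after) >= min_doublings

def geometric_mean(values):
    """Geometric mean, the usual summary for log-normally distributed PC20 values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Group-level PC20 change reported for allergic rhinitis in PUBMED:11293488:
print(doubling_shift(3.40, 14.36))       # ~2.08 doublings
print(improved(3.40, 14.36))             # True -> would count as improved
print(geometric_mean([2.0, 4.0, 8.0]))   # 4.0 (hypothetical PC20 values)
```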
Instruction: ERCP-induced and non-ERCP-induced acute pancreatitis: Two distinct clinical entities with different outcomes in mild and severe form? Abstracts: abstract_id: PUBMED:29523291 ERCP induced and non-ERCP-induced acute pancreatitis: Two distinct clinical entities? Acute pancreatitis (AP) is a common gastrointestinal disease of varied etiology; however, the most common causes of AP are gallstones and alcohol abuse. AP has emerged as the most frequent complication after endoscopic retrograde cholangiopancreatography (ERCP). Post-ERCP pancreatitis is generally a clinically irrelevant condition; however, it can be severe or even fatal in up to 0.8% of cases. Different clinical courses and outcomes have been observed between mild and severe AP of different etiologies (i.e., non-ERCP AP and post-ERCP AP), which opens the discussion as to whether they are the same or distinct clinical entities. abstract_id: PUBMED:17351804 Mild ERCP-induced and non-ERCP-related acute pancreatitis: two distinct clinical entities? Background: The aim of this study was to evaluate the hypothesis that mild endoscopic retrograde cholangiopancreatography (ERCP)-induced acute pancreatitis (AP) runs a distinctly shorter clinical course compared with mild attacks caused by other etiological factors. Methods: ERCP-induced AP was defined as the persistence of postprocedure abdominal pain requiring analgesia for at least 24 h in association with hyperamylasemia of more than three times the normal value. Among 2552 patients who had undergone ERCP between 1996 and 2002 in two different hospitals, mild AP developed after "elective" ERCP in 48 (group I). Among a random sample of 10% of 700 patients admitted with AP to both hospitals during the same study period, 58 had mild non-ERCP-related AP (group II). Results shown are medians. Results: Patients in group I had significantly shorter durations of pain (55 vs. 114 h, P < 0.0001), analgesic intake (56 vs. 93 h, P = 0.007), and intravenous hydration (48 vs. 80 h, P < 0.0001), a lower opiate analgesic requirement (58 mg morphine equivalent vs. 100 mg, P = 0.001), a shorter time to resumption of oral diet (3 vs. 5 days, P < 0.0001), and a shorter hospital stay (4 vs. 7 days, P < 0.0001) than patients in group II. Conclusions: The current definitions of ERCP-induced AP select a cohort of patients whose mild attacks run a significantly shorter and milder course than non-ERCP-related mild attacks. A new consensus definition of ERCP-induced AP that describes attacks of similar clinical course is needed. abstract_id: PUBMED:25673350 Technical outcomes and complications of pediatric ERCP. Background: Previous reports have suggested that endoscopic retrograde cholangiopancreatography (ERCP) in pediatric patients is safe, but because the total number of cases reported in the literature remains small, questions remain regarding safety and outcomes of this procedure in children. Methods: Pediatric patients who underwent ERCP were identified from diagnostic and billing codes. Medical records were examined for age, gender, diagnosis codes, indication, type of sedation, findings, interventions, and complications. Descriptive statistics were prepared for data including frequencies, percentages for categorical variables, and means and standard deviations for quantitative variables. Extensions of logistic regression were used to examine the relationship between demographic and clinical characteristics and post-ERCP pancreatitis.
Results: Four hundred and twenty-five ERCPs were performed on 276 pediatric patients. Patients' ages ranged from 72 days to 18 years, and there was an equal gender distribution. Targeted duct cannulation was achieved in over 95% of cases, and therapeutic interventions were common (81.3%). The most common indications were biliary obstruction (43.3%) and chronic pancreatitis (26.8%). The most common complication was post-ERCP pancreatitis, which occurred in 26 cases (7.7%), and the majority were mild by consensus definition. Procedural characteristics significantly associated with post-ERCP pancreatitis included: pancreatogram, pancreatic sphincterotomy, pancreatic duct stenting, and pancreatic duct stricture dilation. Immediate post-sphincterotomy bleeding occurred twice (1.1%), and both cases were successfully treated during the procedure. There was one instance of a possible delayed sphincterotomy bleed. No mortality was observed. Conclusions: This retrospective review is one of the larger ERCP series reported in the pediatric population and adds to the growing body of literature supporting that ERCP can be safely applied in younger patients by skilled endoscopists. The complication rate found in this series closely parallels that observed in adults. Prospective studies of pediatric ERCP are still required to more definitively define the safe and appropriate use of this procedure in children. abstract_id: PUBMED:29560692 The safety and efficacy of ERCP in octogenarians: a comparison of two geriatric age cohorts. Background And Aims: The application of endoscopic retrograde cholangiopancreatography (ERCP) in the octogenarian population is limited. The aim of the study was to compare the clinical characteristics and outcomes of ERCP in octogenarian patients with non-octogenarian elderly patients. Materials and Methods: A total of 1044 patients who underwent ERCP for the first time and were over the age of 65 were enrolled in this study. Data regarding demographic characteristics, medical history of patients, indications, technical success rate, final clinical diagnosis and complications were analyzed. 284 patients and 760 patients composed the octogenarian and non-octogenarian group, respectively. Results: The mean ages in the octogenarian and non-octogenarian groups were 83.99±3.85 and 71.50±4.27 years, respectively. Both groups had similar values with regard to chronic concomitant diseases (96.6% vs. 96.5%). The most common indication for ERCP in the two groups was common bile duct stones. There was no significant difference in technical success or failure of cannulation between the two groups (p > 0.05). The overall duration of the ERCP procedure was shorter in the non-octogenarian group than in the octogenarian group (p = 0.004). The overall complication rate was similar in both groups (p > 0.177). The most common complication in either group was post-ERCP pancreatitis. Conclusions: ERCP is an effective and safe procedure in both octogenarian and non-octogenarian patients with naive papilla when performed by experienced endoscopists. abstract_id: PUBMED:35641898 The effect of nafamostat mesilate infusion after ERCP for post-ERCP pancreatitis. Background: Nafamostat mesilate decreases the incidence of pancreatitis after endoscopic retrograde cholangiopancreatography (ERCP). However, no studies have administered nafamostat mesilate after ERCP.
We therefore investigated whether the infusion of nafamostat mesilate after ERCP affects post-ERCP pancreatitis (PEP) in high-risk patients. Methods: In a tertiary hospital, 350 patients at high risk of PEP were reviewed retrospectively. Among them, 201 patients received nafamostat mesilate after ERCP. Patient-related and procedure-related risk factors for PEP were collected. We performed propensity score matching to adjust for significantly different baseline characteristics. The incidence and severity of PEP were evaluated according to the infusion of nafamostat mesilate. The risk factors for PEP were also analyzed by multivariate logistic regression. Results: The baseline characteristics were not different after the matching. The difference in the PEP rate (17.4% vs. 10.3%, P = 0.141) was not significant. Among the PEP patients, the proportion of mild PEP was significantly higher in the nafamostat mesilate group (85.7% vs. 45.5%, P = 0.021). Only one patient in the nafamostat mesilate group developed severe PEP. Although young age (odds ratio [OR] 3.60, 95% CI 1.09-11.85, P = 0.035) was a risk factor, nafamostat mesilate (odds ratio [OR] 0.30, 95% CI 0.09-0.98, P = 0.047) was a protective factor for moderate to severe PEP. Conclusions: The administration of nafamostat mesilate after ERCP in high-risk patients was not effective in preventing PEP, but may attenuate the severity of PEP. abstract_id: PUBMED:35366271 Risk Stratification in Post-ERCP Pancreatitis: How Do Procedures, Patient Characteristics and Clinical Indicators Influence Outcomes? Background: Post-endoscopic retrograde cholangiopancreatography (ERCP) pancreatitis (PEP) remains common, and severe complications are associated with ERCP. There is no previous study detailing the effect of race and gender in a US-based population on risk of PEP. Methods: Data were collected on 269 "first-performed" consecutive ERCPs followed by division by race (White vs. African-American) and sex (Female vs. Male). A total of 53 probable risk factors were evaluated by uni- and multivariate analysis followed by outcomes expressed as an odds ratio (OR) (with a 95% confidence interval, 95% CI). Finally, a principal component analysis was performed to construct a risk prediction model for PEP, which can be used by clinicians at bedside. Results: After analyzing the risk factors based on race and gender-based groups, Caucasian males with PEP are more likely to have prior history of pancreatitis (p = 0.009), lower hemoglobin (p = 0.02)/blood urea nitrogen (BUN) (p = 0.01)/creatinine before ERCP (p = 0.07) and lower BUN (p = 0.01)/creatinine after ERCP (p = 0.07), while Caucasian females with PEP are more likely to have higher white blood cell (WBC) count before ERCP (p = 0.08) and lower amylase (p = 0.10)/bilirubin (p = 0.09)/aspartate aminotransferase (AST) after ERCP (p = 0.08). African-American males with PEP are more likely to have lower weight (p = 0.001)/smaller height (p = 0.0005)/lower alkaline phosphatase (p = 0.002)/AST (p = 0.04)/alanine transaminase (ALT) (p = 0.03) before ERCP and lower alkaline phosphatase (p = 0.002)/AST (p = 0.01)/ALT (p = 0.004) after ERCP, while African-American females with PEP are more likely to have prior history of pancreatitis (p = 0.004)/higher lipase before (p = 0.0001) and after (p = 0.05) ERCP along with increased risk with pancreatic duct cannulation (p = 0.0001) and injection (p = 0.0001)/biliary sphincterotomy (p = 0.0001).
Importantly, prior history of ERCP, elevated AST after ERCP, and BUN prior to ERCP were found to be important clinical features predicting post-ERCP pancreatitis. To our knowledge, this is the first known attempt at developing a risk scoring system for PEP in a US population with decision tree learning. Conclusions: It is very evident that both patient and procedure-related risk factors vary by race and gender in the US population, leading to the development of a new risk assessment tool for PEP that can be used in clinical practice. We need to follow up with a larger prospective study to validate this novel race and gender-based risk scoring system for PEP. abstract_id: PUBMED:36117813 Laparo-endoscopic management of chole-choledocholithiasis: Rendezvous or intraoperative ERCP? A single tertiary care center experience. Introduction: The management of chole-choledocholithiasis remains a matter of debate in the effort to preserve minimally invasive management, and different options have been proposed, with single- or two-stage approaches. Two techniques of the single-stage approach are intraoperative ERCP and laparoscopic rendezvous, which have the great advantage of reducing the length of hospital stay with increased patient compliance. This retrospective study aims to evaluate and compare the efficacy and safety of the intraoperative ERCP and rendezvous techniques over a period of more than 15 years. Materials And Methods: Clinical records of 113 patients who underwent single-stage management for chole-choledocholithiasis between January 2003 and December 2020 were retrospectively reviewed using a prospectively maintained database. Patients were separated into two groups: those managed with intraoperative ERCP and those with rendezvous, and their intraoperative and postoperative parameters were compared. All patients were followed up for 6 months in an outpatient setting. Results: A total of 68 (60%) patients were treated with intraoperative ERCP, while the remaining 45 (40%) were treated with rendezvous. There were no significant differences in terms of comorbidities. ERCP was performed with a median operative time of 145 min (104-168) and an endoscopic time of 27 min (15-36). Meanwhile, rendezvous was performed with a significantly lower operative [120 min (94-147)] and endoscopic time [15 min (12-22)]. No intraoperative complications were recorded. Patients treated with rendezvous had a significantly shorter median hospital stay [4 (3-5) vs. 3 (2-4) days, p < 0.05]. No hospital readmissions or mortalities were observed in either group after 30 days. Ten mild pancreatitis cases were observed, mainly in the intraoperative ERCP group (9 vs. 1, p < 0.05), and all were treated conservatively. Only two patients treated with intraoperative ERCP developed biliary complications later on. Conclusion: Laparoscopic rendezvous should be considered a preferable alternative to intraoperative ERCP for the treatment of patients with concomitant CBD stones and gallstones.
Traditionally seen as two separate entities, the two worlds of EUS and ERCP have recently come together under the new discipline of bilio-pancreatic endoscopy. Nevertheless, the complexity of both EUS and ERCP led the European Society of Gastrointestinal Endoscopy to identify quality in endoscopy as a top priority in its recent EUS and ERCP curriculum recommendations. The clinical benefits of performing EUS and ERCP in the same session are several, such as benefiting from real-time information from EUS, having one single sedation for both the diagnosis and the treatment of biliary stones, reducing the risk of cholangitis/acute pancreatitis while waiting for ERCP after the EUS diagnosis, and ultimately shortening the hospital stay and costs while preserving patients' outcomes. Potential candidates for the same session approach include patients at high risk for CBD stones, symptomatic individuals with status post-cholecystectomy, pregnant women, and those unfit for surgery. This narrative review discusses the main technical aspects and evidence from the literature about EUS and ERCP in the management of choledocholithiasis. abstract_id: PUBMED:32481755 Anatomy of Major Duodenal Papilla Influences ERCP Outcomes and Complication Rates: A Single Center Prospective Study. Background: Endoscopic retrograde cholangiopancreatography (ERCP) has been one of the most intensely studied endoscopic procedures due to its overall high complication rates when compared to other digestive endoscopy procedures. The safety and outcome of such procedures have been linked to multiple procedure- or patient-related risk factors. The aim of our study is to evaluate whether the morphology of the major duodenal papilla influences the ERCP outcomes and complication rates. Methods: A total of 322 patients with a native papilla have been included in the study over an eight-month period. Morphology of the papilla has been classified into normal papilla and four anatomical variations (Type I-IV). All patients have been prospectively monitored over a 15-day period after ERCP. Procedural outcomes and complication rates have been registered. Results: Morphology of the papilla influences both overall complication rates (95%CI, p = 0.0066) and post-ERCP pancreatitis rates (95%CI, p = 0.01001) in univariate analysis. Type IV papillae have proven to be independent risk factors for post-ERCP pancreatitis in multivariate analysis (OR = 12.176, 95%CI, p = 0.005). Type I papillae have been significantly linked to difficult cannulation (AUC = 0.591, 95%CI, p = 0.008). Conclusions: In the monitored cohort, the morphology of the major duodenal papilla significantly influenced both ERCP outcomes and post-procedural complication rates. abstract_id: PUBMED:38103747 Clinical predictive value of renalase in post-ERCP pancreatitis. Background & Aims: Plasma levels of renalase decrease in acute experimental pancreatitis. We aimed to determine if decreases in plasma renalase levels after ERCP predict the occurrence of post-ERCP pancreatitis (PEP). Methods: In this prospective cohort study at a tertiary hospital, plasma renalase was determined before ERCP (baseline), and at 30 and 60 min after ERCP. Native renalase levels, acidified renalase, and native/acidified renalase proportions were analyzed over time using a longitudinal regression model. Results: Among 273 subjects, 31 developed PEP.
Only one PEP patient had baseline native renalase >6.0μg/ml, while 38 of 242 without PEP had native renalase >6.0μg/ml, indicating sensitivity of 97% (30/31) and specificity of 16% (38/242) in predicting PEP. Longitudinal models did not show differences over time between the groups. Conclusion: Baseline native renalase levels are very sensitive for predicting PEP. Further studies are needed to determine the potential clinical role of renalase in predicting and preventing PEP. Answer: The literature suggests that ERCP-induced and non-ERCP-induced acute pancreatitis (AP) may indeed be two distinct clinical entities, particularly when considering their clinical courses and outcomes in mild and severe forms. Acute pancreatitis has various etiologies, with gallstones and alcohol abuse being the most common causes. Post-ERCP pancreatitis has emerged as the most frequent complication following endoscopic retrograde cholangiopancreatography (ERCP), and while it is generally a clinically irrelevant condition, it can be severe or even fatal in a small percentage of cases (up to 0.8%) (PUBMED:29523291). A study comparing mild ERCP-induced AP with mild attacks caused by other etiological factors found that ERCP-induced AP had a significantly shorter duration of pain, analgesic intake, intravenous hydration, time to resumption of oral diet, and hospital stay. This suggests that mild ERCP-induced AP runs a distinctly shorter and milder course than non-ERCP-related mild attacks, indicating that they may indeed be different clinical entities (PUBMED:17351804). In pediatric patients, ERCP has been shown to be safe, with a complication rate that closely parallels that observed in adults. The most common complication was post-ERCP pancreatitis, which was mostly mild by consensus definition (PUBMED:25673350). In octogenarians, ERCP has been found to be an effective and safe procedure, with no significant difference in technical success, failure of cannulation, or overall complication rate when compared to non-octogenarian elderly patients. The most common complication in both groups was post-ERCP pancreatitis (PUBMED:29560692). The administration of nafamostat mesilate after ERCP in high-risk patients did not effectively prevent PEP but may attenuate the severity of PEP, suggesting a potential role in influencing the outcome of ERCP-induced AP (PUBMED:35641898). Risk stratification in post-ERCP pancreatitis has shown that both patient and procedure-related risk factors vary by race and gender in the US population, leading to the development of a new risk assessment tool for PEP that can be used in clinical practice (PUBMED:35366271). In conclusion, the evidence supports the notion that ERCP-induced and non-ERCP-induced AP are two distinct clinical entities with different outcomes, especially in their mild forms. The severity of ERCP-induced AP can be influenced by various factors, including the morphology of the major duodenal papilla. A study found that the morphology of the papilla influences both overall complication rates and post-ERCP pancreatitis rates, with certain types of papillae being independent risk factors for post-ERCP pancreatitis (PUBMED:32481755). Additionally, a study on the clinical predictive value of renalase in post-ERCP pancreatitis found that baseline native renalase levels are very sensitive for predicting PEP, although further studies are needed to determine the potential clinical role of renalase in predicting and preventing PEP (PUBMED:38103747).
Moreover, the management of chole-choledocholithiasis through single-stage approaches like intraoperative ERCP and laparoscopic rendezvous has been evaluated. The study suggests that laparoscopic rendezvous should be considered a preferable alternative to intraoperative ERCP for the treatment of patients with concomitant CBD stones and gallstones, as it resulted in a significantly shorter median hospital stay and fewer cases of mild pancreatitis (PUBMED:36117813). Finally, the combination of EUS and ERCP in the same session for biliary stones has been discussed as a strategy that can benefit from real-time information from EUS, reduce the risk of cholangitis/acute pancreatitis while waiting for ERCP after the EUS diagnosis, and ultimately shorten the hospital stay and costs while preserving patients' outcomes (PUBMED:34684056). In summary, the literature indicates that ERCP-induced AP and non-ERCP-induced AP have different clinical courses and outcomes, particularly in mild cases. The severity of ERCP-induced AP can be influenced by patient characteristics, procedural factors, and anatomical variations, and there are strategies and predictive tools that can potentially improve the management and outcomes of ERCP-induced AP.
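The sensitivity and specificity quoted from the renalase study (PUBMED:38103747) follow directly from the reported counts: 31 patients developed PEP (30 of whom had baseline native renalase at or below the 6.0 μg/ml cut-off) and 242 did not (38 of whom were above it). A minimal Python sketch of that arithmetic; the function and variable names are our own, not the study's:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# A "positive" test here means baseline native renalase <= 6.0 ug/ml (predicts PEP).
# PEP group (n=31): 30 positive, 1 negative.
# Non-PEP group (n=242): 38 negative (> 6.0 ug/ml), 204 positive.
sens, spec = sensitivity_specificity(tp=30, fn=1, tn=38, fp=204)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 97%, 16%
```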
Instruction: Does conversion of a laparoscopic colectomy adversely affect patient outcome? Abstracts: abstract_id: PUBMED:15540299 Does conversion of a laparoscopic colectomy adversely affect patient outcome? Purpose: Conversion during laparoscopic colectomy varies in frequency according to the surgeon's experience and case selection. However, there remains concern that conversion is associated with increased morbidity and higher hospital costs. Methods: From January 1999 to August 2002, 430 laparoscopic colectomies were performed by two surgeons, with 51 (12 percent) cases converted to open surgery. Converted cases were matched for operation and age to 51 open cases performed mostly by other colorectal surgeons from our department. Data collected included gender, American Society of Anesthesiology score, operative indication, resection type, operative stage at conversion, in-hospital complications, direct hospital costs, unexpected readmission within 30 days, and mortality. Results: There were no significant differences between the groups for age (converted, 55 +/- 19; open, 62 +/- 16), male:female ratio (converted, 17:34; open, 23:28), or American Society of Anesthesiology score distribution. Indications for surgery were neoplasia (converted, 16; open, 31); diverticular disease (converted, 21; open, 13); Crohn's disease (converted, 12; open, 5); and other disease (converted, 2; open, 2). Operative times were similar (converted, 150 +/- 56 minutes; open, 132 +/- 48 minutes). Conversions occurred before defining the major vascular pedicle/ureter (50 percent), in relation to intracorporeal vascular ligation (15 percent), or during bowel transection or presacral dissection (35 percent). Specific indications for conversion were technical (41 percent), followed by adhesions (33 percent), phlegmon or abscess (23 percent), bleeding (6 percent), and failure to identify the ureter (6 percent). Median hospital stay was five days for both groups. In-hospital complications (converted 11.6 percent; open 8 percent), 30-day readmission rate (converted 13 percent vs. open 8 percent), and direct costs were similar between groups. There were no mortalities. Conclusion: Conversion of a laparoscopic colectomy does not result in inappropriately prolonged operative times, increased morbidity or length of stay, increased direct costs, or unexpected readmissions compared with similarly complex laparotomies. A policy of commencing most cases suitable for a laparoscopic approach laparoscopically offers patients the benefits of a laparoscopic colectomy without adversely affecting perioperative risks. abstract_id: PUBMED:28070033 Analysis of indication for laparoscopic right colectomy and conversion risks. Laparoscopic surgery has developed continuously over the past years, becoming the gold standard for some surgical interventions. Laparoscopic colorectal surgery is well established as a safe and feasible procedure to treat benign and malignant pathologies. In this paper we studied in depth the role of laparoscopic right colectomy, analysing the indications for this surgical procedure and the factors related to conversion from laparoscopy to open surgery. We described the different surgical techniques of laparoscopic right colectomy, comparing extracorporeal to intracorporeal anastomosis, and we pointed out the different ways to access the abdomen (multiport vs. single incision).
The indications for laparoscopic right colectomy are benign (inflammatory bowel disease and rare right colonic diverticulitis) and malignant diseases (right colon cancer and appendiceal neuroendocrine neoplasm): we described the good outcomes of laparoscopic right colectomy in all these illnesses. Laparoscopic conversion rates in right colectomy are reported as 12-16%; we described the different types of risk factors related to open conversion: patient-related, disease-related and surgeon-related factors, procedural factors and intraoperative complications. We conclude that laparoscopic right colectomy is considered superior to open surgery in short-term outcomes, without difference in long-term outcomes. Key Words: Conversion risks, Indication to treatment, Laparoscopy, Post-operative pain, Right colectomy. abstract_id: PUBMED:32057167 Colorectal anastomosis after laparoscopic extended left colectomy: techniques and outcome. Aim: After extended left colectomy, traditional colorectal anastomosis is often not feasible because of insufficient length of the remaining colon to perform a tension-free anastomosis. Total colectomy with ileorectal anastomosis could be an alternative but this can lead to unsatisfactory quality of life. Trans-mesenteric colorectal anastomosis or inverted right colonic transposition (the so-called Deloyers procedure) are two possible solutions for creating a tension-free colorectal anastomosis after extended left colectomy. Few studies have reported results with these two techniques, and mostly via laparotomy. The aim of this study was to describe the trans-mesenteric colorectal anastomosis and the inverted right colonic transposition procedure via a laparoscopic approach and report the outcome in a series of 13 consecutive patients. Method: This was a retrospective chart review of laparoscopic colorectal surgery with trans-mesenteric colorectal anastomosis or the inverted right colonic transposition procedure from January 2015 to 2019. An accompanying video demonstrates these two techniques. Results: Thirteen consecutive patients underwent either a laparoscopic trans-mesenteric colorectal anastomosis (n = 9) or an inverted right colonic transposition procedure (n = 4). One patient had intra-operative presacral bleeding that was stopped successfully without conversion. Two patients had a postoperative intra-abdominal abscess, but no anastomotic complications were recorded. The median number of bowel movements per day after 6 months was 2 (range 2-5). Conclusions: Trans-mesenteric colorectal anastomosis or the inverted right colonic transposition procedure is feasible laparoscopically. The now well-established classical advantages of the laparoscopic approach are associated with good functional outcome after these procedures. abstract_id: PUBMED:26768006 Does Conversion in Laparoscopic Colectomy Portend an Inferior Oncologic Outcome? Results from 104,400 Patients. Background: Limited data exist regarding the effect of conversion from laparoscopic to open colectomy on perioperative and oncologic outcomes in colon cancer. Study Design: The National Cancer Data Base was used to identify patients who underwent colectomy for non-metastatic colon cancer (2010-2012). Patients were stratified into three groups: laparoscopic/robotic-assisted colectomy (MIC), converted colectomy (CC), and open colectomy (OC). Multivariable modeling was applied to compare outcomes from CC and MIC to OC while adjusting for patient, clinical, and tumor characteristics.
Results: Of 104,400 patients, 40,328 (38.6 %) underwent MIC, 57,928 (55.5 %) OC, and 6144 (5.9 %) CC. After adjustment, the rate of positive surgical margins was not significantly different between CC and OC (p = 0.44). However, with adjustment, CC versus OC was associated with shorter hospital length of stay (4 % decrease, 95 % CI 2-5 %, p < 0.0001) and lower odds of 30-day mortality (OR 0.77, 95 % CI 0.64-0.94, p = 0.0112). Adjusted overall survival was similar between CC and OC (p = 0.34). Conclusions: Conversion from laparoscopic to open colectomy was not associated with compromised oncologic outcomes, while maintaining improved short-term outcomes despite being attempted in only 45 % of patients. These data suggest that utilization of laparoscopic colectomy should be attempted for patients with colon cancer. abstract_id: PUBMED:25311007 Factors associated with conversion from laparoscopic to open colectomy using the National Surgical Quality Improvement Program (NSQIP) database. Aim: Conversion rates from laparoscopic to open colectomy and associated factors are traditionally reported in clinical trials or reviews of outcomes from experienced institutions. Indications and selection criteria for laparoscopic colectomy may be more narrowly defined in these circumstances. With the increased adoption of laparoscopy, conversion rates using national data need to be closely examined. The purpose of this study was to use data from the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) to identify factors associated with conversion of laparoscopic to open colectomy at a national scale in the United States. Method: The ACS-NSQIP Participant Use Data Files for 2006-2011 were used to identify patients who had undergone laparoscopic colectomy. Converted cases were identified using open colectomy as the primary procedure and laparoscopic colectomy as 'other procedure'. Preoperative variables were identified and statistics were calculated using SAS version 9.3. Logistic regression was used to model the multivariate relationship between patient variables and conversion status. Results: Laparoscopy was successfully performed in 41,585 patients, of whom 2508 (5.8%) required conversion to an open procedure. On univariate analysis, the following factors were significant: age, body mass index (BMI), American Society of Anesthesiologists (ASA) class, presence of diabetes, smoking, chronic obstructive pulmonary disease, ascites, stroke, weight loss and chemotherapy (P < 0.05). The following factors remained significant on multivariate analysis: age, BMI, ASA class, smoking, ascites and weight loss. Conclusion: Multiple significant factors for conversion from laparoscopic to open colectomy were identified. A novel finding was the increased risk of conversion for underweight patients. As laparoscopic colectomy becomes increasingly utilized, factors predictive of conversion to open procedures should be sought via large national cohorts. abstract_id: PUBMED:19789665 Influence of obesity on the short-term outcome of laparoscopic colectomy for colorectal cancer. Purpose: Obesity has been generally associated with increased surgical risk. However, data on the outcome of laparoscopic colectomy in obese and non-obese patients are controversial. The aim of this study is to assess the short-term outcome of laparoscopic colectomy for colorectal cancer (CRC) in obese patients as compared with non-obese patients.
Materials And Methods: Sixty-nine patients who underwent laparoscopic anterior resection for CRC during the past six years were retrospectively evaluated. The patients with CRC involving the sigmoid or rectosigmoid colon and subjected to intracorporeal anastomosis were included in this study. They were divided into three groups according to body mass index (BMI): obese (BMI >/= 28.0 kg/m(2)), pre-obese (BMI: 25.0-27.9 kg/m(2)) and non-obese (BMI < 25.0 kg/m(2)). Results: Nine patients (13.0%) were obese, 11 patients (15.9%) were pre-obese and 49 patients (71.1%) were non-obese. Patient characteristics, such as age, gender, tumor location, previous laparotomy, were similar among the three groups. There were no significant differences in operative time, blood loss, intraoperative complications and conversion rates. Postoperative complications and duration of postoperative hospital stay were also similar among the three groups. However, two of the three patients in the pre-obese group had to be operated on again due to incarceration of the small bowel into a port site. Conclusions: Laparoscopic colectomy can be safely performed in obese patients with short-term results similar to those obtained in non-obese and pre-obese patients. abstract_id: PUBMED:28636471 Laparoscopic colectomy in a district hospital: the single surgeon can be safe. Background: Several outcome measures have been identified for colorectal surgery and published in the literature. This study sought to compare outcomes of high volume laparoscopic colectomy by a single surgeon in a district hospital with outcomes from tertiary referral centres. Methods: This was a retrospective review of elective laparoscopic colectomy by a single laparoscopic general surgeon in a district hospital over a 51-month period using a prospectively maintained database. The key outcome measures studied were length of hospital stay, conversion to open, anastomotic leak, wound infection, re-admission and 30-day mortality. Results: 187 elective laparoscopic colectomies were performed at the Kent and Canterbury Hospital between July 2008 and October 2012. The median patient age was 69 years (range 22-90 years). Median length of hospital stay was 4 days (range 1-48 days). Anastomotic leak occurred in 4 (2.1%) patients. Seven (3.7%) patients underwent conversion to open surgery. Re-admission occurred in 4 (2.1%) patients for small bowel obstruction (1), wound infection (1), anastomotic leak (1) and colo-vaginal fistula (1). There was one post-operative death from severe chest infection (0.5%). These results are similar to those published by tertiary referral centres. Conclusions: This study of outcomes at a district hospital shows that the outcome reported from laparoscopic colorectal surgery in tertiary referral centres is reproducible at the district hospital level by a single surgeon with a high operative volume. abstract_id: PUBMED:24996578 Laparoscopic colectomy is feasible in the mega-obese patient using a standardized technique. The use of laparoscopic colectomy in morbidly obese individuals is controversial. There are significant safety issues including increased risk of bleeding, increased anastomotic leak and high operative conversion rates. We report the first successful laparoscopic colectomy in a patient with the most extreme category of morbid obesity, mega-obesity (body mass index [BMI]>70). 
We describe a standardized, completely intracorporeal technique with all resection lines reinforced with a bioabsorbable staple-line reinforcement material. This technique has the potential to make laparoscopic colectomy more feasible for patients with morbid obesity in general. abstract_id: PUBMED:35731300 Trends in utilization of laparoscopic colectomy according to race: an analysis of the NIS database. Background: Laparoscopic colectomy has been associated with improved recovery and decreased complications when compared to an open approach. Consequently, the rates of laparoscopic colectomy have increased. Race has been identified as a factor that influences a patient's likelihood of undergoing laparoscopic colectomy. Therefore, the purpose of this study is to analyze the rates of laparoscopic colectomy stratified by race over time. Methods: Patients were selected using procedure codes for colectomy within the National Inpatient Sample (NIS) database from 2009 to 2018. The primary independent variable was race (Black, BL; Hispanic, HI; White, WH), and the primary outcome was surgical approach (laparoscopic vs open). Covariates included age, sex, case complexity, insurance status, income, year of surgery, urbanicity, region, bed size, and teaching status. We examined the univariable association of race with laparoscopic vs open colectomy with chi-square. We used multivariable logistic regression to examine the association of race with procedure type adjusting for covariates. All analyses were done using SAS (version 9.4, Cary, NC) with p < .05 considered significant. Results: 267,865 patients (25,000 BL, 19,685 HI, and 223,180 WH) were identified. Laparoscopy was used in 47% of cases, and this varied significantly by race (BL 44%, HI 49%, WH 47%, p < .0001). After adjusting for covariates, Black patients had significantly lower adjusted odds of undergoing laparoscopic colectomy vs White patients (aOR 0.92, p < 0.0001). Utilization of laparoscopy was similar in Hispanic compared to White patients (aOR 1.00, p = 0.9667). Racial disparity in the adjusted odds of undergoing laparoscopic colectomy was persistent over time. Conclusion: Race was independently associated with the rate of laparoscopic colectomy, with Black patients less likely to receive laparoscopic surgery than White patients. This disparity persisted over a decade. Attention should be paid to increasing the rates of laparoscopic colectomy in under-represented populations in order to optimize surgical care and address racial disparities. abstract_id: PUBMED:12477980 Care of the laparoscopic colectomy patient. Laparoscopic bowel surgery has demonstrated patient care benefits of decreased duration of hospital stay, smaller incisions, lower risk of cardiopulmonary complications, and reduced risk of small-bowel obstruction. Resection of complicated diverticular disease and inflammatory bowel disease can be technically challenging and may be associated with higher conversion rates. The applicability of these techniques to colon cancer is supported by a growing body of evidence that demonstrates similar survival and recurrence rates obtained by open resection and the exaggeration of the risk of port site recurrences. Laparoscopic colorectal surgery has also challenged much of the standard postoperative care plans used for colectomy.
Optimal postoperative care of the laparoscopic colectomy patient requires an appreciation of the faster recovery enjoyed by these patients and the fact that ambulation and dietary advancement need to be accelerated. Coordination between the surgical team and the postoperative care team is essential to obtain all the benefits associated with this new approach to the management of colorectal disease. Answer: Conversion of a laparoscopic colectomy does not appear to adversely affect patient outcomes significantly. According to the study by PUBMED:15540299, conversion from laparoscopic to open colectomy did not result in prolonged operative times, increased morbidity or length of stay, increased direct costs, or unexpected readmissions when compared with similarly complex open surgeries. Furthermore, PUBMED:26768006 found that conversion was not associated with compromised oncologic outcomes and maintained improved short-term outcomes. Additionally, PUBMED:28070033 concluded that laparoscopic right colectomy is superior to open surgery in short-term outcomes without a difference in long-term outcomes, despite a conversion rate of 12-16%. PUBMED:25311007 identified multiple factors associated with conversion from laparoscopic to open colectomy, suggesting that as laparoscopic colectomy becomes more utilized, predictive factors of conversion should be sought via large national cohorts. PUBMED:19789665 reported that laparoscopic colectomy can be safely performed in obese patients with short-term results similar to those obtained in non-obese and pre-obese patients. PUBMED:28636471 demonstrated that outcomes reported from laparoscopic colorectal surgery in tertiary referral centers are reproducible at the district hospital level by a single surgeon with a high operative volume. PUBMED:24996578 described a standardized technique that could make laparoscopic colectomy more feasible for patients with morbid obesity. Lastly, PUBMED:35731300 highlighted that race was independently associated with the rate of laparoscopic colectomy, with Black patients less likely to receive laparoscopic surgery than White patients, indicating a need to address racial disparities in surgical care. Overall, the evidence suggests that conversion from laparoscopic to open colectomy does not necessarily lead to worse patient outcomes and that laparoscopic colectomy should be attempted when appropriate.
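Several of the abstracts above (PUBMED:25311007, PUBMED:35731300) rest on logistic regression and odds ratios. As a rough illustration, an unadjusted odds ratio for laparoscopy in Black versus White patients can be reconstructed from the NIS figures quoted above (25,000 Black and 223,180 White patients, with laparoscopy in 44% and 47%, respectively). The cell counts below are rounded from those percentages, so the result is approximate, and it differs by design from the covariate-adjusted aOR of 0.92 reported in the abstract; this minimal Python sketch is ours, not the study's code:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Approximate cell counts reconstructed from the quoted NIS percentages:
bl_lap, bl_open = round(0.44 * 25_000), round(0.56 * 25_000)    # 11000, 14000
wh_lap, wh_open = round(0.47 * 223_180), round(0.53 * 223_180)  # 104895, 118285

or_, (lo, hi) = odds_ratio(bl_lap, bl_open, wh_lap, wh_open)
print(f"unadjusted OR ~ {or_:.2f} (95% CI ~ {lo:.2f}-{hi:.2f})")
# ~0.89 unadjusted, versus the covariate-adjusted aOR of 0.92 reported in PUBMED:35731300
```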
Instruction: Jasmonate and ethylene dependent defence gene expression and suppression of fungal virulence factors: two essential mechanisms of Fusarium head blight resistance in wheat? Abstracts: abstract_id: PUBMED:22857656 Jasmonate and ethylene dependent defence gene expression and suppression of fungal virulence factors: two essential mechanisms of Fusarium head blight resistance in wheat? Background: Fusarium head blight (FHB) caused by Fusarium species like F. graminearum is a devastating disease of wheat (Triticum aestivum) worldwide. Mycotoxins such as deoxynivalenol produced by the fungus affect plant and animal health, and cause significant reductions of grain yield and quality. Resistant varieties are the only effective way to control this disease, but the molecular events leading to FHB resistance are still poorly understood. Transcriptional profiling was conducted for the winter wheat cultivars Dream (moderately resistant) and Lynx (susceptible). Gene expression at 32 and 72 h after inoculation with Fusarium was used to trace possible defence mechanisms and associated genes. A comparative qPCR was carried out for selected genes to analyse the respective expression patterns in the resistant cultivars Dream and Sumai 3 (Chinese spring wheat). Results: Among 2,169 differentially expressed genes, two putative main defence mechanisms were found in the FHB-resistant Dream cultivar. Both are defined based on their specific mode of resistance. A non-specific mechanism was based on several defence genes probably induced by jasmonate and ethylene signalling, including lipid-transfer protein, thionin, defensin and GDSL-like lipase genes. Additionally, defence-related genes encoding jasmonate-regulated proteins were up-regulated in response to FHB. Another mechanism based on the targeted suppression of essential Fusarium virulence factors comprising proteases and mycotoxins was found to be an essential, induced defence of general relevance in wheat. Moreover, similar inductions upon fungal infection were frequently observed among FHB-responsive genes of both mechanisms in the cultivars Dream and Sumai 3. Conclusions: Especially ABC transporter, UDP-glucosyltransferase, protease and protease inhibitor genes associated with the defence mechanism against fungal virulence factors are apparently active in different resistant genetic backgrounds, according to reports on other wheat cultivars and barley. This was further supported in our qPCR experiments on seven genes originating from this mechanism, which revealed similar activities in the resistant cultivars Dream and Sumai 3. Finally, the combination of early-stage and steady-state induction was associated with resistance, while transcript induction generally occurred later and temporarily in the susceptible cultivars. The respective mechanisms are attractive for advanced studies aiming at new resistance and toxin management strategies. abstract_id: PUBMED:27174222 Suppressed recombination and unique candidate genes in the divergent haplotype encoding Fhb1, a major Fusarium head blight resistance locus in wheat. Key Message: Fine mapping and sequencing revealed 28 genes in the non-recombining haplotype containing Fhb1. Of these, only a GDSL lipase gene shows a pathogen-dependent expression pattern.
Fhb1 is a prominent Fusarium head blight resistance locus of wheat, which has been successfully introgressed into adapted breeding material, where it confers a significant increase in overall resistance to the causal pathogen Fusarium graminearum and the fungal virulence factor and mycotoxin deoxynivalenol. The Fhb1 region has been resolved for the susceptible wheat reference genotype Chinese Spring, yet the causal gene itself has not been identified in resistant cultivars. Here, we report the establishment of a 1 Mb contig embracing Fhb1 in the donor line CM-82036. Sequencing revealed that the region of Fhb1 deviates from the Chinese Spring reference in DNA size and gene content, which explains the suppressed recombination observed at the locus during fine mapping. Differences in gene expression between near-isogenic lines segregating for Fhb1, challenged with F. graminearum or mock-treated, were investigated in a time-course experiment by RNA sequencing. Several candidate genes were identified, including a pathogen-responsive GDSL lipase absent in susceptible lines. The sequence of the Fhb1 region, the resulting list of candidate genes, and near-diagnostic KASP markers for Fhb1 constitute a valuable resource for breeding and further studies aiming to identify the gene(s) responsible for F. graminearum and deoxynivalenol resistance. abstract_id: PUBMED:38409036 Identification and functional characterisation of a locus for target site integration in Fusarium graminearum. Background: Fusarium Head Blight (FHB) is a destructive floral disease of different cereal crops. The Ascomycete fungus Fusarium graminearum (Fg) is one of the main causal agents of FHB in wheat and barley. Studies of the role(s) of Fg genes in virulence include genetic approaches that involve transforming the fungus with different expression cassettes. In several studies in which Fg gene functions were characterised, we have observed that integration of expression cassettes occurred randomly. Random insertion of a cassette may disrupt gene expression and/or protein functions and hence compromise the overall conclusions of the study. Target site integration (TSI) is an approach that consists of identifying a chromosomal region where the cassette can be inserted. The identification of a suitable locus for TSI in Fg would avert the potential risks of ectopic integration. Results: Here, we identified a highly conserved intergenic region on chromosome 1 suitable for TSI. We named this intergenic region TSI locus 1. We developed an efficient cloning vector system based on the Golden Gate method to clone different expression cassettes for use in combination with TSI locus 1. We present evidence that integrations in the TSI locus 1 affect neither fungal virulence nor fungal growth under different stress conditions. Integrations at the TSI locus 1 resulted in the expression of different gene fusions. In addition, the activities of Fg native promoters were not altered by integration into the TSI locus 1. We have developed a bespoke bioinformatic pipeline to analyse the existence of ectopic integrations, cassette truncations and tandem insertions of the cassette that may have occurred during the transformation process. Finally, we established a protocol to study protein secretion in wheat coleoptiles using confocal microscopy and the TSI locus 1.
Conclusion: The TSI locus 1 can be used in Fg and potentially other cereal-infecting Fusarium species for diverse studies including promoter activity analysis, protein secretion, protein localisation studies and gene complementation. The bespoke bioinformatic pipeline developed in this work, together with PCR amplification of the insert, could be an alternative to Southern blotting, the gold standard technique used to identify ectopic integrations, cassette truncations and tandem insertions in fungal transformation. abstract_id: PUBMED:26214711 Transgenic Wheat Expressing a Barley UDP-Glucosyltransferase Detoxifies Deoxynivalenol and Provides High Levels of Resistance to Fusarium graminearum. Fusarium head blight (FHB), mainly caused by Fusarium graminearum, is a devastating disease of wheat that results in economic losses worldwide. During infection, F. graminearum produces trichothecene mycotoxins, including deoxynivalenol (DON), that increase fungal virulence and reduce grain quality. Transgenic wheat lines expressing a barley UDP-glucosyltransferase (HvUGT13248) were developed and evaluated for FHB resistance, DON accumulation, and the ability to metabolize DON to the less toxic DON-3-O-glucoside (D3G). Point-inoculation tests in the greenhouse showed that transgenic wheat carrying HvUGT13248 exhibited significantly higher resistance to disease spread in the spike (type II resistance) compared with nontransformed controls. Two transgenic events displayed complete suppression of disease spread in the spikes. Expression of HvUGT13248 in transgenic wheat rapidly and efficiently conjugated DON to D3G, suggesting that the enzymatic rate of DON detoxification translates to type II resistance. Under field conditions, FHB severity was variable; nonetheless, transgenic events showed significantly less-severe disease phenotypes compared with the nontransformed controls. In addition, a seedling assay demonstrated that the transformed plants had a higher tolerance to DON-induced inhibition of root growth than nontransformed plants. These results demonstrate the utility of detoxifying DON as an FHB control strategy in wheat. abstract_id: PUBMED:23605803 Transcript profiling of the phytotoxic response of wheat to the Fusarium mycotoxin deoxynivalenol. Deoxynivalenol (DON) is a trichothecene mycotoxin commonly produced by Fusarium graminearum and F. culmorum during infection of cereal plants, such as wheat and barley. This toxin is a fungal virulence factor that facilitates the development of Fusarium head blight (FHB) disease. Wheat cultivar (cv.) Remus is susceptible to DON; the toxin causes premature bleaching of spikelets and inhibits root growth. This study used custom-made wheat cDNA arrays to analyse the effect of DON on the transcriptome of heads of the toxin-sensitive wheat cv. Remus at both 4 and 24 h post-toxin treatment. DON-induced transcripts encoded an array of proteins collectively associated with a range of cellular functions, such as metabolite transformation and detoxification, the ubiquitin-proteasome proteolytic pathway, jasmonate biosynthesis and signalling, carbohydrate metabolism, and phenylpropanoid biosynthesis. This study is the first to demonstrate that the fungal virulence factor DON modulates jasmonate biosynthesis and signalling. It also highlights the fact that the toxin-mediated accumulation of transcripts associated with metabolite transformation and detoxification, proteolysis and phenylpropanoid accumulation is not unique to DON-resistant wheat genotypes.
Therefore, the respective encoded proteins are likely part of the general wheat defence against DON. Comparative analysis of the results of this and other studies suggests that it is likely to be the rapidity and magnitude rather than the components of the response that are critical in determining resistance to DON and thus the spread of FHB disease in wheat heads. abstract_id: PUBMED:26713352 Involvement of Fungal Pectin Methylesterase Activity in the Interaction Between Fusarium graminearum and Wheat. The genome of Fusarium graminearum, the causal agent of Fusarium head blight of wheat, contains two putative pectin methylesterase (PME)-encoding genes. However, when grown in liquid culture containing pectin, F. graminearum produces only a single PME, which was purified and identified. Its encoding gene, expressed during wheat spike infection, was disrupted by targeted homologous recombination. Two Δpme mutant strains lacked PME activity but were still able to grow on highly methyl-esterified pectin even though their polygalacturonase (PG) activity showed a reduced capacity to depolymerize this substrate. Enzymatic assays performed with purified F. graminearum PG and PME demonstrated an increase in PG activity in the presence of PME on highly methyl-esterified pectin. The virulence of the mutant strains was tested on Triticum aestivum and Triticum durum spikes, and a significant reduction in the percentage of symptomatic spikelets was observed between 7 and 12 days postinfection compared with wild type, demonstrating that the F. graminearum PME contributes to fungal virulence on wheat by promoting spike colonization in the initial and middle stages of infection. In contrast, transgenic wheat plants with increased levels of pectin methyl esterification did not show any increase in resistance to the Δpme mutant, indicating that the infectivity of the fungus relies only to a certain degree on pectin degradation. abstract_id: PUBMED:36135693 Insights on KP4 Killer Toxin-like Proteins of Fusarium Species in Interspecific Interactions. KP4 killer toxins are secreted proteins that inhibit cell growth and induce cell death in target organisms. In Fusarium graminearum, KP4-like (KP4L) proteins contribute to fungal virulence in wheat seedling rot and are expressed during Fusarium head blight development. However, fungal KP4L proteins are also hypothesized to support fungal antagonism by permeabilizing cell walls of competing fungi to enable penetration of toxic compounds. Here, we report the differential expression patterns of F. graminearum KP4L genes (Fgkp4l-1, -2, -3 and -4) in a competitive interaction, using Trichoderma gamsii as the antagonist. The results from dual cultures indicate that Fgkp4l-3 and Fgkp4l-4 could participate in recognition of the antagonist at a distance, while all Fgkp4l genes were highly activated in the pathogen during the physical interaction of both fungi. Only Fgkp4l-4 was up-regulated during the interaction with T. gamsii in wheat spikes. This suggests the KP4L proteins could participate in supporting F. graminearum interspecific interactions, even in living plant tissues. The distribution of KP4L orthologues within the genus Fusarium revealed that they are more represented in species with a broad host-plant range than in host-specific species. The inferred phylogeny provides evidence that KP4L genes evolved through gene duplications, gene loss and sequence diversification in the genus Fusarium.
abstract_id: PUBMED:29117786 Molecular Characterization and Functional Analysis of PR-1-Like Proteins Identified from the Wheat Head Blight Fungus Fusarium graminearum. The group 1 pathogenesis-related (PR-1) proteins originally identified from plants and their homologs are also found in other eukaryotic kingdoms. Studies on nonplant PR-1-like (PR-1L) proteins have been pursued widely in humans and animals but rarely in filamentous ascomycetes. Here, we report the characterization of four PR-1L proteins identified from the ascomycete fungus Fusarium graminearum, the primary cause of Fusarium head blight of wheat and barley (designated FgPR-1L). Molecular cloning revealed that the four FgPR-1L proteins are all encoded by small open reading frames (612 to 909 bp) that are often interrupted by introns, in contrast to plant PR-1 genes that lack introns. Sequence analysis indicated that all FgPR-1L proteins contain the PR-1-specific three-dimensional structure, and one of them features a C-terminal transmembrane (TM) domain that has not been reported for any stand-alone PR-1 proteins. Transcriptional analysis revealed that the four FgPR-1L genes are expressed in axenic cultures and in planta with different spatial or temporal expression patterns. Phylogenetic analysis indicated that fungal PR-1L proteins fall into three major groups, one of which harbors FgPR-1L-2-related TM-containing proteins from both phytopathogenic and human-pathogenic ascomycetes. Low-temperature sodium dodecyl sulfate polyacrylamide gel electrophoresis and proteolytic assays indicated that the recombinant FgPR-1L-4 protein exists as a monomer and is resistant to subtilisin of the serine protease family. Functional analysis confirmed that deletion of the FgPR-1L-4 gene from the fungal genome results in significantly reduced virulence on susceptible wheat. This study provides the first example that the F. graminearum-wheat interaction involves a pathogen-derived PR-1L protein that affects fungal virulence on the host. abstract_id: PUBMED:34351929 Fusarium graminearum DICER-like-dependent sRNAs are required for the suppression of host immune genes and full virulence. In filamentous fungi, gene silencing by RNA interference (RNAi) shapes many biological processes, including pathogenicity. Recently, fungal small RNAs (sRNAs) have been shown to act as effectors that disrupt gene activity in interacting plant hosts, thereby undermining their defence responses. We show here that the devastating mycotoxin-producing ascomycete Fusarium graminearum (Fg) utilizes DICER-like (DCL)-dependent sRNAs to target defence genes in two Poaceae hosts, barley (Hordeum vulgare, Hv) and Brachypodium distachyon (Bd). We identified 104 Fg-sRNAs with sequence homology to host genes that were repressed during interactions of Fg and Hv, while they accumulated in plants infected by the DCL double knock-out (dKO) mutant PH1-dcl1/2. The strength of target gene expression correlated with the abundance of the corresponding Fg-sRNA. Specifically, the abundance of three tRNA-derived fragments (tRFs) targeting immunity-related Ethylene overproducer 1-like 1 (HvEOL1) and three Poaceae orthologues of Arabidopsis thaliana BRI1-associated receptor kinase 1 (HvBAK1, HvSERK2 and BdSERK2) was dependent on fungal DCL. 
Additionally, RNA-ligase-mediated Rapid Amplification of cDNA Ends (RLM-RACE) identified infection-specific degradation products for the three barley gene transcripts, consistent with the possibility that tRFs contribute to fungal virulence via targeted gene silencing. abstract_id: PUBMED:30465882 Fusarium graminearum KP4-like proteins possess root growth-inhibiting activity against wheat and potentially contribute to fungal virulence in seedling rot. The virally encoded KP4 killer toxin protein was first identified from Ustilago maydis (Um), and its homologues are present in diverse fungi and in one species of moss. No KP4-like (KP4L) proteins have been functionally characterized. Here, we report the identification and functional analysis of four KP4L proteins from Fusarium graminearum (Fg), the primary causal pathogen of Fusarium head blight (FHB), which is also known to associate with seedling rot of wheat. The four FgKP4L proteins (FgKP4L-1, -2, -3 and -4) are encoded by small open reading frames (378-825 bp) located on chromosome 1, with the FgKP4L-1, -2 and -3 genes clustering together. Sequence analysis indicated that FgKP4L proteins have conserved domains predicted to form a three-dimensional alpha/beta-sandwich structure as first reported for UmKP4, with FgKP4L-4 featuring double Kp4 domains. Further analyses revealed that the FgKP4L genes are expressed in vitro under certain stress conditions and are all up-regulated during FHB and/or seedling rot development; that the recombinant FgKP4L-2 protein does not induce cell death in wheat leaves or spikelets but inhibits root growth of young seedlings; and that elimination of the FgKP4L-1/-2/-3 gene cluster from the fungal genome results in reduced virulence in seedling rot but not in FHB. Database searches revealed KP4L proteins from ∼80 fungal species with more than half from human/animal pathogens. Phylogenetic analysis suggested that UmKP4 and the moss KP4L proteins are closely related to those from a zygomycete and Aspergillus, respectively, implying cross-kingdom horizontal gene transfer. Answer: Yes, jasmonate and ethylene-dependent defense gene expression and suppression of fungal virulence factors are two essential mechanisms of Fusarium head blight (FHB) resistance in wheat. In a study of the wheat cultivars Dream (moderately resistant) and Lynx (susceptible), a non-specific defense mechanism was identified that was likely induced by jasmonate and ethylene signaling. This included the up-regulation of genes such as lipid-transfer protein, thionin, defensin, and GDSL-like lipase in response to FHB. Additionally, defense-related genes encoding jasmonate-regulated proteins were also up-regulated in response to the disease (PUBMED:22857656). Furthermore, the suppression of essential Fusarium virulence factors, including proteases and mycotoxins, was found to be an essential induced defense mechanism in wheat. Genes associated with this defense mechanism, such as ABC transporter, UDP-glucosyltransferase, protease, and protease inhibitor genes, were active in different resistant genetic backgrounds, including the cultivars Dream and Sumai 3 (PUBMED:22857656). The importance of these mechanisms is further supported by the identification of a GDSL lipase gene in the Fhb1 locus, a major FHB resistance locus in wheat, which shows a pathogen-dependent expression pattern (PUBMED:27174222).
Additionally, transgenic wheat expressing a barley UDP-glucosyltransferase has been shown to detoxify deoxynivalenol (DON), a mycotoxin produced by F. graminearum, and to exhibit high levels of resistance to the pathogen (PUBMED:26214711). Moreover, transcript profiling of wheat's phytotoxic response to DON revealed the accumulation of transcripts associated with metabolite transformation and detoxification, proteolysis, and phenylpropanoid accumulation, which are not unique to DON-resistant wheat genotypes, suggesting these proteins are part of the general wheat defense against DON (PUBMED:23605803). In summary, both jasmonate and ethylene signaling pathways and the targeted suppression of fungal virulence factors play crucial roles in the resistance of wheat to FHB, and these mechanisms are attractive targets for developing new resistance and toxin management strategies.
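The comparative qPCR referenced in this record (PUBMED:22857656) is the kind of measurement usually evaluated with the 2^(-ΔΔCt) relative-quantification formula. The minimal Python sketch below illustrates that arithmetic only; the abstract does not state which quantification model was actually used, and every Ct value, sample label and fold-change result here is hypothetical.

    # Minimal sketch of 2^(-ddCt) relative quantification for qPCR data.
    # Assumption: a comparative Ct model; the source abstract does not
    # specify one. All Ct values below are invented for illustration.

    def fold_change(ct_target_infected, ct_reference_infected,
                    ct_target_mock, ct_reference_mock):
        """Fold change of a target gene, normalized to a reference gene."""
        d_ct_infected = ct_target_infected - ct_reference_infected  # infected sample
        d_ct_mock = ct_target_mock - ct_reference_mock              # mock control
        dd_ct = d_ct_infected - d_ct_mock
        return 2 ** (-dd_ct)

    # Hypothetical defence gene 32 h after Fusarium inoculation vs. mock,
    # normalized to a housekeeping gene.
    print(round(fold_change(22.1, 18.0, 25.6, 18.2), 1))  # 9.8 (~10-fold induction)

Under this model, a gene showing the early-stage and steady-state induction that the abstract describes for the resistant cultivars Dream and Sumai 3 would yield fold changes above 1 at both sampled time points.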
Instruction: Down or up? Abstracts: abstract_id: PUBMED:22525145 Kidney cancer: radiofrequency ablation of small renal masses--more work required. N/A abstract_id: PUBMED:13362478 Concerning researchers' thinking in schizophrenia research. N/A abstract_id: PUBMED:33764252 Perspectives of rehabilitation professionals on assistive technology provision to young children in South Africa: a national survey. Purpose: Various factors influence the selection of assistive technology for young children within a context with limited resources, such as South Africa. Rehabilitation professionals are required to weigh up different factors as part of their professional reasoning process when making assistive technology (AT) selections. Insight into the perceived influence of different factors may assist in understanding how professionals make decisions about AT in this context. Materials And Methods: An online survey with questions designed using best-worst scaling was distributed to rehabilitation professionals throughout South Africa. Factors influencing assistive technology selection included in the best-worst survey were identified in previous phases of a larger project. A total of n = 451 rehabilitation professionals completed the survey by selecting the factors that were most and least influential on their assistive technology provision. Results: Survey results were obtained by calculating the number of times each factor was selected as most influential across the entire sample, and across all questions, enabling the researchers to sort the items in terms of the frequency of selection. Conclusions: Even though the rehabilitation professionals who participated in the study provide services in a context with limited resources, assessment and factors pertaining to the assistive technology itself were generally perceived to be of greater influence than environmental factors. It is recommended that these factors be reflected in frameworks and models of AT selection. IMPLICATIONS FOR REHABILITATION: The family's ability to support the implementation of AT is an important resource that is perceived to influence the selection of AT by an RP. Insight into the mind-set of professionals who are used to selecting AT within settings with limited resources may provide RPs in well-resourced contexts with guidance on how to do more, with less. RPs should aim to determine child preference and attitude towards AT during the AT selection process. RPs should be aware of their own influence on AT selection. Existing AT selection models should be adapted to clearly reflect the influence of the recommending professional. abstract_id: PUBMED:25102918 Modified tectonic keratoplasty with minimal corneal graft for corneal perforation in severe Stevens-Johnson syndrome: a case series study. Background: Corneal perforation in severe Stevens-Johnson syndrome (SJS) presents great therapeutic difficulties; the corneal transplantation it makes imperative often results in graft failure and repeated recurrence of perforation. The aim of this study was to evaluate the effectiveness of a modified small tectonic keratoplasty (MSTK) with minimal corneal graft in the management of refractory corneal perforation in severe SJS. Methods: Refractory corneal perforations in ten patients (10 eyes) with severe SJS were mended with a minimal corneal patch graft, under the guidance of anterior chamber optical coherence tomography, combined with conjunctival flap covering.
The outcome measures included healing of the corneal perforation, survival of the corneal graft and conjunctival flap, relevant complications, and improvement in visual acuity. Results: Corneal perforation healed, and globe integrity was achieved in all eyes. No immune rejection or graft melting was detected. Retraction of the conjunctival flap occurred in one eye, which was treated with an additional procedure. Visual acuity improved in six eyes (60%), remained unchanged in three eyes (30%) and declined in one eye (10%). Conclusions: The MSTK combined with conjunctival flap covering seems to be effective for refractory corneal perforation in severe SJS. abstract_id: PUBMED:19857504 Male and female odors induce Fos expression in chemically defined neuronal populations. Olfactory information modulates innate and social behaviors in rodents and other species. Studies have shown that the medial nucleus of the amygdala (MEA) and the ventral premammillary nucleus (PMV) are recruited by conspecific odor stimulation. However, the chemical identity of these neurons has not been determined. We exposed sexually inexperienced male rats to female or male odors and assessed Fos immunoreactivity (Fos-ir) in neurons expressing NADPH diaphorase activity (NADPHd, a nitric oxide synthase), neuropeptide urocortin 3, or glutamic acid decarboxylase mRNA (GAD-67, a GABA-synthesizing enzyme) in the MEA and PMV. Male and female odors elicited Fos-ir in the MEA and PMV neurons, but the number of Fos-immunoreactive neurons was higher following female odor exposure, in both nuclei. We found no difference in odor-induced Fos-ir in the MEA and PMV comparing fed and fasted animals. In the MEA, NADPHd neurons colocalized Fos-ir only in response to female odors. In addition, urocortin 3 neurons comprise a distinct population and they do not express Fos-ir after conspecific odor stimulation. We found that 80% of neurons activated by male odors coexpressed GAD-67 mRNA. Following female odor, 50% of Fos neurons coexpressed GAD-67 mRNA. The PMV expresses very little GAD-67, and virtually no colocalization with Fos was observed. We found intense NADPHd activity in PMV neurons, some of which coexpressed Fos-ir after exposure to both odors. The majority of the PMV neurons expressing NADPHd colocalized cocaine- and amphetamine-regulated transcript (CART). Our findings suggest that female and male odors engage distinct neuronal populations in the MEA, thereby inducing contextualized behavioral responses according to olfactory cues. In the PMV, NADPHd/CART neurons respond to male and female odors, suggesting a role in neuroendocrine regulation in response to olfactory cues. abstract_id: PUBMED:26806954 The rise and fall of anaesthesia-related neurotoxicity and the immature developing human brain. N/A abstract_id: PUBMED:15467073 Euthanasia: above ground, below ground. The key to the euthanasia debate lies in how best to regulate what doctors do. Opponents of euthanasia frequently warn of the possible negative consequences of legalising physician assisted suicide and active euthanasia (PAS/AE) while ignoring the covert practice of PAS/AE by doctors and other health professionals. Against the background of survey studies suggesting that anything from 4% to 10% of doctors have intentionally assisted a patient to die, and interview evidence of the unregulated, idiosyncratic nature of underground PAS/AE, this paper assesses three alternatives to the current policy of prohibition.
It argues that although legalisation may never succeed in making euthanasia perfectly safe, legalising PAS/AE may nevertheless be safer than prohibition, and therefore a preferable policy alternative. At a minimum, debate about harm minimisation and the regulation of euthanasia needs to take account of PAS/AE wherever it is practised, both above and below ground. abstract_id: PUBMED:140408 The Salop Skate. N/A abstract_id: PUBMED:23471424 Depictions of insomniacs' behaviors and thoughts in music lyrics. Study Objectives. Studies have found that depictions of unhealthy behaviors (e.g., illicit substance use, violence) are common in popular music lyrics; however, we are unaware of any studies that have specifically analyzed the content of music lyrics for unhealthy sleep-related behaviors. We sought to determine whether behaviors known to perpetuate insomnia symptoms are commonly depicted in the lyrics of popular music. Methods. We searched three online lyrics sites for lyrics with the word "insomnia" in the title and performed content analysis of each of the lyrics. Lyrics were analyzed for the presence/absence of the following perpetuating factors: extending sleep opportunity, using counter fatigue measures, self-medicating, and engaging in rituals or anti-stimulus control behaviors. Results. We analyzed 83 music lyrics. 47% described one or more perpetuating factors. 30% described individual(s) engaging in rituals or anti-stimulus control strategies, 24% described self-medicating, 7% described engaging in counter fatigue measures, and 2% described extending sleep opportunity (e.g., napping during daytime). Conclusion. Maladaptive strategies known to perpetuate insomnia symptoms are common in popular music. Our results suggest that listeners of these sleep-related songs are frequently exposed to lyrics that depict maladaptive coping mechanisms. Additional studies are needed to examine the direct effects of exposing individuals to music lyrics with this content. abstract_id: PUBMED:12012506 Funding cuts compromise sexual health care. N/A Answer: The provided abstracts do not contain sufficient information to determine whether the phrase "Down or up?" refers to a specific trend, direction, or outcome in a particular context. The abstracts cover a variety of unrelated topics, including radiofrequency ablation in kidney cancer, perspectives on assistive technology provision, tectonic keratoplasty for corneal perforation, olfactory-induced neuronal activation, euthanasia regulation, and depictions of insomniac behavior in music lyrics. Without additional context or information, it is not possible to accurately answer the question "Down or up?" based on the abstracts provided.
Instruction: Is there any effect of insulin resistance on the male reproductive system? Abstracts: abstract_id: PUBMED:30173427 Correlation of insulin resistance with male reproductive hormone levels and semen parameters. Objective: To investigate the influence of insulin resistance on male reproductive hormones and semen quality. Methods: Using the electrochemiluminescence method, we measured the levels of follicle stimulating hormone (FSH), luteinizing hormone (LH), prolactin (PRL), estradiol (E2) and testosterone (T) in the serum of 83 infertile males. We detected the levels of fasting plasma glucose (FPG) and fasting insulin (FINS) and calculated the insulin resistance index presented as homeostasis model assessment of insulin resistance (HOMA-IR). Based on HOMA-IR, we divided the patients into three tertile groups, T1 (HOMA-IR 0.36-0.55, n = 27), T2 (HOMA-IR 0.56-0.80, n = 28) and T3 (HOMA-IR 0.81-1.97, n = 28), obtained their semen parameters by computer-assisted semen analysis (CASA) and analyzed the correlation of HOMA-IR with male reproductive hormone levels and semen parameters. Results: With the elevation of HOMA-IR, the patients in the T1, T2 and T3 groups showed significant decreases in the serum T level ([14.26 ± 4.27] vs [14.75 ± 5.00] vs [11.62 ± 3.68] nmol/L, P <0.05) and the percentage of progressively motile sperm (PMS) ([51.04 ± 15.10]% vs [48.04 ± 16.24]% vs [37.84 ± 18.23]%, P <0.05). HOMA-IR was correlated negatively with the serum T level (r = -0.333, P = 0.002), semen volume (r = -0.23, P = 0.029) and PMS (r = -0.27, P = 0.015), and so was FINS with the serum T level (r = -0.327, P = 0.003) and PMS (r = -0.315, P = 0.004), while the semen volume was correlated positively with the levels of serum T (r = 0.221, P = 0.048) and FSH (r = 0.222, P = 0.047). Multivariate linear regression analysis showed that HOMA-IR was an independent influencing factor for PMS and the body mass index (BMI) was that for the semen volume and total sperm count. Conclusions: Insulin resistance may reduce semen quality by changing the levels of male reproductive hormones. abstract_id: PUBMED:34410936 Diabetes Mellitus Causes Male Reproductive Dysfunction: A Review of the Evidence and Mechanisms. The metabolic disorders caused by diabetes can lead to various complications, including dysfunction of the male reproductive system. In patients with diabetes, long-term hyperglycemia results in diabetic vascular neuropathy, oxidative stress injury, abnormal zinc metabolism, and insulin resistance syndrome. In addition, insulin deficiency and resistance in diabetes can damage the hypothalamus, pituitary gland, gonads, and perigonads. This can reduce the secretion of sex hormones including gonadotropin-releasing hormone, follicle stimulating hormone, luteinizing hormone, and testosterone, and can lead to testicular atrophy, stromal cell atrophy, seminiferous tubule damage, spermatogenic cell damage, and other structural injuries of the male reproductive organs. These actions can affect male fertility and reproductive health. Herein, we review studies that report a causative role of diabetes in male reproductive function. We also discuss the evidence-based mechanisms involved in the processes of diabetes-related male sexual and reproductive dysfunction as well as the progress in treatment. abstract_id: PUBMED:33017586 A closer look at the role of insulin for the regulation of male reproductive function.
While insulin has been shown to have a considerable influence on the reproductive system, there are various unanswered questions regarding its precise sites, mechanisms of action, and roles in the development and functioning of the adult male reproductive system. Apart from its effects on glucose levels, insulin has an important role in the reproductive system directly, by binding to insulin and IGF receptors in the brain and testis. To date, however, the effect of insulin or its alterations on the blood-testis barrier, an important regulator of normal spermatogenesis and fertility, has not yet been studied. This review focuses on experimental and clinical studies to describe the mechanisms by which insulin affects the hypothalamic-pituitary-gonadal (HPG) axis, testicular cells, spermatozoa, and sexual behavior. Moreover, we discuss the mechanism and impact of insulin changes in type 1 (insulin deficiency with preserved or even increased sensitivity) and type 2 (insulin resistance with increased insulin levels in the early stages of disease) diabetes and obesity on the male reproductive tract. abstract_id: PUBMED:29699342 Insulin resistance/hyperinsulinemia and reproductive disorders in infertile women. Obesity, which disturbs lipid and glucose metabolism, is a recent medical concern. It threatens human health and also has adverse effects on reproductive functions by causing insulin resistance/hyperinsulinemia, especially in women with polycystic ovary syndrome (PCOS). To prevent these adverse effects in PCOS patients, it is important to improve their lifestyles through exercise and proper diet. The relationship between insulin resistance/hyperinsulinemia and reproductive disorders should be understood as fully as possible in order to provide effective treatment. It is well known that insulin resistance and compensatory hyperinsulinemia can be triggered by obesity with visceral fat accumulation. Hyperinsulinemia affects granulosa cells in small follicles and theca cells. This condition induces an early response to luteinizing hormone in granulosa cells of small follicles and causes premature differentiation of these cells, which eventually results in anovulation. For improvement of anovulation caused by hyperinsulinemia, insulin-sensitizing agents (biguanide and thiazolidinedione derivatives) are useful. Hyperinsulinemia may adversely affect endometrial function and environment, and disturb implantation. Treatment with an insulin-sensitizing agent (metformin) improves the levels of glycodelin, insulin-like growth factor binding protein 1, and blood flow in spiral arteries during the peri-implantation period. It supports endometrial function, improves the endometrial environment, and facilitates embryo implantation. The rate of early pregnancy loss during the first trimester is 30-50% in women with PCOS, which is threefold higher than for normal women. Metformin treatment improves the levels of insulin, the homeostasis model assessment for insulin resistance, and plasminogen activator inhibitor activity, and decreases early pregnancy loss. It goes without saying that lifestyle change is fundamental for improving reproductive performance in addition to treatment with insulin-sensitizing agents. abstract_id: PUBMED:28323978 Assisted Reproductive Technologies Predispose to Insulin Resistance and Obesity in Male Mice Challenged With a High-Fat Diet.
Assisted reproductive technology (ART) alters glucose homeostasis in mice and humans, but the underlying mechanisms are incompletely understood. ART induces endothelial dysfunction and arterial hypertension by epigenetic alteration of the endothelial nitric oxide synthase (eNOS) gene. In eNOS-deficient mice, insulin resistance is related to impaired insulin stimulation of muscle blood flow and substrate delivery and defective intrinsic skeletal muscle glucose uptake. We therefore assessed glucose tolerance, insulin sensitivity (euglycemic clamp), insulin stimulation of muscle blood flow in vivo, and muscle glucose uptake in vitro in male ART and control mice fed a normal chow (NC) or challenged with a high-fat diet (HFD) during 8 weeks. Glucose tolerance and insulin sensitivity were similar in NC-fed animals. When challenged with an HFD, however, ART mice developed exaggerated obesity, fasting hyperinsulinemia and hyperglycemia, and a 20% lower insulin-stimulated glucose utilization than did control mice (steady-state glucose infusion rate (GIR), 51.3 ± 7.3 vs 64.0 ± 10.8 mg/kg/min, P = 0.012). ART-induced insulin resistance was associated with defective insulin stimulation of muscle blood flow, whereas intrinsic skeletal muscle glucose uptake was normal. In conclusion, ART-induced endothelial dysfunction, when challenged with a metabolic stress, facilitates glucose intolerance and insulin resistance. Similar mechanisms may contribute to ART-induced alterations of the metabolic phenotype in humans. abstract_id: PUBMED:31890674 Molecular effects of Moringa leaf extract on insulin resistance and reproductive function in hyperinsulinemic male rats. Background: Many studies have reported that insulin resistance impairs the antioxidant defense system and causes male infertility. Moringa oleifera is a medicinal plant that has been employed for the treatment of many disorders. It controls the levels of glucose and manages male sexual disorders. However, whether its extracts can reverse insulin resistance-linked metabolic alterations remains unknown. Therefore, the current study investigated the potential of the aqueous leaf extract from Moringa oleifera to reverse insulin resistance and testicular disorders in rats. Methods: Rats were fed either a chow diet (control group) or a high fructose diet (HFD, to induce a state of insulin resistance), in addition to a group of rats fed HFD and treated with Moringa (300 mg/kg) for 4 weeks. Results: Moringa reversed hepatic insulin insensitivity, and this was linked to up-regulation of genes involved in insulin receptors and glucose uptake in the liver. These results were associated with normalization of serum insulin levels and of insulin sensitivity. In addition, it improved the serum testosterone level and the gene expression of the testicular steroidogenic acute regulatory protein (StAR) and 3β-hydroxysteroid dehydrogenase (3β-HSD). Conclusion: Taken together, our findings demonstrate that Moringa reversed HFD-induced insulin resistance and improved testicular function. abstract_id: PUBMED:33158485 Association between reproductive years and insulin resistance in middle-aged and older women: A 10-year prospective cohort study. Objectives: We hypothesized that reproductive years, a marker of total estrogen exposure, may play an important role in insulin resistance.
Study Design: A total of 3327 middle-aged and older women (age range 40-69 years) from the Korean Genome and Epidemiology Study were included in this large prospective cohort study with a mean follow-up of 10.8 years. Main Outcome Measures: Insulin resistance and sensitivity were calculated using the homeostatic model assessment of insulin resistance (HOMA-IR) and quantitative insulin sensitivity check index (QUICKI). A linear mixed model for a repeated-measures covariance pattern with unstructured covariance within participants was used to assess longitudinal associations between baseline reproductive years and subsequent changes in HOMA-IR and QUICKI. Cox proportional hazards regression was used to estimate hazard ratios (HRs) and 95 % confidence intervals (CIs) for new-onset insulin resistance according to quartiles of reproductive years. Results: Changes in HOMA-IR were significantly greater in Q1 (fewest reproductive years) than in Q4 (most reproductive years) (beta[SE] = 0.038[0.016]; p-value = 0.022), while changes in QUICKI were significantly smaller in Q1 than in Q4 (beta[SE] = -0.001[0.000]; p-value = 0.048) after adjusting for possible confounders over time. Compared with Q1, HRs (95 % CIs) for the incidence of new-onset insulin resistance were 0.807 (0.654-0.994) for Q2, 0.793 (0.645-0.974) for Q3, and 0.770 (0.622-0.953) for Q4 after adjusting for possible confounders. Conclusion: A short reproductive period is associated with elevated HOMA-IR and decreased QUICKI values over time. The lowest quartile of reproductive years was significantly associated with a higher risk of new-onset insulin resistance. abstract_id: PUBMED:30360409 Effect of Cistanche Tubulosa Extracts on Male Reproductive Function in Streptozotocin-Nicotinamide-Induced Diabetic Rats. Diabetes is a chronic disorder characterized by hyperglycemia due to decreased levels of insulin or the inability of tissues to use it effectively. Infertility is known as a major outcome of diabetes and affects the male reproductive system by causing sperm impairment and gonadal dysfunction. Cistanche tubulosa is a parasitic plant which has the capacity to improve memory, immunity, and sexual ability, reduce impotence, and minimize constipation. This study focused on the investigation of the anti-inflammatory and protective effects of echinacoside (ECH) in Cistanche tubulosa extract (CTE) on the male reproductive system of diabetic rats. The antioxidant, anti-inflammatory, and protective effects of CTE were evaluated by both in vitro and in vivo methods. The in vitro results show that ECH inhibited reactive oxygen species (ROS) production and improved StAR, CYP11A1, CYP17A1, and HSD17β3 protein expression. The in vivo analysis was carried out with three doses of echinacoside (ECH) (80, 160, and 320 mg/kg) in CTE. In total, 0.571 mg/kg of rosiglitazone (RSG) was administered as a positive control. Diabetes was induced by streptozotocin (STZ) (65 mg/kg) and nicotinamide (230 mg/kg) in combination with a high-fat diet (45%). The in vivo studies confirmed that ECH improved blood sugar levels, insulin resistance, leptin resistance, and lipid peroxidation. It restored kisspeptin 1 (KiSS1), G protein-coupled receptor GPR 54, suppressor of cytokine signaling 3 (SOCS-3), and sirtuin 1 (SIRT1) messenger ribonucleic acid (mRNA) expression in the hypothalamus and restored sex hormone levels. Thus, this study confirmed the antioxidant, anti-inflammatory, and steroidogenic effects of CTE.
abstract_id: PUBMED:30365256 Effect of Acupuncture on Reproductive Endocrine and p-Akt Protein Expression in Polycystic Ovary Syndrome Rats with Insulin Resistance. Objective: To observe the effect of manual acupuncture intervention on reproductive endocrine function and p-Akt protein expression in polycystic ovary syndrome with insulin resistance (PCOS-IR) rats, so as to reveal its mechanism in relieving PCOS-IR. Methods: Female Wistar rats were randomly divided into control, model, medication (Metformin) and acupuncture groups (n=8 in each). The PCOS-IR model was established by subcutaneous injection of dehydroepiandrosterone (DHEA, 6 mg/100 g), once daily for 20 days. Rats of the medication group were treated by gavage of Metformin solution (100 mg/kg), once daily for 28 days. For rats of the acupuncture group, manual acupuncture stimulation (lifting-thrusting and rotating of needles) was applied to the bilateral "Shenshu" (BL 23), "Yishu" (EX-B 3), "Zigong" (EX-CA 1), "Sanyinjiao" (SP 6), "Fenglong" (ST 40) and "Guanyuan" (CV 4) for 5 min, once a day for 28 days. H.E. staining was used to observe the histopathologic changes of the ovary tissue. Serum levels of follicle-stimulating hormone (FSH), estradiol (E2), luteinizing hormone (LH), testosterone (T) and fasting insulin (FINS) were measured by ELISA. Fasting plasma glucose (FPG) was measured by blood glucose meter, and homeostasis model assessment-insulin resistance index (HOMA-IR) was calculated. Western blot was used to detect the expression of Akt and p-Akt proteins in ovarian tissues. Results: Following modeling, the ovaries showed polycystic changes and an increase in atretic follicles of larger diameter, with no corpus luteum, in the model group. After the intervention, there were various types of follicles and a small amount of corpus luteum at different stages of development in both acupuncture and medication groups. In comparison with the control group, serum FSH and E2 levels as well as ovarian Akt and p-Akt protein expression were considerably decreased (P<0.05, P<0.01), and serum LH, T and FINS contents and HOMA-IR were significantly increased in the model group (P<0.01). Following the intervention, the decreased serum FSH and E2 levels as well as ovarian Akt and p-Akt protein expression, and the increased serum LH, T and FINS contents and HOMA-IR were remarkably reversed in both acupuncture and medication groups (P<0.05, P<0.01), without significant differences between the two groups in the levels of those serum and ovary indexes (P>0.05). Conclusion: Acupuncture intervention can regulate the levels of reproductive endocrine hormones and reduce insulin resistance in PCOS-IR rats, which may be related to its effect in up-regulating the expression of p-Akt protein in the ovaries. abstract_id: PUBMED:31088279 Insulin-insensitivity of male genitalia maintains reproductive success in Drosophila. For most arthropod species, male genital size is relatively implastic in response to variation in developmental nutrition, such that the genitals in large well-fed males are similar in size to those in small poorly-fed males. In Drosophila melanogaster, reduced nutritional plasticity of the male genitalia is a consequence of low insulin sensitivity through a tissue-specific reduction in the expression of FOXO, a negative growth regulator.
Despite an understanding of the proximate developmental mechanisms regulating organ size, the ultimate evolutionary mechanisms that may have led to reduced FOXO expression in the genitalia have not been fully elucidated. Here we show that restoring FOXO activity in the developing genitalia reduces male genital size and decreases various aspects of male reproductive success. These data support the hypothesis that sexual selection has acted on the male genitalia to limit their nutritional plasticity through a reduction in FOXO expression, linking proximate with ultimate mechanisms of genital evolution. Answer: Yes, there is evidence suggesting that insulin resistance can have an effect on the male reproductive system. Insulin resistance has been associated with changes in male reproductive hormones and semen quality. A study found that with the elevation of the homeostasis model assessment of insulin resistance (HOMA-IR), infertile males showed significant decreases in serum testosterone levels and the percentage of progressively motile sperm (PMS). HOMA-IR was correlated negatively with serum testosterone levels, semen volume, and PMS, indicating that insulin resistance may reduce semen quality by altering levels of male reproductive hormones (PUBMED:30173427). Diabetes mellitus, which can lead to insulin resistance, has been reported to cause dysfunction of the male reproductive system. Long-term hyperglycemia in diabetic patients can result in diabetic vascular neuropathy, oxidative stress injury, abnormal zinc metabolism, and insulin resistance syndrome, which can reduce the secretion of sex hormones and lead to structural injuries of the male reproductive organs, affecting male fertility and reproductive health (PUBMED:34410936). Insulin has a role in the reproductive system by binding to insulin and IGF receptors in the brain and testis. However, the effect of insulin or its alterations on the blood-testis barrier, which is an important regulator of normal spermatogenesis and fertility, has not been fully studied (PUBMED:33017586). Furthermore, insulin resistance and compensatory hyperinsulinemia, often triggered by obesity, can affect reproductive functions. In women with polycystic ovary syndrome (PCOS), hyperinsulinemia can induce an early response to luteinizing hormone in granulosa cells of small follicles, causing premature differentiation and anovulation. Insulin-sensitizing agents have been used to improve anovulation due to hyperinsulinemia (PUBMED:29699342). In summary, insulin resistance is associated with negative impacts on the male reproductive system, including alterations in reproductive hormone levels and semen quality, and it may contribute to infertility.
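Several abstracts in this record quantify insulin resistance with HOMA-IR (PUBMED:30173427, PUBMED:30365256) and insulin sensitivity with QUICKI (PUBMED:33158485). The minimal Python sketch below implements the standard published formulas for both indices and the tertile grouping reported in PUBMED:30173427; the unit conventions and the patient values are assumptions for illustration, since the abstracts themselves do not state them.

    import math

    # Standard index formulas with conventional unit choices; the abstracts
    # do not specify units, so these are assumptions, not study details.

    def homa_ir(glucose_mmol_l, insulin_uU_ml):
        """HOMA-IR: fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5."""
        return glucose_mmol_l * insulin_uU_ml / 22.5

    def quicki(glucose_mg_dl, insulin_uU_ml):
        """QUICKI: 1 / (log10 insulin (uU/mL) + log10 glucose (mg/dL))."""
        return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

    def homa_tertile(value):
        """Tertile bands as reported in PUBMED:30173427."""
        if value <= 0.55:
            return "T1 (0.36-0.55)"
        if value <= 0.80:
            return "T2 (0.56-0.80)"
        return "T3 (0.81-1.97)"

    # Hypothetical patient: glucose 4.8 mmol/L (86.4 mg/dL), insulin 3.2 uU/mL.
    h = homa_ir(4.8, 3.2)
    print(round(h, 2), homa_tertile(h))   # 0.68 T2 (0.56-0.80)
    print(round(quicki(86.4, 3.2), 2))    # 0.41

Higher HOMA-IR and lower QUICKI both indicate greater insulin resistance, which is the direction of change the abstracts associate with lower serum testosterone and reduced progressive sperm motility.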
Instruction: Is measurement of basal levels of serum pepsinogen II useful in proving the eradication of Helicobacter pylori by treatment? Abstracts: abstract_id: PUBMED:7500674 Is measurement of basal levels of serum pepsinogen II useful in proving the eradication of Helicobacter pylori by treatment? Background: The aim of this study was to demonstrate the influence of the eradication of Helicobacter pylori on the basal concentrations of serum pepsinogen II in patients with duodenal ulcer. Methods: Thirty-two patients with active duodenal ulcer were prospectively studied. A triple therapy was used, consisting of bismuth, metronidazole and tetracycline. At the time of initial endoscopy and in those performed 5 months later on completion of the treatment, biopsies of the gastric mucosa were taken for histologic and microbiologic studies, and the basal concentrations of serum pepsinogen II were also determined. Results: The eradication of H. pylori was accompanied by a significant pathologic improvement (p < 0.001) in both the antrum and the gastric body. On eradication of H. pylori, the basal pepsinogen II value (mean +/- SD) decreased from 9.2 +/- 2.7 ng/ml to 6.4 +/- 1.7 ng/ml after treatment (p < 0.001). However, when eradication was not achieved, these values increased (11.8 ng/ml) with respect to the initial determination (9.3 ng/ml) (p < 0.05). The area under the ROC curve was 0.99 (SE 0.01) with a sensitivity of 92% and specificity of 100% with respect to the diagnosis of infection eradication (cut-off point for the decrease in pepsinogen levels: 0). Conclusions: The eradication of Helicobacter pylori in patients with duodenal ulcer is associated with a significant decrease in the basal concentrations of serum pepsinogen II measured 5 months after completion of treatment. The verification of this decrease constitutes a useful, inexpensive non-invasive method to prove the eradication of H. pylori with treatment. abstract_id: PUBMED:11247290 Basal concentrations of gastrin and pepsinogen I and II in gastric ulcer: influence of Helicobacter pylori infection and usefulness in the control of the eradication. Aim: To study the influence of Helicobacter pylori eradication on basal gastrin and pepsinogen I and II levels in patients with gastric ulcer over a 1-year follow-up period, and to assess the usefulness of these values in confirming H. pylori eradication after treatment. Methods: Fifty-six patients with gastric ulcer and H. pylori infection were prospectively studied. At the beginning of the study, endoscopy with biopsies for histologic examination and urease testing was carried out, as were the 13C-urea breath test and blood sampling for determination of gastrin and pepsinogen I and II values by radioimmunoassay and serology. Histologic study, 13C-urea breath test and laboratory determinations were repeated at months 1, 6 and 12 after completion of eradication treatment. Results: H. pylori infection was eradicated in 82.1% of patients. In patients with successful H. pylori eradication, the initial mean gastrin value was 75.5 +/- 39.1 pg/ml, while at 1 month after treatment this value decreased to 49.2 +/- 21 pg/ml (p < 0.0001). No further reductions were noted. Initial pepsinogen I and II values were 104 +/- 58 and 15.8 +/- 10 ng/ml, respectively, whereas at month 1 after treatment these values were 77 +/- 42 and 7.3 +/- 4 ng/ml, respectively (p < 0.0001) and were 72 +/- 41 and 6.7 +/- 3 ng/ml respectively at month 6 (p < 0.01); no further variations were observed thereafter.
The area under the ROC curve for revealing eradication through reductions in hormonal values was 0.70 for gastrin, 0.78 for pepsinogen I, 0.93 for pepsinogen II and 0.92 for the pepsinogen I/II ratio. At months 6 and 12 after treatment completion, differences in mean gastrin and pepsinogen I and II values between the patients with normal histologic findings and those with chronic gastritis were significant (p < 0.05). Conclusions: a) H. pylori eradication is associated with an early fall in basal gastrin values and a progressive decrease in basal pepsinogen I and II values. b) In patients with gastric ulcer, determination of the decrease in basal pepsinogen II levels is a useful and early non-invasive method for confirming eradication. c) Determination of gastrin and pepsinogen I and II values may be useful for assessing improvement in gastritis 6 months after treatment completion. abstract_id: PUBMED:17559360 Helicobacter pylori eradication improves gastric histology and decreases serum gastrin, pepsinogen I and pepsinogen II levels in patients with duodenal ulcer. Background And Aim: The aim of this study was to assess the gastric histopathology and serum gastrin-17 and pepsinogen profiles in patients with duodenal ulcer before and after Helicobacter pylori eradication in a population with a very high prevalence of H. pylori. At the same time we assessed the role of H. pylori density on these variables. Methods: Eighty Caucasian patients with H. pylori-associated duodenal ulcer before treatment and 1 year after randomized eradication were studied. Among patients with unsuccessful eradication, two groups were distinguished according to the data obtained after treatment: the group with negative rapid urease test and decreased bacterial density according to morphological score (partial elimination group), and the group with positive rapid urease test and high bacterial density (failed eradication group). Results: One year after successful eradication, serum levels of gastrin-17, pepsinogen I and pepsinogen II decreased. Similar changes of serum pepsinogen I and pepsinogen II levels were observed in patients with partial elimination of H. pylori infection. In the group with successful eradication, inflammation, activity, atrophy and number of lymphoid follicles in the antral mucosa fell. In the group with partial elimination, antral mucosa activity and H. pylori score were reduced. Other morphological changes were statistically non-significant. Conclusion: After successful eradication, patients with duodenal ulcer show improvement in the morphological and functional characteristics of the gastric mucosa. abstract_id: PUBMED:25060199 Pepsinogen testing for evaluation of the success of Helicobacter pylori eradication at 4 weeks after completion of therapy. Background And Objective: Pepsinogen levels in plasma are increased by inflammation in the gastric mucosa, including inflammation resulting from Helicobacter pylori infection. A decrease in pepsinogen II level has been suggested as a reliable marker to confirm the successful eradication of infection. The aim of our study was to evaluate the potential role of pepsinogens I and II, gastrin-17 and H. pylori antibodies in confirming successful eradication. Material And Methods: Altogether 42 patients (25 women, 17 men), mean age 45 years (range 23-74), were enrolled. Pepsinogens I and II, gastrin-17 and H.
pylori IgG antibodies were measured in plasma samples using an ELISA test (Biohit Oyj, Finland) before eradication therapy and 4 weeks after completing the treatment. The success of eradication was determined by a urea breath test. Results: Eradication was successful in 31 patients (74%) and unsuccessful in 11 patients (26%). Pepsinogen II decreased significantly in both the successful (P=0.029) and unsuccessful (P=0.042) eradication groups. Pepsinogen I decreased significantly in the successful (P=0.025) but not the unsuccessful (P=0.29) eradication group. The pepsinogen I/II ratio increased in the successful eradication group (P=0.0018) but not in the group in which treatment failed (P=0.12). There were no differences in gastrin-17 or H. pylori antibody values. Conclusions: A decrease in pepsinogen II levels cannot be used as a reliable marker for the successful eradication of H. pylori 4 weeks after the completion of treatment. The increase in pepsinogen I/II ratio reflects differences in pepsinogen production following the eradication irrespective of improvement in atrophy. abstract_id: PUBMED:10036944 Investigation of the usefulness of serum Helicobacter pylori antibody and the serum pepsinogen I/II ratio as markers for judging the outcome of eradication therapy. To clarify whether the serum Helicobacter pylori (H. pylori) antibody and the serum pepsinogen I/II ratio are useful markers for judging the outcome of eradication therapy, we followed up 84 cases who received eradication therapy, comparing these markers with culture and histology (Carnoy's fixation and immunostaining using anti-H. pylori antibody, MIA method). Successful eradication was recognized in 45 of 84 cases (successful group), and the remaining 39 cases were unsuccessful (unsuccessful group). Titers of serum H. pylori antibody declined gradually in the successful group; in the unsuccessful group, by contrast, they did not decline consistently and often rose again. The difference in the rate of change of titers between the two groups became statistically clear from 6 months after eradication therapy (p < 0.05). With a cut-off set at 60% of the pre-eradication titer, sensitivity, specificity, and accuracy were 86.2%, 77.7%, and 84.2%, respectively, at 6 months after eradication therapy. The serum pepsinogen I/II ratio increased regardless of whether eradication was successful, but its rate of change differed between the two groups at 1 and 3 months after eradication therapy (p < 0.05). Serum H. pylori antibody and the serum pepsinogen I/II ratio could serve as markers for judging the outcome of eradication therapy; in particular, the rate of change of the serum H. pylori antibody is useful for long-term monitoring of H. pylori infection. abstract_id: PUBMED:34997989 Pepsinogen II in gastritis and Helicobacter pylori infection. Background And Aim: In the gastric mucosa, pepsinogen II (PgII) is produced/secreted by glands in the mucus-secreting antral and cardia compartments, but also by the chief cells and the oxyntic glands. Increasing PgII serum levels are associated with the whole spectrum of gastric inflammatory diseases, including gastritis induced by Helicobacter pylori (H. pylori). This review critically addresses the clinical value of PgII serology for assessing gastric mucosal inflammation, and as a marker of H. pylori status, in both H. pylori-positive patients and after eradication therapy.
Results: A search in PubMed/Scopus records yielded 39 out of 1190 published scientific studies meeting the selection criteria for this study. In the studies considered, PgII levels were significantly associated with non-atrophic gastric inflammatory lesions (p-values: 0.025-0.0001). H. pylori-positive patients had significantly higher PgII levels than H. pylori-negative individuals (p-values: 0.05-0.0001). While a significant drop in serum PgII levels is consistently reported in H. pylori-eradicated patients (p-values: from 0.05 to 0.0001), inconsistencies in the related negative and positive predictive values significantly lower the clinical reliability of PgII testing by comparison with other available non-invasive tests. Conclusions: PgII serology may provide clinically useful information on gastric inflammatory diseases, particularly if they are non-atrophic. PgII serology is inconsistent, however, for the purposes of distinguishing patients whose H. pylori eradication therapy is successful from those who remain infected. abstract_id: PUBMED:31702966 Analysis of factors associated with recovery of the serum pepsinogen ratio after Helicobacter pylori eradication: a long-term follow-up study in Korea. Objective: Serum levels of pepsinogen (PG) are related to Helicobacter pylori-induced inflammation of the gastric mucosa. This study aimed to examine the influence of H. pylori eradication on serum PG, analyze its associated factors, and evaluate the long-term outcomes. Methods: H. pylori-positive patients who underwent gastroscopy and serum PG measurement were enrolled in a single academic hospital. After H. pylori eradication, the measurement of serum PG level was performed. Recovery of serum PG I/II ratio was defined as a PG I/II ratio after eradication of >3.0 in patients with a PG I/II ratio ≤ 3.0 before eradication. Follow-up involved serum PG measurement and gastroscopy with a rapid urease test annually. Results: In all, 327 patients were eligible for study inclusion. Compared to those before H. pylori eradication, serum PG I (74.9 vs. 44.3 ng/mL, p < .001) and PG II (25.4 vs. 9.1 ng/mL, p < .001) levels significantly decreased after successful eradication. In addition, there was a significant increase in serum PG I/II ratio after eradication (3.07 vs. 4.98, p < .001). In multivariate analyses, the following were independently associated with failed recovery of serum PG I/II ratio despite successful eradication: age ≥ 60 years (odds ratio [OR] = 0.231, 95% confidence interval [CI] = 0.084-0.629, p = .004) and severe gastric atrophy (OR = 0.156, 95% CI = 0.055-0.440, p < .001). Conclusions: Recovery of serum PG I/II ratio after H. pylori eradication may be achieved in H. pylori-infected patients aged <60 years without severe gastric atrophy. abstract_id: PUBMED:9396326 Changes in serum anti-Helicobacter pylori IgG antibody, pepsinogen I, and pepsinogen II after eradication therapy of Helicobacter pylori. To investigate the changes in serum anti-Helicobacter pylori IgG antibody (HP Ab), pepsinogen I (PG I), pepsinogen II (PG II), and pepsinogen I/II ratio (PG I/II) after eradication therapy of Helicobacter pylori (HP), we studied 78 patients with HP-positive peptic diseases. They received combination therapy (proton pump inhibitor + amoxicillin: n = 17, proton pump inhibitor + amoxicillin + clarithromycin: n = 61). In the 68 patients in whom HP was eradicated, HP Ab, PG I, and PG II decreased and PG I/II increased significantly after eradication.
Especially, the decrease in PG II and the increase in PG I/II were rapid and remarkable, found 2 months after the beginning of eradication therapy, and then continued. On the other hand, in the patients in whom HP was not eradicated, HP Ab and PG I/II did not change significantly, while PG I and PG II temporarily increased at the end of administration of proton pump inhibitor. In conclusion, it seems that the measurement of PG II and PG I/II is useful for the early detection of HP eradication. abstract_id: PUBMED:28854276 A significant increase in the pepsinogen I/II ratio is a reliable biomarker for successful Helicobacter pylori eradication. Background: Helicobacter pylori (H. pylori) eradication is usually assessed using the 13C-urea breath test (UBT), anti-H. pylori antibody and the H. pylori stool antigen test. However, a few reports have used pepsinogen (PG), in particular, the percentage change in the PG I/II ratio. Here, we evaluated the usefulness of the percentage changes in serum PG I/II ratios for determining the success of eradication therapy for H. pylori. Materials And Methods: In total, 650 patients received eradication therapy from October 2008 to March 2013 in our Cancer Institute Hospital. We evaluated the relationship between H. pylori eradication and percentage changes in serum PG I/II ratios before and 3 months after treatment with CLEIA® (FUJIREBIO Inc, Tokyo, Japan). The gold standard of H. pylori eradication was defined as negative by the UBT performed 3 months after completion of eradication treatment. Cut-off values for percentage changes in serum PG I/II ratios were set as +40, +25 and +10% when the serum PG I/II ratio before treatment was below 3.0, above 3.0 but below 5.0 and 5.0 or above, respectively. Results: Serum PG I and PG II levels were measured in 562 patients with H. pylori infection before and after eradication therapy. Eradication of H. pylori was achieved in 433 patients studied (77.0%). The ratios of first, second, third-line and penicillin allergy eradication treatment were 73.8% (317/429), 88.3% (99/112), 75% (12/16) and 100% (5/5), respectively. An increasing percentage in the serum levels of the PG I/II ratios after treatment compared with the values before treatment clearly distinguished success from failure of eradication (108.2±57.2 vs. 6.8±30.7, p<0.05). Using the above cut-off values, the sensitivity, specificity and validity for determination of H. pylori were 93.1, 93.8 and 93.2%, respectively. Conclusion: In conclusion, the percentage changes in serum PG I/II ratios are useful as evaluation criteria for assessing the success of eradication therapy for H. pylori. abstract_id: PUBMED:10102232 Basal and stimulated gastrin and pepsinogen levels after eradication of Helicobacter pylori: a 1-year follow-up study. Aim: A decrease in gastrin and pepsinogen (PG) levels 1 month after Helicobacter pylori eradication has been described repeatedly, but the long-term progression of such a decrease has been scarcely studied. We therefore studied the effect of H. pylori eradication on basal and stimulated gastrin and PG levels for 1 year. Initially, the usefulness of measuring these parameters for the noninvasive diagnosis of H. pylori eradication was validated. Furthermore, an assessment was made of the association between H. pylori reinfection and a re-increase in gastrin and PG values. Finally, an evaluation was made of the variables influencing gastrin and PG concentration, with particular attention to H. 
pylori infection and histological lesions of gastric mucosa. Methods: Two-hundred and twenty-two patients with duodenal ulcer were studied prospectively. Exclusion criteria were the administration of antibiotics, H2 antagonists, omeprazole or bismuth prior to endoscopy. In all patients serum basal levels of gastrin, PGI, and PGII were measured before and 1 month after completing eradication therapy. In the successfully eradicated patients, gastrin, PGI, and PGII were also measured at 6 and 12 months. In 80 patients stimulated measurements of gastrin (after ingestion of two beef cubes) and PGI (after injection of pentagastrin) were also performed. H. pylori-negative patients after therapy underwent a urea breath test at 6 and 12 months, and patients who had stimulated gastrin and PG concentration measured had also an endoscopy performed at 6 months. Results: H. pylori was eradicated in 73% of patients. A histological improvement was observed 1 month after completing H. pylori eradication therapy, both at gastric antrum and body (P < 0.001), while a further improvement at antrum was demonstrated at 6 months (P < 0.01). With regard to the different cut-off points for decreased basal and stimulated measurements for diagnosing H. pylori eradication, the best results were obtained, respectively, with PGII (sensitivity of 90% and specificity of 76%) and PGI 30 min after stimulation (sensitivity and specificity of 82%), with an area under the ROC curve of 0.87 in both cases. In the multiple regressions analysis H. pylori status correlated with gastrin, PGI and PGII after therapy (P < 0.001), while histological lesions correlated only with gastrin levels (P < 0.05). A decrease in basal and stimulated serum parameters was demonstrated immediately after eradication (Wilcoxon test, P < 0.001), and an additional decrease (at 6 months) was observed just in PGI (Friedman test, P < 0.01). However, gastrin and PGII values remained unchanged after the first month post-eradication. Seven patients were reinfected with H. pylori during follow-up. Quantitation of basal and stimulated gastrin and PGI levels was not reliable as a reinfection marker. Regarding basal PGII, the parallelism was strong at 6 months (re-increase in all four reinfected patients), although only in one out of three with reinfection at 1 year did PGII rise at that stage. Conclusions: (1) Measurement of gastrin and PG levels (especially basal PGII values) is a useful non-invasive method to confirm H. pylori eradication after therapy. (2) H. pylori eradication is associated with a significant decrease in basal and stimulated gastrin levels and in basal PGII levels that is detected immediately (1 month) after finishing treatment, and remains unchanged for 1 year. However, the decrease in basal and stimulated PGI levels occurs progressively for 6 months, although such levels remain also unchanged afterwards. (3) Measurement of gastrin and PGI concentrations has a limited usefulness in the diagnosis of H. pylori reinfections after successful eradication, although PGII determination could be more useful in this situation. Answer: Yes, the measurement of basal levels of serum pepsinogen II is useful in proving the eradication of Helicobacter pylori by treatment. Several studies have demonstrated that the eradication of H. pylori is associated with a significant decrease in the basal concentrations of serum pepsinogen II. 
For instance, in patients with duodenal ulcer, the basal pepsinogen II value decreased significantly from 9.2 ± 2.7 ng/ml to 6.4 ± 1.7 ng/ml after treatment, indicating successful eradication of H. pylori (PUBMED:7500674). Similarly, in patients with gastric ulcer, basal pepsinogen II levels decreased from 15.8 ± 10 ng/ml to 7.3 ± 4 ng/ml one month after treatment, with a further decrease observed at month 6 (PUBMED:11247290). Determination of this decrease in basal pepsinogen II levels was found to be a useful and early non-invasive method for confirming eradication (PUBMED:11247290). Moreover, a study on patients with duodenal ulcer showed that one year after successful eradication, serum levels of pepsinogen II decreased, along with improvements in gastric histopathology (PUBMED:17559360). However, it is important to note that a decrease in pepsinogen II levels alone may not be a reliable marker for successful eradication when measured only 4 weeks after completion of treatment, as both successful and unsuccessful eradication groups showed a significant decrease (PUBMED:25060199). Nevertheless, the pepsinogen I/II ratio increased significantly in the successful eradication group, suggesting that this ratio might reflect differences in pepsinogen production following eradication irrespective of improvement in atrophy (PUBMED:25060199). In summary, the measurement of basal levels of serum pepsinogen II, often in conjunction with the pepsinogen I/II ratio, is a useful indicator for confirming the successful eradication of H. pylori following treatment.
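As a concrete illustration of the decision rule reported in PUBMED:28854276 above, the following is a minimal sketch of the tiered percentage-change criterion for the PG I/II ratio (+40%, +25%, or +10% depending on the pre-treatment ratio) and of how sensitivity and specificity against a urea breath test reference would be tallied. The thresholds come from that abstract; the patient values are invented for illustration, and the handling of the boundary at exactly 3.0 is an assumption, not study data.

```python
def pg_ratio_cutoff(pre_ratio: float) -> float:
    """Required % increase in the PG I/II ratio to call eradication successful."""
    if pre_ratio < 3.0:
        return 40.0   # +40% when the pre-treatment ratio is below 3.0
    elif pre_ratio < 5.0:
        return 25.0   # +25% when the ratio is above 3.0 but below 5.0
    return 10.0       # +10% when the ratio is 5.0 or above

def predict_eradication(pre_ratio: float, post_ratio: float) -> bool:
    pct_change = 100.0 * (post_ratio - pre_ratio) / pre_ratio
    return pct_change >= pg_ratio_cutoff(pre_ratio)

# Hypothetical patients: (pre PG I/II, post PG I/II, UBT-confirmed eradication).
patients = [(2.1, 3.6, True), (3.4, 4.5, True), (5.2, 5.3, False), (2.8, 2.9, False)]

tp = sum(predict_eradication(pre, post) and truth for pre, post, truth in patients)
fn = sum(not predict_eradication(pre, post) and truth for pre, post, truth in patients)
tn = sum(not predict_eradication(pre, post) and not truth for pre, post, truth in patients)
fp = sum(predict_eradication(pre, post) and not truth for pre, post, truth in patients)
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```

On the toy data the rule classifies all four hypothetical patients correctly; on the study's real data the reported figures were 93.1% sensitivity and 93.8% specificity.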
Instruction: Does intraoperative navigation assistance improve bone tumor resection and allograft reconstruction results? Abstracts: abstract_id: PUBMED:24711134 Does intraoperative navigation assistance improve bone tumor resection and allograft reconstruction results? Background: Bone tumor resections for limb salvage have become standard treatment. Recently, computer-assisted navigation has been introduced to improve the accuracy of joint arthroplasty and possible tumor resection surgery; however, as with any new technology, its benefits and limitations need to be characterized for surgeons to make informed decisions about whether to use it. Questions/purposes: We wanted to (1) assess the technical problems associated with computer-assisted navigation; (2) assess the accuracy of the registration technique; (3) define the time required to perform a navigated resection in orthopedic oncology; and (4) determine the frequency of complications such as local recurrence, infection, nonunion, fracture, and articular collapse after tumor resection and bone reconstruction with allografts using intraoperative navigation assistance. Methods: We analyzed 69 consecutive patients with bone tumors of the extremities that were reconstructed with massive bone allografts using intraoperative navigation assistance with a minimum followup of 12 months (mean, 29 months; range, 12-43 months). All patients had their tumors reconstructed in three-dimensional format in a virtual platform and planning was performed to determine the osteotomy position according to oncology margins in a CT-MRI image fusion. Tumor resections and allograft reconstructions were performed using a computer navigation system according to the previously planned cuts. We analyzed intraoperative data such as technical problems related to the navigation procedure, registration technique error, length of time for the navigation procedure, and postoperative complications such as local recurrence, infection, nonunion, fracture, and articular collapse. Results: In three patients (4%), the navigation was not carried out as a result of technical problems. Of the 66 cases in which navigation was performed, the mean registration error was 0.65 mm (range, 0.3-1.2 mm). The mean required time for navigation procedures, including bone resection and allograft reconstruction during surgery, was 35 minutes (range, 18-65 minutes). Complications that required a second surgical procedure were recorded for nine patients including one local recurrence, one infection, two fractures, one articular collapse, and four nonunions. In two of these nine patients, the allograft needed to be removed. At latest followup, three patients died of their original disease. Conclusions: The navigation procedure could not be performed for technical reasons in 4% of the series. The mean registration error was 0.65 mm in this series and the navigation procedure itself adds a mean of 35 minutes during surgery. The complication rate for this series was 14%. We found a nonunion rate of 6% in allograft reconstructions when we used a navigation system for the cuts. Level Of Evidence: Level IV, case series. abstract_id: PUBMED:28928818 The value of computer-assisted navigation for bone reconstruction after tumor resection. This study was designed to evaluate the use of computer-assisted navigation with computed tomography (CT) images for bone reconstruction after resection in malignant bone tumor treatment.
Forty-five patients with malignant bone tumors were recruited for this study. CT scan images in a computer-assisted navigation system were used to assist during the osteotomy, the pairing with allografts, and the monitoring of the allograft and joint lines to perform joint reconstruction. Our results show that osteotomy and allograft pairing were successful in all patients. The average duration of the osteotomy procedures was 46.8±12.3 min, and the average pairing time was 32.5±9.8 min. The anatomical registration points and the three-dimensional virtual CT images were successfully matched. The average error of registration was 0.36±0.09 mm. Also, the range of tumor resection and allograft osteotomy were successfully paired, with an average error of 0.11±0.03 mm. No complications such as unequal limb length or joint deformities occurred after reconstruction. The average follow-up time was 11.6±3.9 months. The tumor recurrence rate was 11.1% (5/45) and the survival rate 95.6% (43/45). The average healing time for the allograft and host bone was 5.5±1.2 months and no unexpected internal fixations, fractures or joint collapses occurred. The average knee joint functionality MSTS score was 25.5±6.6 points. No significant differences were found in the length of tumor resection, rate of negative incision margin, duration of osteotomy or of pairing, registration error or allogeneic bone and defect matching error averages between those patients with tumor recurrence and those without it (p>0.05). Based on our results, the computer-assisted navigation system for bone reconstruction after malignant tumor resection allows for high precision during osteotomy, delivers a high success rate of pairing, results in great limb function and low complication rates, and is thus a highly successful and safe approach benefiting bone cancer patients. abstract_id: PUBMED:28878542 Ewings sarcoma of ilium: Resection and reconstruction with femoral head allograft. Ewing's sarcoma is a common malignant bone tumor seen in the 5-15 years age group. It often arises from the diaphysis of long bones. Ewing's sarcoma arising from the ilium is very rare, and it has an unfavourable prognosis. We present a rare case report of Ewing's sarcoma of the ilium with no metastasis in a two-and-a-half-year-old boy, who was treated with neoadjuvant chemotherapy followed by surgical excision of the tumor and reconstruction using an allograft from the femoral head fixed with multiple K-wires and a screw. The patient is disease free at one-year follow-up, and the allograft has taken the shape of the growing ilium, with excellent functionality and gait with minimal limp. Through this report, we emphasize the occurrence of Ewing's sarcoma at an unusual site and the resection and reconstruction of the tumor utilizing an allograft. abstract_id: PUBMED:26148694 Patient-specific instrument can achieve same accuracy with less resection time than navigation assistance in periacetabular pelvic tumor surgery: a cadaveric study. Purpose: Inaccurate resection in pelvic tumors can result in compromised margins with increased local recurrence. Navigation-assisted and patient-specific instrument (PSI) techniques have recently been reported in assisting pelvic tumor surgery with the tendency of improving surgical accuracy. We examined and compared the accuracy of transferring a virtual pelvic resection plan to actual surgery using navigation-assisted or PSI technique in a cadaver study. Methods: We performed CT scans in twelve cadaveric bodies including whole pelvic bones.
Either supraacetabular or partial acetabular resection was virtually planned in a hemipelvis using engineering software. The virtual resection plan was transferred to a CT-based navigation system or was used for design and fabrication of PSI. Pelvic resections were performed using navigation assistance in six cadavers and PSI in another six. Post-resection images were co-registered with preoperative planning for comparative analysis of resection accuracy in the two techniques. Results: The mean average deviation error from the planned resection was no different ([Formula: see text]) for the navigation and the PSI groups: 1.9 versus 1.4 mm, respectively. The mean time required for the bone resection was greater ([Formula: see text]) for the navigation group than for the PSI group: 16.2 versus 1.1 min, respectively. Conclusions: In simulated periacetabular pelvic tumor resections, PSI technique enabled surgeons to reproduce the virtual surgical plan with similar accuracy but with less bone resection time when compared with navigation assistance. Further studies are required to investigate the clinical benefits of PSI technique in pelvic tumor surgery. abstract_id: PUBMED:23827882 Navigation-guided en bloc resection and defect reconstruction of craniomaxillary bony tumours. The aim of this study was to evaluate the validity of navigation-guided en bloc tumour resection and defect reconstruction in the treatment of craniomaxillary bony tumours. Three patients with ossifying fibroma and two patients with fibrous dysplasia were enrolled in this study. After preoperative planning and three-dimensional simulation, the osteotomy lines for resection were delineated and the normal anatomic structures for defect reconstruction were ascertained. With the guidance of an Accu-Navi navigation system, an en bloc tumour resection and simultaneous defect rehabilitation were performed. The system provided continuously updated information on the position and movement of surgical instruments in the operating field in relation to the preoperative imaging data set. The system error measured by the computer did not exceed 1 mm. The osteotomy lines and reconstruction contour were checked by postoperative computed tomography, and good matching with the preoperative planning was achieved. Patients showed no signs of tumour recurrence or prosthesis infection during follow-up (range 12-35 months). Image-guided navigation makes radical bone tumour resection more reliable by implementing preoperative planning, showing the determined safety margins, preserving vital structures and guiding reconstruction. abstract_id: PUBMED:36931178 Pelvic ring reconstruction with tibial allograft, screws and rods following Enneking Type I and IV resection of primary bone tumors. Introduction: Surgery of primary malignant tumors involving the sacroiliac joint requires wide resection, which often interrupts the pelvic ring. Nowadays, restoration of the pelvic ring to provide stability, and which technique is most fitting, remain subject to debate. The aim of this study is to evaluate the results of pelvic reconstruction with pedicle screw instrumentation and tibial allograft following Enneking Type I-IV resections. Patients And Methods: All patients who underwent reconstruction with tibial allograft, screws and rods after resection of areas I and IV for primary bone tumors between 2017 and 2022 were reviewed. Clinical and radiological characteristics, fusion rate and functional results were analyzed.
The MSTS score and the TESS were used to evaluate functional results. Results: Seven patients were included in the study. Chondrosarcoma was the most common histology. Only four patients reported pain. No fractures were observed at tumor diagnosis. Computer-assisted navigation was used in six cases. Reconstruction was performed in four cases with a screw inserted in the homolateral L5 pedicle and in the ischium, in two cases with a screw in the homolateral L4 pedicle and another in the homolateral L5 pedicle, in the last case with two screws inserted in L4 and L5, one screw in the ischium and another one in the residual iliac wing. In this case a contralateral stabilization was also carried out. The spine screws and the iliac screws were connected with a rod. The mean follow-up for all 7 patients was 37 months. One patient (16.6%) died due to general complications not directly related to the surgery; while the others are alive and apparently free of disease. Complete fusion was obtained in four out of seven patients and the average time for fusion was 9 months. The average MSTS score and TESS were 58.7% and 57.8%, respectively. Discussion: The need for reconstruction is thoroughly debated in literature. The advantages of restoring posterior pelvis stability are the prevention of long-term pain associated with limb shortening and secondary scoliosis. Re-establishment of the pelvic ring can be achieved through synthetic, biologic or hybrid reconstructions. Conclusions: More studies that assess the surgical consequences at long-term follow-up and help clarify the indications for reconstruction and the specific technique are necessary to confirm our preliminary results. abstract_id: PUBMED:34161677 Resection arthrodesis and osteoarticular allografts reconstruction after resection of primary bone tumors around the knee: Long-term clinical outcomes and prognostic factors for failure of biological reconstruction. Aim: The biological reconstruction of the knee with osteoarticular allografts and resection arthrodesis have been reported but there has not yet been a direct comparison between both these procedures. This study aimed to identify the prognostic factors that influence failure of biological reconstruction and compared the results between both procedures. Methods: Between 1994 and 2017, we performed 92 limb-sparing procedures using resection arthrodesis (n = 53) and osteoarticular allograft reconstruction (n = 39) for the management of primary bone tumors around the knee. The minimum follow-up time was 2 years in both groups. Results: The failure rate of reconstruction in the osteoarticular allograft and resection arthrodesis group was 48.7% and 39.6%, respectively (p = 0.75). The mean MSTS score in the osteoarticular allograft and resection arthrodesis group was 23.7 and 21.8, respectively (p = 0.01). The significant risk factor for failure after biological reconstruction was the administration of chemotherapy (p = 0.001; HR = 3.39; 95% CI, 1.60-7.17). Conclusion: Patients who underwent osteoarticular allograft had a better functional outcome than those who underwent resection arthrodesis reconstruction, but clinical outcomes between the groups were comparable. Chemotherapy is a significant adverse prognostic factor for failure of biological reconstruction. abstract_id: PUBMED:24144806 Three-dimensional surgical simulation-guided navigation in thoracic surgery: a new approach to improve results in chest wall resection and reconstruction for malignant diseases. 
Objectives: Oncological surgery of the chest wall should be performed to achieve free margins of at least 2 cm for metastasis or 4 cm for primary tumours. When the lesion is not visible or palpable, difficulty in identification may lead to a larger incision and a resection wider than is necessary. Methods: We report three cases of non-palpable metastatic chest wall lesions in which the preoperative surgical planning and the intraoperative identification of the tumour, and thus the subsequent chest wall reconstruction, was supported using computer-based surgery. Results: The application of high-resolution three-dimensional imaging technology and navigational systems is used in preoperative surgical planning to provide virtual simulations of a patient's skeletal changes and new soft tissue profile. Intraoperatively, a mobile navigation probe was used to identify the lesion, matching surgical landmarks and the preoperative computed tomography imaging, achieving the radical resection of the tumour with correct but not excessive surgical margins. Two patients underwent partial sternectomy followed by sternal allograft reconstruction. The third patient underwent chest wall resection followed by reconstruction using titanium bars and vicryl mesh. In all cases, the postoperative period was uneventful. After a follow-up period of 13.9 and 8 months, respectively, all patients are disease free, without complications. Conclusions: Application of navigation technology in thoracic surgery should be encouraged because it is easy to use and requires a limited learning curve. abstract_id: PUBMED:38375409 The application of extended reality technology-assisted intraoperative navigation in orthopedic surgery. Extended reality (XR) technology refers to any situation where real-world objects are enhanced with computer technology, including virtual reality, augmented reality, and mixed reality. Augmented reality and mixed reality technologies have been widely applied in orthopedic clinical practice, including in teaching, preoperative planning, intraoperative navigation, and surgical outcome evaluation. The primary goal of this narrative review is to summarize the effectiveness and superiority of XR-technology-assisted intraoperative navigation in the fields of trauma, joint, spine, and bone tumor surgery, as well as to discuss the current shortcomings in intraoperative navigation applications. We reviewed titles of more than 200 studies obtained from PubMed with the following search terms: extended reality, mixed reality, augmented reality, virtual reality, intraoperative navigation, and orthopedic surgery; of those 200 studies, 69 related papers were selected for abstract review. Finally, the full text of 55 studies was analyzed and reviewed. They were classified into four groups-trauma, joint, spine, and bone tumor surgery-according to their content. Most of studies that we reviewed showed that XR-technology-assisted intraoperative navigation can effectively improve the accuracy of implant placement, such as that of screws and prostheses, reduce postoperative complications caused by inaccurate implantation, facilitate the achievement of tumor-free surgical margins, shorten the surgical duration, reduce radiation exposure for patients and surgeons, minimize further damage caused by the need for visual exposure during surgery, and provide richer and more efficient intraoperative communication, thereby facilitating academic exchange, medical assistance, and the implementation of remote healthcare. 
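The abstracts above repeatedly report a "registration error" in millimetres (0.65 mm in PUBMED:24711134, 0.36 mm in PUBMED:28928818, under 1 mm in PUBMED:23827882) without describing the vendors' algorithms. A minimal sketch of how such a figure is conventionally obtained: rigidly align the planned fiducial coordinates to their measured intraoperative positions (the Kabsch algorithm) and report the RMS residual. The coordinates and noise level below are invented; this is a generic illustration, not any specific system's implementation.

```python
import numpy as np

def registration_rms_error(planned: np.ndarray, measured: np.ndarray) -> float:
    """RMS residual after the best rigid (rotation + translation) alignment."""
    p = planned - planned.mean(axis=0)            # remove translation
    m = measured - measured.mean(axis=0)
    u, _, vt = np.linalg.svd(p.T @ m)             # Kabsch: SVD of the covariance
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T       # optimal rotation matrix
    residual = m - p @ r.T
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))

planned = np.array([[0, 0, 0], [50, 0, 0], [0, 60, 0], [0, 0, 40]], dtype=float)
noise = np.random.default_rng(0).normal(scale=0.4, size=planned.shape)
print(f"registration error = {registration_rms_error(planned, planned + noise):.2f} mm")
```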
abstract_id: PUBMED:30519732 Microsurgical reconstruction with vascularized fibula and massive bone allograft for bone tumors. Introduction: Combining massive bone allograft and vascularized fibula in intercalary reconstruction following resection of bone tumors represents a complex reconstructive procedure that requires specialists in microvascular surgery as well as orthopedic surgery. The purpose of our study was to examine the outcomes using this surgical technique in patients with bone tumors in terms of oncologic results, complications related to surgery, Musculoskeletal Tumor Society (MSTS) scores and duration of surgery. Materials And Methods: We analyzed 81 patients with femoral or tibial sarcomas who underwent intercalary resection and microsurgical reconstruction with massive bone allograft and vascularized fibula. There were 56 boys and 25 girls with a mean age of 13.4 years at the time of surgery. The patients' medical records were reviewed for clinical and functional outcomes as well as postoperative complications. The study group was comprised of 33 patients who underwent reconstruction of the femur with massive bone allograft and free vascularized fibula and 48 patients who underwent reconstruction of the tibia with massive bone allograft and free or pedicle vascularized fibula. The mean length of resection was 15.9 cm (8-31 cm). The functional evaluation of the patients was done at the end of the follow-up using MSTS score for the lower limb. All patients had at least a 2-year follow-up. Results: The overall limb salvage rate was 94%, although many patients required re-operation after the procedure. Complications occurred in 24 patients, 18 of which underwent additional surgical procedures. They included fractures of the massive bone allograft-vascularized fibula construct with or without implant failure (19) and deep infection (5). After surgical or conservative treatment, all the fractures successfully healed. The overall MSTS functional score was good to excellent in 91% of patients. Conclusions: The combination of massive bone allograft and vascularized fibula seems to be a reasonable option for reconstruction of diaphyseal defects following intercalary resection of bone tumors. Although there was a high rate of complications and therefore re-operations, the biology of vascularized fibula was able to save the reconstruction in most of the cases that had complications. Answer: Intraoperative navigation assistance appears to improve the precision and outcomes of bone tumor resection and allograft reconstruction. Studies have shown that computer-assisted navigation can enhance the accuracy of osteotomies and the pairing with allografts, resulting in successful reconstructions with high precision and low complication rates (PUBMED:28928818). The use of navigation systems has been associated with a mean registration error as low as 0.65 mm, indicating a high level of accuracy in the resection of tumors (PUBMED:24711134). Moreover, the application of extended reality (XR) technology-assisted intraoperative navigation in orthopedic surgery, including bone tumor surgery, has been reported to improve the accuracy of implant placement, reduce postoperative complications, shorten surgical duration, and minimize radiation exposure (PUBMED:38375409). 
Additionally, navigation-guided en bloc resection and defect reconstruction have been validated in the treatment of craniomaxillary bony tumors, showing that image-guided navigation can make radical bone tumor resection more reliable (PUBMED:23827882). In the context of pelvic tumors, computer-assisted navigation has been used to achieve accurate resections and reconstructions, with some studies reporting the use of tibial allografts, screws, and rods following Enneking Type I-IV resections of primary bone tumors, resulting in satisfactory functional outcomes and fusion rates (PUBMED:36931178). Furthermore, patient-specific instruments (PSIs) have been shown to achieve similar accuracy to navigation assistance with less resection time in periacetabular pelvic tumor surgery (PUBMED:26148694). However, it is important to note that while navigation assistance improves accuracy and outcomes, it is not without challenges. Technical problems can occur, as seen in a study where navigation could not be performed in 4% of cases due to technical issues (PUBMED:24711134). Additionally, the use of navigation systems can add time to the surgery, with an average of 35 minutes reported in one study (PUBMED:24711134). In summary, intraoperative navigation assistance does improve the results of bone tumor resection and allograft reconstruction by enhancing accuracy, reducing complications, and potentially improving functional outcomes. However, the technology requires careful implementation and consideration of the additional time and potential technical challenges.
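On the comparative claims in this answer: the cadaver study (PUBMED:26148694) contrasts mean deviation error (1.9 vs. 1.4 mm) and resection time (16.2 vs. 1.1 min) between six navigated and six PSI resections, with the p-values redacted as "[Formula: see text]" in the abstract. The sketch below shows the kind of two-sample comparison involved, using Welch's t-test on invented measurements; the abstract does not state which test the authors actually used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
nav_error = rng.normal(1.9, 0.6, size=6)   # hypothetical deviation errors, mm
psi_error = rng.normal(1.4, 0.5, size=6)
nav_time = rng.normal(16.2, 3.0, size=6)   # hypothetical resection times, min
psi_time = rng.normal(1.1, 0.3, size=6)

for label, a, b in [("deviation error", nav_error, psi_error),
                    ("resection time", nav_time, psi_time)]:
    t, p = stats.ttest_ind(a, b, equal_var=False)   # Welch's unequal-variance t-test
    print(f"{label}: t = {t:.2f}, p = {p:.4f}")
```

With samples this small, a nonparametric alternative such as the Mann-Whitney U test would be an equally defensible choice.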
Instruction: Robotic neurological surgery applications: accuracy and consistency or pure fantasy? Abstracts: abstract_id: PUBMED:19223694 Robotic neurological surgery applications: accuracy and consistency or pure fantasy? Introduction: Minimally invasive surgery was born out of recent advances in neuroimaging and stereotaxy, and the scale of future neurosurgical procedures will soon be so small that it will not be possible for the unassisted surgeons. Hence, neurosurgical robotics is a natural evolution in this field. The aim of this study was to evaluate the performance of a new robotic system in a wide range of neurosurgical applications. Methods: Patients undergoing image-guided surgical procedures were recruited to participate in this prospective ethically approved study from 2005. The PathFinder (Prosurgics, UK) is a neurosurgical robotic system with 6 degrees of freedom. It uses a fiducial system that is automatically detectable by the planning software and a camera system embedded in the robot's head. The registration procedure was performed automatically by photographing the fiducials from different angles. The robot then aligns its end-effector and tool holder along the specified path for the surgeon to pass the probe to the target. We recruited 37 consecutive patients to test the application accuracy and consistency of the system using three different fiducial fixation mechanisms: a double adhesive fixed to the skin, an ECG lead dot fixed to the skin, and a registration plate fixed to the skull. Results: Out of 37 consecutive patients, 17 were males and 20 were females, with a mean age of 46.6 years. The procedures were: transsphenoidal in 8, malignant tumour biopsies in 3 and resections in 5, benign tumour excisions in 6 and functional procedures in 15 [6 bilateral deep-brain stimulations (DBSs) of the subthalamic nucleus for Parkinson's disease, 3 bilateral anterior cingulotomies for depression, 3 bilateral DBSs of the ventral intermediate nucleus of the thalamus for tremor and 3 depth electrodes during epilepsy surgery]. We tested a total of 140 targets with an average of 3-4 targets per patient. The mean application accuracy was less than 1 mm and the application accuracy was consistent in all targets in the same patient. Conclusions: This robotic system was very accurate and consistent in practice as long as the robot had achieved acceptable registration. abstract_id: PUBMED:34263301 Robotic thymectomy for thymoma in patients with myasthenia gravis: neurological and oncological outcomes. Objectives: The goal of this study was to analyse the outcomes in 53 patients with thymoma, 34 of whom had myasthenia gravis (MG), who were treated with robotic surgery. The oncological outcomes of the whole series of patients were analysed. Furthermore, because consistent data are not yet available in the literature, the main focus was the analysis of the neurological results of the patients affected by MG and thymoma. Methods: The clinical outcomes of 53 patients with a diagnosis of thymoma who underwent robotic thymectomy between January 2014 and December 2019 in our institution were collected and evaluated; 34 of these patients had a concomitant diagnosis of MG. The neurological status of the patients was determined from a clinical evaluation according to the Osserman classification and on pre- and post-surgery Myasthenia Gravis Composite scores, whereas neurological clinical outcomes were assessed using the Myasthenia Gravis Foundation of America Post-Intervention Score. 
Reduction of steroid therapy was also considered. The recurrence rate, adjuvant radiotherapy and overall survival of the patients with a thymoma were evaluated. Results: Neurological outcomes: improvement of the clinical conditions was obtained in 26 patients (76.5%) following the operation: complete stable remission was observed in 5 patients (14.7%), pharmacological remission in 10 (29.4%) and minimal manifestation in 11 (32.3%). Four patients (11.8%) exhibited no substantial change from the pretreatment clinical manifestations or reduction in MG medication and 4 (11.8%) patients experienced worsening of clinical conditions. In 21 patients (61.7%) a reduction of the dosage of steroid therapy was obtained. Oncological outcomes: at an average follow-up of 36 months, the overall survival was 96%, 4 patients (7.5%) had pleural relapses and 12 patients (22.6%) underwent postoperative radiotherapy, according to their stage. In accordance with Masaoka staging, 34% were in stage I, 56.6% in stage II and 9.4% in stage III. Conclusions: Our results suggest that robotic surgical treatment of patients with thymoma and concomitant MG is effective in improving the neurological outcomes. Moreover, the oncological results obtained in this series confirm the efficacy of robotic surgery for the treatment of thymic malignancies, with results in line with those of open surgery. However, due to the indolent growth of thymomas, further observations with longer follow-up are necessary. abstract_id: PUBMED:29857219 Pure Robotic Surgery for Odontoid Tumor: First Case. Background: Transoral robotic surgery has been used successfully to assist many surgical procedures. Here, we report its first use as pure robotic surgery, applied to excise an odontoid metastatic mass. Case Description: A 48-year-old woman was hospitalized due to a lesion consistent with metastasis that extended to the pharyngeal constrictor muscles and that had completely destroyed the odontoid bone. Occipitocervical fixation was performed initially, and transoral surgery took place 5 days later. The da Vinci Surgical System Robot was used to resect the tumor. The patient was discharged with a very good general condition. Conclusions: This article presents the first pure robotic surgery for odontoid. The increasing number of robot-assisted cases will demonstrate the necessity of this evolution in neurosurgery and should accelerate the process. abstract_id: PUBMED:37675313 Uniportal pure robotic-assisted thoracic surgery-technical aspects, tips and tricks. The uniportal access for robotic thoracic surgery presents itself as a natural evolution of minimally invasive thoracic surgery (MITS). It was developed by surgeons who pioneered the uniportal video-assisted thoracic surgery (U-VATS) in all its aspects following the same principles of a single incision by using robotic technology. The robotic surgery was initially started as a hybrid procedure with the use of thoracoscopic staplers by the assistant. However, due to the evolution of modern robotic platforms, the staplers can now be controlled by the main surgeon from the console. The pure uniportal robotic-assisted thoracic surgery (U-RATS) is defined as the robotic thoracic surgery performed through a single intercostal (ic) incision, without rib spreading, using the robotic camera, robotic dissecting instruments and robotic staplers. The advantages, difficulties, general aspects, and specific considerations of U-RATS are presented.
For safety reasons, the authors recommend the transition from multiportal-RATS through biportal-RATS to U-RATS. The use of robotic dissection and staplers through a single incision and the rapid undocking with easy emergent conversion when needed (either to U-VATS or to thoracotomy) are safety advantages over multi-port RATS that cannot be overlooked, offering great comfort to the surgeon and quick and smooth recovery to the patient. abstract_id: PUBMED:38046149 The role of robotic surgery in neurological cases: A systematic review on brain and spine applications. The application of robotic surgery technologies in neurological surgeries resulted in some advantages compared to traditional surgeries, including higher accuracy and dexterity enhancement. Its success in various surgical fields, especially in urology, cardiology, and gynecology surgeries, was reported in previous studies, and similar advantages in neurological surgeries are expected. Surgeries in the central nervous system, where pathology on the scale of millimeters is approached through small working channels surrounded by vital tissue, need especially high precision. Whether to apply robotic surgery is therefore an interesting dilemma in these situations. This article reviews various studies published on the application of brain and spine robotic surgery and discusses the current application of robotic technology in neurological cases. abstract_id: PUBMED:36532090 Early Experience of Pure Robotic Right Hepatectomy for Liver Donors in a Small-Volume Center. Background And Objectives: Living donor right hepatectomy has become the most common method of liver transplantation. With minimally invasive surgery, laparoscopic donor hepatectomy became possible, but with some limitations. Advancements in robotic technology made it possible to overcome these shortcomings and maximize the advantages of minimally invasive surgery in transplantation. For this reason, some centers have started robotic donor hepatectomy. Our study aimed to introduce our early experience of robotic donor right hepatectomy and investigate the feasibility of this surgery. Methods: This study included 10 (30%) living donors who underwent pure robotic donor right hepatectomy at Dong-A University Hospital from January 1, 2020 to December 31, 2021. The medical records were analyzed to determine the short-term outcomes of these patients.
We believe that combining a laparoscopic exposure and hilar dissection followed by tumor extirpation and renorrhaphy with robotic assistance provides improved perioperative outcomes compared to a pure robotic approach alone. We performed a comparison of perioperative outcomes between combined laparoscopic-robotic partial nephrectomy-or hybrid procedure-and pure robotic partial nephrectomy (RPN). A multi-center retrospective analysis of patients undergoing RPN and hybrid PN using the da Vinci S system(®) was performed. Patient data were reviewed for demographic and perioperative variables. Statistical analysis was performed using the Welch t test and linear regression, and nonparametric tests with similar significance results. Thirty-one patients underwent RPN while 77 patients underwent hybrid PN between 2007 and 2011. Preoperative variables were comparable in both groups with the exception of lesion size and nephrometry score which were significantly higher in patients undergoing hybrid PN. Length of surgery, estimated blood loss and morphine used were significantly less in the hybrid group, while warm ischemia time was significantly longer. The difference in WIT was accounted for in this data by adjusting for nephrometry score. In our multi-center series, the hybrid approach was associated with a shorter operative time, reduced blood loss and lower narcotic usage. We believe this approach is a valid alternative to RPN. abstract_id: PUBMED:38254894 National Multicenter Study on the Comparison of Robotic and Open Thymectomy for Thymic Neoplasms in Myasthenic Patients: Surgical, Neurological and Oncological Outcomes. Thymectomy is the gold standard in the treatment of thymic neoplasm and plays a key role in the therapeutic path of myasthenia gravis. For years, sternotomy has been the traditional approach for removing anterior mediastinal lesions, although the robotic thymectomy is now widely performed. The literature is still lacking in papers comparing the two approaches and evaluating long-term oncological and neurological outcomes. This study aims to analyze the postoperative results of open and robotic thymectomy for thymic neoplasms in myasthenic patients. Surgical, oncological and neurological data of myasthenic patients affected by thymic neoplasms and surgically treated with extended thymectomy, both with the open and the robotic approach, in six Italian Thoracic Centers between 2011 and 2021 were evaluated. A total of 213 patients were enrolled in the study: 110 (51.6%) were treated with the open approach, and 103 (48.4%) were treated with robotic surgery. The open surgery, compared with the robotic, presented a shorter operating time (p < 0.001), a higher number of postoperative complications (p = 0.038) and longer postoperative hospitalization (p = 0.006). No other differences were observed in terms of surgical, oncological or neurological outcomes. The robotic approach can be considered safe and feasible, comparable to the open technique, in terms of surgical, oncological and neurological outcomes. abstract_id: PUBMED:25234092 Robotic thymectomy in patients with myasthenia gravis: neurological and surgical outcomes. Objectives: Thymectomy is frequently used in the treatment of myasthenia gravis (MG). But indication, timing or surgical approach remain controversial. This study reports our experiences with robotic thymectomy and surgical and neurological outcomes in a large cohort of patients with MG. 
Methods: We retrospectively analysed the outcome of 125 patients with MG who underwent a robotic thymectomy using the da Vinci Surgical System (Intuitive Surgical, Inc., Sunnyvale, CA, USA) between 2004 and 2012. The Myasthenia Gravis Foundation of America (MGFA) Classification was used to determine preoperative and postintervention status. Results: Ninety-five women and 30 men underwent a robotic thymectomy. One hundred patients had a neurological follow-up of more than 12 months. Preoperative most severe MGFA classification was Stage I in 11 patients (8.8%), Stage IIA in 18 patients (14.4%), Stage IIB in 18 patients (14.4%), Stage IIIA in 7 patients (5.6%), Stage IIIB in 29 patients (23.2%), Stage IVA in 10 patients (8.0%), Stage IVB in 29 patients (23.2%) and Stage V in 3 patients (2.4%). Median surgical procedure time was 123 min (range 45-353 min). There were no major perioperative complications or deaths. The median postoperative hospital stay was 3 days (range 2-24 days). Histological analysis showed thymic remnant tissue in 41 patients (32.8%), follicular hyperplasia in 52 patients (41.6%), thymoma in 31 patients (24%), lipoma in 1 patient (0.8%) and a cyst in 1 patient (0.8%). Patients with thymic remnant tissue were significantly more preoperative steroid users compared with the follicular hyperplasia group (P = 0.02). With a median follow-up of 33 months (range 12-104 months), 77% of the patients showed neurological improvement. Three-year probability remission rate [complete stable remission (CSR) and pharmacological remission] is 28.2%. Patients who were not treated with prednisolone preoperatively showed a significant higher CSR than patients who did take prednisolone (P = 0.014). No significant difference was observed regarding timing of surgery (P = 0.37). Conclusions: Robotic thymectomy in patients with MG is safe and feasible. A neurological benefit and decreased use of steroids can be obtained in the majority of patients. No significant difference in neurological outcome was observed as the result of timing of robot thymectomy after the onset of MG. abstract_id: PUBMED:7429734 The psychodynamics of a beating fantasy. A case history of a woman is presented whose primary conscious masturbatory fantasy from age three until age forty was of her father beating her. Late in her lengthy treatment, the conscious fantasy was seen to screen off the underlying, unconscious fantasy of being beaten by her mother. Early traumata, surgery, and primal scene exposures led to profound separation-individuation problems, heightened preoedipal and oedipal castration anxiety, and a faulty genital schematization. Compulsive masturbation, duplicating genital arousal initiated by the mother's early enema assaults and later by primal scene exposures, allowed neutralization of the intense aggression aroused by separation and castration traumata. Preoedipal fantasies screened off from conscious awareness by the oedipal ones were influenced by the primal scene exposures. These consisted of the ideas of being hurt (castrated) and beaten by the father. In the lengthy therapy with this patient an erotized transference developed in which, again, a libidinal gratification was used defensively to maintain the threatened object relationship. The interaction between beating fantasies and reality events is also discussed. Answer: Robotic neurological surgery applications are not pure fantasy; they are grounded in accuracy and consistency, as demonstrated by various studies and real-world applications. 
The study evaluating the performance of the PathFinder neurosurgical robotic system found that it was very accurate and consistent in practice, with a mean application accuracy of less than 1 mm across a range of neurosurgical procedures (PUBMED:19223694). Similarly, robotic thymectomy for thymoma in patients with myasthenia gravis showed effective neurological outcomes, with 76.5% of patients experiencing improvement in clinical conditions post-operation (PUBMED:34263301). The first case of pure robotic surgery for an odontoid tumor also supports the feasibility and potential benefits of robotic assistance in complex neurosurgical procedures (PUBMED:29857219). Moreover, the systematic review on the role of robotic surgery in neurological cases highlighted the advantages of robotic technologies in enhancing accuracy and dexterity, which are crucial for surgeries in the central nervous system (PUBMED:38046149). The early experience of pure robotic right hepatectomy for liver donors in a small-volume center further supports the feasibility and positive outcomes associated with robotic surgery (PUBMED:36532090). While a comparison of pure robotic partial nephrectomy to a combined laparoscopic-robotic approach found that the hybrid approach had certain perioperative benefits, it does not negate the validity of a pure robotic approach (PUBMED:27637235). Additionally, a national multicenter study comparing robotic and open thymectomy for thymic neoplasms in myasthenic patients found that the robotic approach was comparable to the open technique in terms of surgical, oncological, and neurological outcomes (PUBMED:38254894). Another study on robotic thymectomy in patients with myasthenia gravis reported that the majority of patients showed neurological improvement after the procedure (PUBMED:25234092). In conclusion, the evidence suggests that robotic neurological surgery is a reality with proven accuracy and consistency, offering significant benefits in various neurosurgical applications. It is not merely a fantasy but a technological advancement that is reshaping the field of neurosurgery.
Instruction: Do medical students want to learn about global health? Abstracts: abstract_id: PUBMED:24848658 Do medical students want to learn about global health? Background: One objective of the United Nations Global Strategy for Women's and Children's Health relates to ensuring a sufficiently skilled workforce. To prepare future healthcare professionals for their role in the 21st century as members of this workforce, awareness of global health is essential, but few studies have explored student perspectives on such education. The main objectives of this study were to establish the views of medical students on learning about women's and children's health in low-income countries, to identify the nature and extent of learning already experienced, and to assess the demand for such learning. Design: A questionnaire survey was conducted at three meetings of the International Federation of Medical Students Associations (IFMSA). Questionnaires were distributed to 500 participants from 75 countries and 336 medical schools, and 492 usable questionnaires were returned. Data were analysed using SPSS Version 20 and statistical analysis was undertaken using Fisher's exact test. Results: There were 492 questionnaires included in the analysis. Forty-eight per cent of participants were from low-middle income countries and 52% were from high-income countries. Less than half (43%) of the respondents had received some teaching on women's and children's health in low-income countries. Teaching received was primarily (96%) through lectures in the second year of study. Ninety-one per cent of respondents thought such teaching would be important and stated that group work (66%) would be the preferred method. In total, only 14% thought they had received sufficient teaching on global health and on women's and children's health in low-income countries. Conclusions: This study has revealed a high demand among medical students for global health teaching, particularly on women's and children's health in low-income countries. The timing and methods of existing teaching on these topics does not match that desired by medical students. To help address this gap, a collaborative approach is proposed which includes students' views in the processes for revitalising medical curricula to meet the needs of the 21st century. abstract_id: PUBMED:26882901 Ensuring the Health, Safety and Preparedness of U.S. Medical Students Participating in Global Health Electives Overseas. Global health electives based in resource-poor countries have become extremely popular with medical students from resource rich ones. As the number of such programs and participants increase, so too do the absolute health and safety risks. It is clear from a number of published reports that many institutions provide little or no meaningful preparedness for students and do little to ensure their health and safety. These deficiencies together can affect students, their foreign hosts, and sponsoring institutions. The School of Public Health at the State University of New York, Downstate Medical Center, and its predecessor, the Department of Preventive Medicine and Community Health, have sponsored a 6-8 week global health elective for fourth year medical students since 1980. The purposes of this elective are to provide students with an opportunity to observe the health care and public health systems in resource-poor countries, provide medical service, and have a cross-cultural experience. 
Over the course of the past 35 years, 386 students have participated in this global health elective in more than 41 resource-poor countries. Recent annual applications for this elective have been as high as 44 out of a class of 200 students. Over the past 10 years, annual acceptance rates have varied, ranging from a low of 32 % in 2007-2008 to a high of 74 % in 2010-2011 and 2013-2014. Careful screening, including a written application, review of academic records and personal interviews, has resulted in the selection of highly mature, adaptable, and dedicated students who have performed well at overseas sites. Appropriately preparing students for an overseas global health experience in resource-poor countries requires the investment of much professional and staff time and effort. At the SUNY Downstate School of Public Health, these resources have underpinned our Global Health in Developing Countries elective for many years. As a result, the elective is characterized by meticulous organization, extensive preparedness measures for students, and continuous monitoring of site and country safety. The health of students is ensured by one-on-one assessment of immunization needs, anti-malarials, and the provision of a five-day supply of post-exposure HIV prophylaxis. Students sign agreements regarding the legal issues, immunizations, and anti-malarials recommended as well as HIV post-exposure prophylaxis. They are also required to obtain medical evacuation insurance provided by the university, and medical care insurance valid overseas. Student travel plans are also approved as is in-country lodging. The focus of our 6-8 week global health elective is not clinical medicine. Rather, it is to enable students to learn about the health care and public health systems in a resource-poor country. Through that focus, they also come to understand the causes of health and health care disparities that exist in the country to which they are assigned. Our students are greatly advantaged with regard to cross-cultural understanding since our school is located in New York City's Borough of Brooklyn, where 40 % of the population was born outside of the U.S. Our comprehensive effort at risk management for this global health elective includes a thorough debriefing for each student upon his/her return. Special attention is given to ascertaining illness or injury while overseas, and, when necessary, immediate referral is made to an appropriate university clinical department where a student can be appropriately case managed. Meticulous oversight, careful selection of safe overseas sites, and attention to preparing students have resulted in significant risk reduction and successful experiences for the majority of our 386 students. This article describes the model we have developed for ensuring the health, safety, and preparedness of students participating in our global health elective. abstract_id: PUBMED:37821889 Exploring the interest of medical students in global health in South Korea: Does taking a global health course matter? Background: Prioritizing global health is important for positive health outcomes. Doctors play a pivotal role in addressing health issues that they need to recognize the importance of global health practice. However, medical education in global health is still in its early stages in many countries, including South Korea. Methods: This study is a quantitative cross-sectional study. Medical students were recruited from all 40 Korean medical schools and data collection was conducted in 2021. 
Stratified proportional quota sampling was employed as the sampling method. The study examined medical students' interest in global health as the dependent variable. Independent variables included whether individuals had taken global health courses as part of their medical curriculum, while covariates included student background and institutional factors. Logistic regression was used to determine if taking a global health class was associated with global health interests. Results: The study included 2450 participants and almost 32% of medical students (n = 781) were interested in global health. Taking global health classes in school was associated with a higher likelihood of developing an interest in global health (OR: 1.29, 95% CI: 1.00-1.67). The likelihood of being interested in global health decreased across the academic year (OR: 0.70, 95% CI: 0.53-0.94). Individuals in graduate entry programs were associated with an interest in global health (OR: 1.32, 95% CI: 1.01-1.72). Conclusion: This study provides the first comprehensive nationwide assessment of medical students' perspectives on global health education in South Korea. The findings underscore the importance of early and sustained exposure to global health topics in medical education in fostering interest in global health. These results can serve as valuable evidence for medical educators seeking to integrate global health education into their curricula. abstract_id: PUBMED:32821807 Diarrhea in the Returning Traveler: A Simulation Case for Medical Students to Learn About Global Health. Introduction: As global travel becomes more prevalent, medical students may be asked to care for patients with unforeseen exposures. We developed a simulation where clerkship medical students interviewed and examined a patient with recent travel who presented with bloody diarrhea and abdominal pain and was diagnosed with amebic colitis. The students had the opportunity to develop a differential diagnosis and discuss the workup of the patient. Methods: We divided students into two groups. Each group took a turn participating in the simulation while the other group observed. Students were expected to interview and examine the patient as well as treat any urgent findings and develop a differential diagnosis. After each simulation, we reconvened with both groups for a faculty-led debriefing session to discuss the learning objectives, including approaches to caring for a patient with diarrhea and the differential diagnosis and workup of bloody diarrhea. Results: To date, five different groups of six to 12 students have completed this simulation. The module has been well received, and 100% of survey respondents have agreed that after completing the activity, they had a better understanding of how to approach a recent traveler with diarrhea and abdominal pain. Discussion: While most medical students will not travel abroad for traditional global health experiences, many will encounter patients with recent travel or immigration and must therefore be prepared to treat diseases typically categorized as global health. We developed this simulation and successfully incorporated workup of a returning traveler into the medical school curriculum for clerkship students. abstract_id: PUBMED:31146756 Medical students as global citizens: a qualitative study of medical students' views on global health teaching within the undergraduate medical curriculum. Background: There is increasing interest in global health teaching among medical schools and their students. 
Schools in the UK and internationally are considering the best structure, methods and content of global health courses. Academic work in this area, however, has tended to either be normative (specifying what global health teaching ought to look like) or descriptive (of a particular intervention, new module, elective, etc.). Methods: While a number of studies have explored student perspectives on global health teaching, these have often relied on tools such as questionnaires that generate little in-depth evidence. This study instead used qualitative methods to explore medical student perspectives on global health in the context of a new global health module established in the core medical curriculum at a UK medical school. Results: Fifth year medical students participated in a structured focus group session and semi-structured interviews designed to explore their knowledge and learning about global health issues, as well as their wider perspectives on these issues and their relevance to professional development. While perspectives on global health ranged from global health 'advocate' to 'sceptic', all of the students acknowledged the challenges of prioritising global health within a busy curriculum. Conclusions: Students are highly alert to the diverse epistemological issues that underpin global health. For some students, such interdisciplinarity is fundamental to understanding contemporary health and healthcare. For others, global health is merely a topic of geographic relevance. Furthermore, some students appeared to accept global health as a specialist area only relevant to professionals working overseas, while others considered it to be an essential part of working in the globalised world and therefore relevant to all medical professionals. Students also clearly noted that including 'soft' subjects and more discursive approaches to teaching and learning often sits awkwardly in a programme where 'harder' forms of knowledge and didactic methods tend to dominate. This suggests that more work needs to be done to explain the relevance of global health to medical students at the very beginning of their studies. abstract_id: PUBMED:37780838 Strengthening Mentorship in Global Health for US Medical Students. US medical students demonstrate strong interest in receiving global health training. In 2012, the Center for Global Health (CGH) at the University of Illinois College of Medicine (UICOM) developed a Global Medicine (GMED) program to match this interest. From its initiation, mentorship has been a key component of the GMED program. More recently, this has been strengthened by applying additional evidence-informed approaches toward mentoring. These include the "mentor up" approach, a "network of mentors," and an individualized development plan (IDP). Applying these changes were associated with increases in the number of student abstract presentations and peer-reviewed journal publications. Mentorship based upon evidence-informed approaches should be a key component of global health education in academic medical centers. abstract_id: PUBMED:37041577 Global Health in Undergraduate Education: Knowledge, Attitude, and Practice of Sudanese Medical Students towards Global Health Education: a cross-sectional study. Introduction: Globalization and other relevant phenomena such as healthcare workforce, ageing of the population, brain drain etc. all necessitate medical curricula to transcend national medicine and encompass a more global approach. 
This is especially true in the context of developing countries which currently act as passive recipients of ongoing global decisions, health inequities or pandemics. The aim of this research was to study the Knowledge, Attitude, and Practice of Sudanese medical students towards global health education and the impact of extra-curricular activities on their knowledge and attitudes. Methods: This was a descriptive, cross-sectional, institution-based study. The study was conducted at five Sudanese universities and participants were selected using systematic random sampling. An online-based, self-administered questionnaire was used; responses were collected between November 2019 and April 2020, and data were analyzed using SPSS version 25. Results: A total of 1176 medical students were involved. The study revealed a poor level of knowledge in 72.4% of respondents, and only 2.3% demonstrated a good level of knowledge. Mean knowledge scores varied slightly between universities and correlated positively with the student's year of study. Regarding attitude, the results demonstrated the high level of interest of medical students in global health, their agreement on including global health in their official medical education curriculum (64.8%) and their consideration of including global health as part of their future career (46.8%). Conclusion: The study concluded that there is a knowledge gap among Sudanese medical students regarding global health education, although students showed good attitudes and willingness to include global health in their official curriculum. Recommendations: Global health education should be implemented in the official curricula of Sudanese universities, and universities should form global partnerships and expand learning and teaching opportunities in this field. abstract_id: PUBMED:38462238 Analysis of the perceptions, competencies, and educational needs for global health among Korean medical students. Purpose: The purpose of this study was to examine perceptions of global health education (GHE) among medical students and their involvement in global health activities and identify priorities of educational needs for developing GHE programs. Methods: This study was cross-sectional and conducted through an online survey for medical students. The participants were students attending medical schools nationwide, and the final analysis target was 678. The survey developed questionnaires necessary for research purposes regarding global health-related experiences and perceptions, level of awareness of global health competencies (GHC), and needs assessments. The data were analyzed using frequency analysis, the chi-square test, the independent t-test, the Borich Needs Assessment Model, and the Locus for Focus Model. Results: In total, 60.6% (411/678) agreed on the need for GHE, whereas 12.1% (82/678) agreed on the appropriateness of GHE in the current medical school curriculum, indicating a perception gap between the necessity and the status. At the current level of awareness of global health and GHC, we identified statistically significant differences according to gender, participation in global health activities, and GHE. In the analysis of the educational needs of GHC, all items of GHC had statistically significant differences between the importance level and the current level, and priorities were derived. The competency with the highest priority was domain A (Global Burden of Disease).
Conclusion: We expect the findings of this study to be used in Korean medical education as fundamental data for building a future research foundation for GHE and for discussing systematic, GHC-based GHE. abstract_id: PUBMED:24334202 To the point: obstetrics and gynecology global health experiences for medical students. This article, from the To the Point series prepared by the Association of Professors of Gynecology and Obstetrics Undergraduate Medical Education Committee, provides educators with an overview of considerations for obstetrics and gynecology global health experiences for the medical student. Options for integration of obstetrics and gynecology global health into undergraduate medical curricula are discussed. Specific considerations for global health clinical experiences for medical students, including choosing a clinical location, oversight and mentorship, goals and objectives, predeparture preparation, and evaluation, are reviewed. abstract_id: PUBMED:34167532 Global health education for medical students in Italy. Background: Global health education (GHE) in Italy has spread since the first decade of the 21st century. The presence of global health (GH) courses in Italy was monitored from 2007 to 2013. In 2019, a new survey was proposed to assess the availability of educational opportunities in Italian medical schools. Methods: An online survey was carried out using a questionnaire administered to a network of interested individuals with different roles in the academic world: students, professors, and members of the Italian Network for Global Health Education. The features of courses were analysed through a score. Results: A total of 61 responses were received from affiliates of 33 out of the 44 medical schools in Italy. The national mean of GH courses for each faculty was 1.2, reflecting an increase from 2007. The courses increased nationwide, resulting in a dispersed GHE presence in northern, central and southern Italy. One of the most critical points was related to the nature of "elective" courses, which were not mandatory in the curricula. Enrollees tended to be students genuinely interested in GH issues. Some community and service-learning experiences, referred to as GH gyms, were also detected at national and international levels. Conclusions: GHE has been spreading in Italy in line with the vision of the Italian Network for Global Health Education. Although progress has been made in dispersing GH courses around the country, more academic commitment is needed to include GH in the mandatory curricula of medical schools and other health faculties. Answer: Yes, medical students do want to learn about global health. A study revealed a high demand among medical students for global health teaching, particularly on women's and children's health in low-income countries. The study found that 91% of respondents thought such teaching would be important, but only 14% thought they had received sufficient teaching on global health and on women's and children's health in low-income countries (PUBMED:24848658). Another study showed that taking global health classes in school was associated with a higher likelihood of developing an interest in global health among medical students (PUBMED:37821889).
Furthermore, a qualitative study of medical students' views on global health teaching within the undergraduate medical curriculum found that students are highly alert to the diverse epistemological issues that underpin global health and acknowledge the challenges of prioritizing global health within a busy curriculum (PUBMED:31146756). Additionally, a cross-sectional study in Sudan concluded that there is a knowledge gap among medical students regarding global health education, although students showed good attitudes and willingness to include global health in their official curriculum (PUBMED:37041577). Lastly, an analysis of Korean medical students' perceptions indicated that 60.6% agreed on the need for global health education, but only 12.1% agreed on the appropriateness of global health education in the current medical school curriculum, suggesting a perception gap between the necessity and the status (PUBMED:38462238). These findings collectively demonstrate that medical students have a strong interest in global health education and recognize its importance in their professional development.
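The odds ratios cited above (PUBMED:37821889) come from a logistic regression of global-health interest on course exposure and student background. As a rough illustration of how such an odds ratio and its 95% confidence interval are derived from a fitted coefficient, the following Python sketch fits a logistic model to synthetic data; the variable names, effect sizes, and data are illustrative assumptions, not the study's actual dataset or model specification.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2450  # cohort size reported in PUBMED:37821889

    # Synthetic predictors (illustrative only): course exposure and academic year
    took_course = rng.integers(0, 2, n)
    academic_year = rng.integers(1, 7, n)

    # Synthetic outcome loosely shaped like the reported directions of effect
    logit = -1.0 + 0.25 * took_course - 0.1 * academic_year
    p = 1 / (1 + np.exp(-logit))
    interested = rng.binomial(1, p)

    X = sm.add_constant(np.column_stack([took_course, academic_year]))
    fit = sm.Logit(interested, X).fit(disp=0)

    # Odds ratio and 95% CI for the course-exposure coefficient
    beta, se = fit.params[1], fit.bse[1]
    odds_ratio = np.exp(beta)
    ci = np.exp([beta - 1.96 * se, beta + 1.96 * se])
    print(f"OR for taking a course: {odds_ratio:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")

Exponentiating the coefficient and the endpoints of its Wald interval is the standard route from a logistic fit to the "OR (95% CI)" format the abstracts report.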
Instruction: Histopathologic examination of routine tonsil and adenoid specimens: is it a necessary approach? Abstracts: abstract_id: PUBMED:9260756 Value-based pathology: a cost-benefit analysis of the examination of routine and nonroutine tonsil and adenoid specimens. To study the cost-effectiveness of the histologic examination of tonsil and adenoid specimens, the histologic diagnoses for all routine (2,700) and nonroutine (71) tonsil and adenoid specimens during a 10-year period were reviewed. There were 27 routine cases (1%) and 56 nonroutine cases (79%) with a diagnosis other than normal, tonsillitis, or hyperplasia. Twelve of the 27 routine cases did not have a significant clinical history, and a potentially significant pathologic diagnosis was made in only 6 cases; in none of these cases did the pathologic diagnosis affect patient care. In all nonroutine cases, the pathologic diagnosis affected patient care. The average charge per case to detect potentially significant disease in routine and nonroutine cases was $64,718 and $525, respectively. We conclude that histologic examination of nonroutine cases is cost-effective, whereas in most routine cases with adequate clinical history, histologic examination is not cost-effective. abstract_id: PUBMED:22548265 Histopathologic examination of routine tonsil and adenoid specimens: is it a necessary approach? Objectives: This study aims to investigate whether it is necessary to perform histopathologic examination of the specimens of tonsillectomy and/or adenoidectomy. Patients And Methods: In this retrospective and multicenter study, 1021 pediatric and adult patients (557 males, 464 females; median age 8 years; range 5 to 13 years) who underwent adenoidectomy and/or tonsillectomy were included. Of the patients, 809 (79.3%) were pediatric, while 212 (20.7%) were adult. Age, gender, histopathologic diagnosis and risk for malignancy in the patients with a malignant diagnosis were reviewed using available patient records. Results: Adenotonsillectomy was performed on 396 patients (38.8%), tonsillectomy on 266 patients (26%) and adenoidectomy on 359 patients (35.2%). Of the 1021 patients, 1011 (99%) received a benign histopathologic diagnosis, while 11 (1%) received a malignant diagnosis. Malignant diagnoses were present only in adult patients. All patients diagnosed with a malignancy had one or more preoperative risk factors. No unexpected malignant diagnosis was found in any of the patients without preoperative risk factors. Conclusion: We concluded that histopathologic diagnosis may not be required for patients without preoperative risk factors, particularly pediatric patients. abstract_id: PUBMED:27941543 Is Routine Histopathologic Examination Necessary Following Tonsillectomy and/or Adenoidectomy Procedures in Pediatric Patients? Tonsillectomy and/or adenoidectomy is the most common surgical procedure in ENT practice, and the malignancy rate of tonsillar tissue is very low. The purpose of the present study is to investigate the necessity of routine histopathologic examinations following tonsillectomy and/or adenoidectomy surgeries. A total of 385 patients between the ages of 5 and 16 years who were admitted to our ENT clinic and underwent adenoidectomy and/or tonsillectomy between October 2008 and July 2015 participated in the study. Patient data such as age, sex, indication for surgery, and histopathologic findings were recorded prospectively. Histopathologic findings were analyzed for hyperplastic lymphoid parenchyma and malignancy.
No malignancies were detected in the studied patients and all had hyperplastic lymphoid parenchyma. Unilateral tonsillar swelling, significant lesions on the tonsil or adenoid tissue, malignancy history in the head and neck region, or a mass in the neck region with weight loss must be examined histopathologically; however, we conclude that there is no need to perform histopathologic examination for patients for whom there is no clinical suspicion. A meta-analysis is needed in this regard. abstract_id: PUBMED:19944467 Optimal utilization of histopathologic analysis of tonsil and adenoid specimens in the pediatric population. Objective: To evaluate the optimal utilization of histopathologic analysis of tonsil and adenoid specimens in the pediatric population. Methods: A retrospective review was performed on 7837 tonsil and adenoid specimens submitted from January 2004 to April 2008. The records were reviewed for the patients' age, sex, and pathologic analysis. The time and cost per analysis of each specimen were determined. Results: Histopathology was performed on 347 specimens based on clinical suspicion by the surgeon, a difference of 0.5 cm or more among tonsils, gross abnormalities, and history of malignancy, transplant, or immunocompromise. Malignancy was diagnosed in 0.026% of patients. Post-transplant lymphoproliferative disease was diagnosed in 6 of 24 immunocompromised patients. The use of these criteria resulted in a savings of $518,088.47 and 461 h of dedicated technician time per year. Conclusions: Histologic examinations in selected specimens should be based on specific criteria that should be determined by each hospital based on hospital size, finances and input from their pathologists and otolaryngologists. abstract_id: PUBMED:36167605 Unilateral tonsil enlargement in children and adults: is routine histology tonsillectomy warranted? A multi-centre series of 323 patients. Background: This study aimed to establish whether histology tonsillectomy is justified for unilateral tonsil enlargement. Methods: A retrospective review was conducted of histology tonsillectomies in three health organisations over five years, with strict exclusion criteria, focusing on benign-appearing unilateral tonsil enlargement. Results: Ninety paediatric and 233 adult cases were included. No paediatric cases and five adult cases of malignancy were detected. All malignant cases presented with other symptoms. Using binary logistic regression, a history of rapid unilateral tonsil enlargement was the only factor found to be significantly associated with malignant outcome. Thirty-three per cent of subjectively larger tonsils were smaller on post-operative histological measurement. Of the cases, 12.1 per cent re-presented with post-tonsillectomy bleeding. Conclusion: The authors recommend avoiding histology tonsillectomy for unilateral tonsil enlargement unless 'red flag' signs of malignancy are present, with particular attention to rapid unilateral tonsil enlargement. This study demonstrated a discrepancy between clinical examination findings and true tonsil asymmetry; there may be a role for cross-sectional imaging prior to histology tonsillectomy in high-risk patients. abstract_id: PUBMED:14608541 The adequacy of gross pathological examination of routine tonsils and adenoids in patients 21 years old and younger.
Most hospitals microscopically examine all routine tonsil and adenoid specimens from healthy pediatric patients with recurrent infections or obstructive sleep apnea. Concern over missing the rare unsuspected, significant diagnosis propagates this practice. Careful gross examination for asymmetry and clinical findings should obviate the need for routine microscopic examination of tonsil and adenoid specimens in patients aged 21 years and younger. A retrospective study was conducted using the SNOMED database of 4070 patients aged 21 years or younger who underwent tonsillectomy and/or adenoidectomy between 1970 and July 2001 at the University of Florida. The age distribution of the study group was 0 to 5 years (52%), 6 to 12 years (37%), and 13 to 21 years (11%). Specimens consisted of tonsils only (15%), tonsils and adenoids (40%), and adenoids only (45%). Clinically significant diagnoses were diagnoses that impacted the care of patients and included malignancies and some infections. Non-clinically significant diagnoses included normal, acute or chronic tonsillitis, and tonsillar hyperplasia. Clinically significant pathological processes were seen in the tonsil or adenoid specimens of 3 of the 4070 patients. These 3 cases included a 2-year-old male with Burkitt's lymphoma, a 19-year-old male with non-Hodgkin's lymphoma (small noncleaved cell, non-Burkitt's type), and an 11-year-old male with a probable viral process but in whom a lymphoma could not be absolutely excluded. All 3 of these patients had signs and symptoms, including significant cervical lymphadenopathy, meriting microscopic analysis of the specimens. In conclusion, microscopic examination of all routine tonsils and adenoids for individuals 21 years or younger is not indicated. Gross examination is still recommended. Clinical suspicion and specimen asymmetry should be used to determine when thorough histological examination is merited. abstract_id: PUBMED:12738918 Comparison of adenoid and tonsil core cultures in chronic adenotonsillitis Objectives: Tonsil and adenoid core cultures were compared and beta-lactamase producing bacteria were determined in patients with chronic adenotonsillitis. Patients And Methods: Thirty-two patients (21 boys, 11 girls; mean age 5 years) with chronic adenotonsillitis underwent elective adenotonsillectomy. The core swabs of tonsil and adenoid tissues were obtained under sterile conditions and were inoculated on 5% sheep blood agar, eosin methylene blue agar, and chocolate agar plates. The frequency of beta-lactamase producing isolates was assessed. Results: Staphylococcus aureus was the most common pathogen both in tonsil and adenoid core cultures. Among pathogenic bacteria isolated from tonsil (n=27) and adenoid (n=22) cultures, the same strains were isolated in 20 cultures (75%). The frequencies of beta-lactamase producing bacteria were 44% and 41% in tonsil and adenoid core cultures, respectively. S. aureus was associated with beta-lactamase production in 88% and 100% in tonsil and adenoid tissues, respectively. Conclusion: Differences between tonsil surface and core bacterial flora may have implications in the etiopathogenesis and treatment of chronic tonsillitis. The range of species isolated seems to be similar for both chronic tonsillitis and adenoiditis. abstract_id: PUBMED:29605362 Can the number of eosinophils in adenoid and tonsil tissue determine the allergy in children? Objective: Previous reports have indicated the association of allergy with Waldeyer's ring.
The aim of this prospective study was to evaluate the correlation between allergy and the number of eosinophils in adenoid and tonsil tissue. Methods: 101 children who underwent adenoidectomy and/or tonsillectomy were prospectively enrolled in this study. 46 children who had a positive skin-prick test for at least one allergen in the panel were classified as sensitized, and 55 children were classified as nonsensitized based on the skin-prick test. Hematoxylin-eosin stained sections were examined under 400× magnification in a blinded fashion in 10 random sections for all samples, and the groups were compared. Results: The number of eosinophils in adenoid and tonsil tissue was significantly higher in sensitized patients. The numbers of eosinophils in the adenoid and tonsil samples were also interrelated (p < 0.001, kappa coefficient: 0.617). As a result of ROC analysis, patients with a cut-off of ≥5 eosinophils were in the sensitized group for adenoid samples [Sensitivity value: 72.09 and specificity value: 91.84]. Patients with a cut-off of ≥3 eosinophils were in the sensitized group for tonsil samples [Sensitivity value: 52.94 and specificity value: 92.11]. Conclusion: Presence of sensitization can be distinguished by looking at the number of eosinophils in the adenoid and tonsil tissue. It can be used to determine whether the patient is allergic or not. Examination of the routine hematoxylin-eosin stained adenotonsillar specimen for eosinophilia will guide the diagnosis and treatment of allergic rhinitis and also reduce the cost considerably. abstract_id: PUBMED:17762791 Routine histologic examination is unnecessary for tonsillectomy or adenoidectomy. Objective: To determine whether the current practice and incurred cost of histologic examination of tonsillectomy and adenoidectomy specimens is warranted. Study Design: Review article based on medical literature. Subjects And Methods: A retrospective PubMed review of all pertinent literature regarding tonsillectomy, adenoidectomy, and related surgical pathology was conducted. References of the articles obtained were reviewed for additional sources. Results: Twenty studies report 54,901 patients and found 54 malignancies (0.087% prevalence). Of these, 48 (88% of the patients) had suspicious features such as tonsillar asymmetry, cervical lymphadenopathy, or abnormal tonsil appearance, preoperatively. The remaining six patients without any suspicious features (better representing true occult malignancy) were 0.011% of the total cases. Conclusion: Submission of tonsillectomy, adenoidectomy, or both specimens is warranted only when patients demonstrate findings associated with malignancy: tonsillar asymmetry, history of cancer, neck mass, tonsil firmness or lesion, weight loss, and constitutional symptoms. abstract_id: PUBMED:27871535 Lingual Tonsil Hypertrophy: rescuing the airway with videolaryngoscopy. Lingual tonsils are lymphatic tissues located at the base of the tongue that may hypertrophy, causing difficulty and sometimes inability to ventilate or intubate during anesthesia. Routine airway assessment fails to diagnose lingual tonsil hypertrophy. There is limited experience with use of videolaryngoscopy in cases of lingual tonsil hypertrophy. We present a case of difficult airway due to unanticipated lingual tonsil hypertrophy successfully managed by atypical video laryngoscope positioning. Answer: The necessity of histopathologic examination of routine tonsil and adenoid specimens has been debated in the literature.
Several studies have investigated the cost-effectiveness and clinical impact of routine histologic examination of these specimens. One study reviewed the histologic diagnoses of 2,700 routine and 71 nonroutine tonsil and adenoid specimens over a 10-year period and found that in routine cases, a potentially significant pathologic diagnosis was made in only 6 cases, none of which affected patient care. In contrast, all nonroutine cases had a pathologic diagnosis that affected patient care. The study concluded that histologic examination of nonroutine cases is cost-effective, whereas in most routine cases with adequate clinical history, it is not cost-effective (PUBMED:9260756). Another study involving 1021 patients who underwent adenoidectomy and/or tonsillectomy found that 99% received a benign histopathologic diagnosis, and the 1% with a malignant diagnosis were all adults with preoperative risk factors. No unexpected malignant diagnoses were found in patients without preoperative risk factors, leading to the conclusion that histopathologic examination may not be required for patients without preoperative risk factors, particularly in pediatric patients (PUBMED:22548265). A similar study with 385 pediatric patients found no malignancies and suggested that histopathologic examination is not necessary for patients without clinical suspicion (PUBMED:27941543). Another retrospective review recommended that histologic examinations in selected specimens should be based on specific criteria, which could result in significant savings in time and cost (PUBMED:19944467). A study focusing on unilateral tonsil enlargement concluded that histology tonsillectomy should be avoided unless 'red flag' signs of malignancy are present, especially rapid unilateral tonsil enlargement (PUBMED:36167605). Additionally, a retrospective study of 4070 pediatric patients suggested that microscopic examination of all routine tonsils and adenoids is not indicated, and gross examination should suffice unless clinical suspicion and specimen asymmetry warrant thorough histological examination (PUBMED:14608541). In summary, the consensus among these studies is that routine histopathologic examination of tonsil and adenoid specimens may not be necessary, especially in pediatric patients without preoperative risk factors or clinical suspicion of malignancy.
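A recurring pattern in the abstracts above is a diagnostic threshold chosen to trade sensitivity against specificity, such as the eosinophil cut-offs of ≥5 (adenoid) and ≥3 (tonsil) in PUBMED:29605362. One standard way to pick such a cut-off from an ROC curve is Youden's J statistic (sensitivity + specificity - 1). The Python sketch below applies it to synthetic counts sized like that study's groups (46 sensitized, 55 nonsensitized); the count distributions are invented for illustration, and the abstract does not state that the authors used Youden's criterion.

    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(1)

    # Synthetic eosinophil counts: sensitized children skew higher (illustrative values)
    counts_sensitized = rng.poisson(6, 46)
    counts_nonsensitized = rng.poisson(1, 55)

    y_true = np.r_[np.ones(46), np.zeros(55)]
    scores = np.r_[counts_sensitized, counts_nonsensitized]

    fpr, tpr, thresholds = roc_curve(y_true, scores)

    # Youden's J: pick the threshold maximizing sensitivity + specificity - 1
    j = tpr - fpr
    best = np.argmax(j)
    print(f"cut-off >= {thresholds[best]:.0f}: "
          f"sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%}")

With real slide counts in place of the synthetic arrays, the same scan would reproduce the kind of cut-off-plus-operating-point statement quoted in the abstract.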
Instruction: Is preoperative computed tomographic density measurement of soft tissues helpful in the diagnosis of cholesteatoma? Abstracts: abstract_id: PUBMED:23342551 Is preoperative computed tomographic density measurement of soft tissues helpful in the diagnosis of cholesteatoma? Objectives: We undertook to verify the usefulness of computed tomography Hounsfield units (HU) in differentiating cholesteatoma from inflammatory tissue. Methods: In 91 enrolled cases, the lesions were classified according to the gross pathology, and the specific locations of each lesion were documented by 1 surgeon within 1 day after the operation. The densities in HU of cholesteatoma and non-cholesteatoma lesions were retrospectively measured 3 times by the same examiner, and the difference between the two groups was analyzed. The interobserver reliability among the 3 examiners was assessed to verify the confidence level of the HU measurements in preoperative detection of cholesteatoma. Results: The mean HU values of cholesteatoma were 35.7 to 66.6 HU, and those of non-cholesteatoma lesions were 32.9 to 51.3 HU. A general linear model-repeated-measures analysis of variance did not show any significant difference between the cholesteatoma and non-cholesteatoma lesions (p = 0.305). The general linear model-repeated-measures analysis of variance showed a significant difference of the measured HU levels among the 3 examiners (p = 0.021). Conclusions: This study showed that the HU values on preoperative computed tomography did not suffice for the detection of cholesteatoma lesions. A clinician's physical examination together with an interpretation of computed tomography is still the "gold standard" method. abstract_id: PUBMED:17419523 Value of temporal bone density measurements in otosclerosis patients with normal-appearing computed tomographic scan. Objectives: Previous studies demonstrated that otosclerosis diagnosis benefits from temporal bone density measurements. We sought to assess bone density measurements of the fissula ante fenestram (FAF) in normal patients, in patients with otosclerosis, and in patients with cholesteatoma. We discuss the value of temporal bone density measurements in patients with otosclerosis who have a normal-appearing computed tomographic (CT) scan. Methods: This was a prospective case-control study in which 219 temporal bones (123 adults, 18 to 84 years of age) were included between November 1, 2002, and April 30, 2004. All patients underwent a CT scan of the temporal bones. Axial views were obtained with density measurement of the FAF. Results: The FAF density was significantly different (p < .0001) in the otosclerosis group (n = 119) compared to the control group (n = 100). There was no significant difference between the otosclerosis group with a normal-appearing CT scan and the control group (p = .64). Conclusions: From our results, it may be suggested that 1) temporal bone density measurements seem not to be strictly comparable between CT scan devices; and 2) temporal bone density measurements of the FAF did not allow the diagnosis of otosclerosis when the CT scan appeared normal. abstract_id: PUBMED:20434800 Usefulness of computed tomography Hounsfield unit density in preoperative detection of cholesteatoma in mastoid ad antrum. Purpose: This study was performed to evaluate the usefulness of Hounsfield unit (HU) to better distinguish cholesteatoma from other inflammatory conditions in the mastoid ad antrum before primary mastoid surgery. 
Materials And Methods: We enrolled 82 patients who underwent tympanomastoidectomy for treatment of chronic otitis. Forty-one patients were pathologically diagnosed with cholesteatoma, whereas the others were diagnosed with inflammatory granulation. These lesions were confirmed, and HU was measured in preoperative computed tomography. The difference in HU between cholesteatoma and non-cholesteatoma tissues was analyzed, and the improvement in the diagnosis of cholesteatoma after inclusion of HU data was calculated. Results: The HU was calculated as 42.68 ± 24.42 in the cholesteatoma group and 86.07 ± 26.50 in the non-cholesteatoma group. The difference between the 2 groups was statistically significant (Student t test, P < .01). By applying the HU, the sensitivity (51.2%-80.5%), specificity (80.5%-87.8%), positive predictive value (72.4%-86.8%), and negative predictive value (62.3%-81.8%) to diagnose cholesteatoma improved. Conclusions: The HU density was found to be significantly different between cholesteatoma and inflammatory granulation tissue in the mastoid antrum. An improved diagnosis of cholesteatoma was achieved after adjusting for the HU. abstract_id: PUBMED:30464906 Aural Polyp is not Always Due to Chronic Otitis Media (COM): Preoperative Computed Tomographic Scan is Good Pointer for Sinister Lesions. Twenty-five patients with aural polyps who underwent canal wall down mastoidectomy were analysed retrospectively. Histopathological examination revealed cholesteatoma in 22 (88%) patients. However, the histopathological diagnosis in 3 of these patients was of unusual and rare benign tumors of the middle ear cleft: meningioma, neurilemmoma and capillary hemangioma. Review of the preoperative High Resolution Computed Tomography (HRCT) temporal bone revealed an unusual picture in all of the three cases. Features noted were: widening of the jugular foramen (meningioma), destruction of the anterior wall of the mesotympanum (neurilemmoma), and an enhancing soft tissue density lesion (capillary hemangioma). Further, there was only partial loss of pneumatisation of the mastoid air cells in all of the 3 cases. It was observed that though HRCT temporal bone is a commonly advised investigation in patients of chronic otitis media (COM) with aural polyp, meticulous interpretation may reveal unusual features pointing towards a sinister diagnosis. Conclusion: Aural polyp with preservation of pneumatisation of mastoid air cells points towards a diagnosis other than COM. abstract_id: PUBMED:6867344 Computed tomography of the middle ear in the evaluation of cholesteatomas and other soft-tissue masses: comparison with pluridirectional tomography. Computed tomographic (CT) scans and tomograms of 60 patients with various soft-tissue masses of the middle ear, including 30 with cholesteatomas, were studied. CT produced excellent images of middle ear soft-tissue masses and appears to be the diagnostic method of choice for cholesteatomas, glomus tympanicum tumors, and other soft-tissue masses. In one patient CT demonstrated pneumolabyrinth resulting from postsurgical fracture of the footplate of the stapes. Pneumolabyrinth is a newly reported CT finding in stapes footplate fracture. abstract_id: PUBMED:3257222 Histiocytosis X of the temporal bone: CT findings. The preoperative diagnosis of temporal bone histiocytosis X has been based traditionally on clinical examination, plain radiography, and pluridirectional tomography.
Clinical misdiagnosis is common because otologic findings can mimic those of acute and chronic infectious ear disease. Similarly, plain radiographic and tomographic findings may be confused with those of mastoiditis, cholesteatoma, and temporal bone metastasis. The three cases of histiocytosis X presented here illustrate the advantages of CT compared with traditional radiographic methods in the diagnosis and staging of this disease. Computed tomography clearly delineates osseous involvement, including erosion of the bony labyrinth. Computed tomography also better defines the soft tissue margins of the granulomatous mass in relationship to the central nervous system and extratemporal tissues. abstract_id: PUBMED:23449285 An evaluation of preoperative computed tomography on patients with chronic otitis media. This study aimed to compare the veracity of computed tomography findings on patients undergoing surgery for chronic otitis media (COM) with the surgical findings, and to determine to what extent the preoperative computerized tomography (CT) findings are useful to the surgeon. A series of 56 patients with COM underwent preoperative CT scanning followed by surgical exploration of the middle ear and mastoid. Operative notes were recorded and data collected on the nature of soft tissue masses, the status of the ossicles, presence or absence of facial canal dehiscence and semicircular canal (SCC) dehiscence and the presence or absence of dural plate erosion, and sigmoid sinus thrombosis. Fifty-six patients were recruited in the study, 30 males and 26 females. The age range was from 16 to 67 years with a mean of 26.51 ± 1.4 years. The preoperative CT scan imaging in cases of cholesteatoma, ossicular chain erosion and SCC dehiscence showed good correlation with the intraoperative findings. The specificity of the preoperative CT scan in detecting facial canal dehiscence, dural plate erosion and sigmoid sinus thrombosis in patients with COM was weak. Preoperative computed tomography evaluation is fairly useful, especially in cases of cholesteatoma. According to the results of this study, CT is of value particularly in the definition of cholesteatoma, and in determining ossicular chain erosion and semicircular canal fistula. abstract_id: PUBMED:6727511 Computed tomography in suppurative ear disease: a correlation of surgical and radiographic findings. Forty-two patients with chronic otitis media underwent preoperative CT scanning followed by surgical exploration of the middle ear and mastoid. The CT finding of abnormal soft tissue density associated with bone erosion was highly correlated with the surgical finding of cholesteatoma. By contrast, the total absence of abnormal soft tissue on CT essentially excluded cholesteatoma. However, 50% of all patients had abnormal soft tissue on CT scan not accompanied by bone erosion. In this largest group of patients it was not possible to diagnose or exclude cholesteatoma on the basis of CT findings alone. Also, CT occasionally gave the erroneous impression of lateral semicircular canal fistulization, tegmen tympani erosion, and facial nerve involvement due to volume averaging of these structures with adjacent soft tissues. CT scan has a role in the evaluation of selected patients with chronic otitis media, but must be interpreted cautiously in view of its limitations and numerous pitfalls.
abstract_id: PUBMED:12664235 The role of high-resolution computed tomography (HRCT) and magnetic resonance imaging (MRI) in the diagnosis of preoperative and postoperative complications caused by acquired cholesteatomas The role of high-resolution computed tomography (HRCT) and magnetic resonance imaging (MRI) in the diagnosis of preoperative and postoperative complications caused by acquired cholesteatomas will be described in this paper. The pre- and postoperative imaging of the temporal bone was performed with HRCT and MRI. HRCT and MRI were performed in the axial and coronal planes. MRI was done with T2-weighted and T1-weighted sequences both before and after the intravenous application of contrast material. All imaging findings were confirmed clinically or surgically. The preoperative cholesteatoma-caused complications depicted by HRCT included bony erosions of the ossicles, scutum, facial canal in the middle ear, tympanic walls including the tegmen tympani, and of the labyrinth. The preoperative cholesteatoma-caused complications depicted by MRI included signs indicative of labyrinthitis, and brain abscess. Postoperative HRCT depicted bony erosions caused by recurrent cholesteatoma, bony defects of the facial nerve and of the labyrinth, and a defect of the tegmen tympani with a soft tissue mass in the middle ear. Postoperative MRI delineated neuritis of the facial nerve, labyrinthitis, and a meningo-encephalocele protruding into the middle ear. HRCT and MRI are excellent imaging tools to depict either bony or soft tissue complications or both if caused by acquired cholesteatomas. According to our findings and to the literature, HRCT and MRI are complementary imaging methods to depict pre- or postoperative complications of acquired cholesteatomas if these are suspected by clinical examination. abstract_id: PUBMED:7745552 Preoperative X-ray computed tomographic evaluation of major aplasia of the ear in children Purpose: To describe computed tomography (CT) findings in congenital aural atresia (CAA) and to illustrate the impact of these findings in the preoperative evaluation. Materials And Methods: Sixty-seven cases of congenital aural atresia (bilateral in 10) in 57 children were studied using high-resolution CT. Sections 1.5 mm thick were obtained in the coronal and axial planes without sedation (mean age: 9.6 years). Results And Discussion: A narrow bony external auditory canal (EAC) was present in 24% of the cases. In one of these cases, the EAC contained a cholesteatoma and was consequently a clear indication for surgery. A hyperpneumatized mastoid (22%), a posterior position of the temporomandibular joint (16%), and a bulging medial temporal fossa (12%) made the operation much more difficult. The tympanic cavity was small in 68% of the cases, normal in 28% and absent in 4% of the cases, without any detectable ossicular chain. Ossicular chain anomalies were present in 91% of the cases. This consisted most frequently of a fusion of the malleus and the incus (76%) with or without fusion to the tympanic wall (33%). Lateral and anterior displacement of the descending portion of the facial nerve was present in 62%. Oval and round windows were normal in 86% of the cases. A soft tissue opacity (33%) in the tympanic cavity made it difficult to evaluate the stapes, the incudostapedial articulation, and the facial nerve. Simultaneous abnormalities of the inner ear were exceptional (1 case).
Conclusion: High-resolution CT is the best method for evaluating CAA and for guiding the planning of surgical correction. Answer: The usefulness of preoperative computed tomographic (CT) density measurement of soft tissues in the diagnosis of cholesteatoma has been investigated in several studies, but the results are mixed. One study found that the Hounsfield unit (HU) values on preoperative CT did not significantly differ between cholesteatoma and non-cholesteatoma lesions, suggesting that HU values alone may not be sufficient for the detection of cholesteatoma (PUBMED:23342551). Another study also reported that temporal bone density measurements did not allow the diagnosis of otosclerosis when the CT scan appeared normal, indicating that bone density measurements might not be reliable for certain conditions (PUBMED:17419523). However, another study showed that the HU density was significantly different between cholesteatoma and inflammatory granulation tissue in the mastoid antrum, and that the diagnosis of cholesteatoma improved after adjusting for the HU, with increased sensitivity, specificity, positive predictive value, and negative predictive value (PUBMED:20434800). Additionally, a study on patients with aural polyps suggested that preoperative high-resolution CT (HRCT) of the temporal bone can reveal unusual features that may point towards a diagnosis other than chronic otitis media (COM), such as cholesteatoma or rare benign tumors (PUBMED:30464906). In contrast, another study indicated that CT scans and tomograms were highly correlated with surgical findings of cholesteatoma, suggesting that CT is the diagnostic method of choice for cholesteatomas and other soft-tissue masses (PUBMED:6867344). Overall, while some studies suggest that CT density measurements can be helpful in differentiating cholesteatoma from other conditions, the reliability of this method may vary, and it should not be used as the sole diagnostic tool. Physical examination and interpretation of CT images in conjunction with clinical findings remain the "gold standard" for diagnosing cholesteatoma (PUBMED:23342551).
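The sensitivity, specificity, positive predictive value and negative predictive value quoted for the Hounsfield-unit approach (PUBMED:20434800) all derive from a single 2x2 confusion matrix. The short Python sketch below reproduces that arithmetic; the cell counts are back-calculated from the post-HU percentages under the assumption of 41 cholesteatoma and 41 non-cholesteatoma cases, so they are an illustrative reconstruction rather than figures stated in the paper.

    def diagnostic_metrics(tp, fp, fn, tn):
        """Standard 2x2 confusion-matrix measures of a diagnostic test."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Counts back-calculated from the post-HU percentages in PUBMED:20434800
    # (41 cholesteatoma, 41 non-cholesteatoma cases): 33 TP, 8 FN, 36 TN, 5 FP.
    for name, value in diagnostic_metrics(tp=33, fp=5, fn=8, tn=36).items():
        print(f"{name}: {value:.1%}")

With these reconstructed counts the function returns 80.5%, 87.8%, 86.8% and 81.8%, matching the reported post-HU figures, which illustrates how the four quantities are linked through the same table.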
Instruction: Are the recent trends in liver cirrhosis mortality affected by the changes in alcohol consumption? Abstracts: abstract_id: PUBMED:9273913 Are the recent trends in liver cirrhosis mortality affected by the changes in alcohol consumption? Analysis of latency period in European countries. Objective: A striking reduction of alcohol-related problems, such as liver cirrhosis death rates, has been observed in many western countries in recent years. It might be expected that these declines follow earlier reductions in per capita alcohol consumption with a lag, since many years of intake are required for cirrhosis of the liver to develop (lag theory). The main aim of this study is to verify the validity of the lag theory, taking as target populations those resident in European countries. Method: Changes in liver cirrhosis death rates in 22 European countries between 1970 and 1989 were regressed onto changes in per capita alcohol consumption (1961-89) to evaluate the latency period between trends of these variables. Results: Eastern countries had a latency period between trends in alcohol consumption and in mortality rates of many years, whereas in northern, western and southern Europe cirrhosis mortality rates were explained by their relationship with per capita alcohol consumption, which lagged a few years. In some countries, an immediate contemporary appearance of the two phenomena was observed. Conclusions: These results suggest that the lag theory cannot fully explain the relationship between changing alcohol consumption and cirrhosis mortality and that other factors, such as alcoholism treatment rates, are involved in the wide geographical variability of the latency periods. abstract_id: PUBMED:35636480 Alarming Trends: Mortality from Alcoholic Cirrhosis in the United States. Background: Alcoholic cirrhosis is an advanced form of alcohol-related liver disease. In the United States, between 2010 and 2016, alcohol-related liver disease was the primary cause of nearly 1 in 3 liver transplants, surpassing hepatitis C. Methods: We utilized the US Centers for Disease Control and Prevention's Wide-ranging Online Data for Epidemiologic Research database to compare trends in mortality from alcoholic cirrhosis in the United States in 1999 and 2019. We defined mortality from alcoholic cirrhosis as International Classification of Diseases code K70.3 (alcoholic cirrhosis of liver). We calculated mortality rates and mortality rate ratios (MRRs) per 100,000 from alcoholic cirrhosis in 10-year age groups from 25 to 85+ as measures of effect and 95% confidence intervals to test for significance. Results: In 1999, there were 6007 deaths from alcoholic cirrhosis among 180,408,769 aged 25-85+ years, yielding a mortality rate of 3.3 per 100,000. In 2019, there were 23,780 deaths from alcoholic cirrhosis among 224,981,167 aged 25-85+ years, yielding a mortality rate of 10.6 per 100,000. The overall MRR of 3.2 was statistically significant (P < .001), and was apparent in each 10-year age group. Conclusions: These alarming trends in mortality from alcoholic cirrhosis in the United States contribute to the formulation of many hypotheses. These require testing in analytic studies designed a priori to do so. Meanwhile, clinical and public health efforts are necessary to curb the epidemics of heavy alcohol consumption and overweight and obesity in the United States that may be contributing to these alarming trends.
abstract_id: PUBMED:9126509 Trends of liver cirrhosis mortality in Europe, 1970-1989: age-period-cohort analysis and changing alcohol consumption. Background: Since the mid-1970s, a striking reduction in alcohol-related problems has been observed in many Western countries. Liver cirrhosis mortality is considered to be a major indicator of alcohol-related problems in the general population. The aim of the present study is to describe liver cirrhosis mortality trends in European countries between 1970 and 1989. Methods: This is a descriptive study on liver cirrhosis mortality in 25 European countries, and in four grouped European regions. A Poisson log-linear age-period-cohort model is used to clarify whether the recent trend in mortality represents a short-term fluctuation or an emerging long-term trend. In addition, a descriptive comparison between trends in per capita alcohol consumption and liver cirrhosis mortality is conducted. Results: In the whole European population and in that of Western and Southern Europe, increasing period effects were observed until the second half of the 1970s, followed by a decline in the subsequent periods. In Eastern Europe the decline in period effects started in the first half of the 1980s, whereas in Northern Europe an increasing period effect was observed until the second half of the 1970s, followed by a stabilization. Similar trends were observed for per capita alcohol consumption. The age effect analysis showed a continuously rising effect in Eastern Europe, whereas an attenuation of the effect at around age 65 years was observed in Western Europe. Intermediate patterns were observed in Southern and Northern Europe. The birth cohort effect suggested that in the Western and Southern populations mortality could continue to decrease over the next decade, while in Eastern and Northern populations mortality is still rising and this will probably continue for the next decade. Conclusions: The age-period-cohort analysis allows targeting of health care and prevention programmes based on future trends. Aetiological and prognostic factors act differently across Europe. A better understanding of the trends would require more detailed information on alcoholism treatment rates, alcohol habits, viral hepatitis infections and other factors involved in the aetiopathogenesis of the disease. abstract_id: PUBMED:3141952 Trends in alcohol-related morbidity and mortality. Two major trends regarding alcohol use and consequences of alcohol abuse in the United States are showing significant improvement. Continued declines are evident in age-adjusted rates of liver cirrhosis mortality, and per capita alcohol consumption is at its lowest level in 15 years. Two other trends, however, are less clear. After declining in 1982 and continuing through 1984, alcohol-related morbidity--as measured by principal diagnoses listed on short-stay, community hospital discharges--showed a slight increase in 1985. Similarly, after declining every year but one since 1981, alcohol-related motor vehicle fatalities showed a significant increase in 1986. The downward trends suggest that progress is being made in efforts to reduce alcohol-related deaths and morbidity, but there are no easy explanations for any of the trends. Reductions in liver cirrhosis death rates may reflect coding changes in liver disease categories, less chronic heavy drinking, or better medical care. Lower per capita alcohol consumption may indicate the public's increased awareness of drinking risks or the aging of the U.S. population.
Ironically, the recent increase in alcohol-related motor vehicle fatalities may reflect stronger enforcement of drunk driving laws and increased BAC (blood alcohol content) testing. abstract_id: PUBMED:26735116 Liver cirrhosis mortality trends in Eastern Europe, 1970-1989. Analyses of age, period and cohort effects and of latency with alcohol consumption. There is evidence that in some Eastern European countries alcohol-related deaths have an important impact on mortality. In the whole European population increasing trends in mortality were observed until the second half of the 1970s, followed by a decline in the following decades. By contrast, in Eastern Europe continuously rising trends have been observed. The aim of the present study is to describe cirrhosis mortality trends in Eastern European countries between 1970 and 1989. This is a descriptive study in seven European countries, compared with Europe as a whole. A Poisson log-linear age-period-cohort model is used to ascertain whether the recent trend in mortality represents a short-term fluctuation or an emerging long-term trend. In addition, changes in cirrhosis death rates were regressed onto changes in per capita alcohol consumption (1961-89) in order to evaluate the latency period between trends. The birth-cohort effects suggested that in Eastern Europe as a whole, and in particular in Hungary, Bulgaria and Romania, mortality will probably increase in the next decade. Eastern European countries showed a latency period between trends in alcohol consumption and in mortality rates of many years, whereas in Europe as a whole the cirrhosis mortality rates were explained by their relationship with per capita alcohol consumption, which lagged by only a few years. Further increases in cirrhosis mortality, and probably in other alcohol-related problems, are expected in several Eastern European countries. Epidemiological studies aimed at estimating the proportion of disease attributable to known risk factors of cirrhosis, and monitoring programmes for viral infections, alcohol consumption, and alcohol-related problems, should be implemented to inform the planning of public health programmes. abstract_id: PUBMED:15980061 Trends in mortality after hospital admission for liver cirrhosis in an English population from 1968 to 1999. Background: Population based mortality rates from liver cirrhosis, and alcohol consumption, have increased sharply in Britain in the past 35 years. Little is known about the long term trends over time in mortality rates after hospital admission for liver cirrhosis. Aims: To analyse time trends in mortality in the year after admission for liver cirrhosis from 1968 to 1999. Subjects: A total of 8192 people who were admitted to hospital in a defined population of Southern England. Methods: Analysis of hospital discharge statistics linked to death certificate data. The main outcome measures were case fatality rates (CFRs) and standardised mortality ratios (SMRs). Results: At 30 days after admission, CFR was 15.9% and the SMR was 93 (86 in men and 102 in women, compared with 1 in the general population). At one year, the overall CFR was 33.6% and SMR was 16.3. There was no improvement from 1968 to 1999 in mortality rates. SMRs were highest for alcoholic cirrhosis of the liver (27.4 at one year) but lower for biliary cirrhosis (11.4) and chronic hepatitis (10.0). Mortality from most of the main causes of death, including accidents and suicides, was increased.
Conclusions: The high mortality rates after hospital admission, and the fact that they have not fallen in the past 30 years, show that liver cirrhosis remains a disease with a very poor prognosis. Increased mortality from accidents, suicides, and mental disorders, particularly among those with alcoholic cirrhoses, indicates that prognosis is influenced by behavioural as well as by physical pathology. abstract_id: PUBMED:10532899 Cirrhosis mortality and per capita consumption of distilled spirits, United States, 1949-1994: trend analysis. Objective: To describe, evaluate, and suggest interpretations for an observed aggregate-level relation between trends in mortality from cirrhosis and per capita consumption of distilled spirits in the United States. Design: Trend analysis using data on US cirrhosis mortality and per capita alcohol consumption. Results: There is a consistent long-term trend relation between mortality from cirrhosis and per capita consumption of distilled spirits in the United States from 1949 to 1994. Two instances of comparatively sharp drops in the consumption of spirits in the 1940s generated mixed results in predicting changes in cirrhosis mortality. Conclusions: An aggregate-level relation between trends in long-term cirrhosis mortality and the consumption of spirits falls considerably short of establishing a direct causal link between the two for individuals. Moreover, two sharp drops in the consumption of spirits generated only mixed results with respect to the short-term trend in cirrhosis. Nevertheless, the observed relation between the consumption of spirits and cirrhosis mortality merits further investigation. abstract_id: PUBMED:35836704 Trends of Readmissions for Gastrointestinal Bleeding After Alcoholic Hepatitis: Analysis of the Nationwide Readmission Database. Background: Alcohol consumption is associated with numerous hepatic manifestations, including alcoholic fatty liver disease, alcoholic hepatitis (AH), and liver cirrhosis. AH is a common and serious complication of alcohol use. Gastrointestinal bleeding (GIB) remains one of the most common causes of death in these patients. In this article, we studied the trends of GIB after AH. Methods: This was a retrospective interrupted trend study. We analyzed the 2010, 2012, 2014, 2016, and 2018 Nationwide Readmission Databases. The first AH hospitalization in the year was marked as index hospitalization. We identified subsequent hospitalizations with GIB within 30 days and marked them as readmissions. A multivariate regression analysis was used to calculate the risk-adjusted odds of trends for GIB readmissions, including esophageal varices bleeding (EVB), upper GIB, lower GIB, and all GIB. Results: The volume of index hospitalizations increased from 10,248 in 2010 to 16,479 in 2018. Similarly, all readmissions increased from 1,838 in 2010 to 3,908 in 2018. Of all readmissions, EVB increased from 3.9% in 2010 to 5.9% in 2018 (odds ratio (OR) trend 1.10; P < 0.001). Readmissions for upper GIB increased from 2.4% in 2010 to 7.8% in 2018 (OR trend 1.22; P < 0.001). On the other hand, lower GIB readmissions decreased from 7.2% in 2010 to 4.7% in 2018 (OR trend 0.95; P = 0.015). There was no statistically significant trend for all GIB readmissions (OR trend 1; P = 0.915). Conclusion: Further studies are needed to evaluate the patterns of lower GIB in patients with liver disease and the recent trends of corticosteroids use in AH patients. 
abstract_id: PUBMED:15798662 Trends in liver disease prevalence in Mexico from 2005 to 2050 through mortality data. Background And Aims: The epidemiology of liver cirrhosis differs across sex, ethnic groups, and geographic regions. In 2000, chronic liver disease was the fifth leading cause of death in Mexico. Accurate knowledge of the demographics of liver disease is essential in formulating health-care policies. Our main aim was to project the trends in liver disease prevalence in Mexico from 2005 to 2050 based on mortality data. Methods: Data on national mortality reported for the year 2002 in Mexico were analyzed. Specific-cause mortality rates were calculated for a selected age population (> 25 years old) and classified by sex and projected year (2005-2050). The following codes of the International Classification of Diseases for liver diseases were included: non-alcoholic chronic liver disease and cirrhosis, alcoholic liver disease, liver cancer, and acute and chronic hepatitis B and C infection. The projected prevalence of a chronic liver disease was estimated using the following equation: P = (ID x T) / [(ID x T) + 1], where P = prevalence, ID = incidence density (mortality rate multiplied by 2), T = median survival with the disease (= 20 years); a worked sketch of this projection appears after this entry. Results: Nearly two million cases of chronic liver disease are expected. Alcohol-related liver diseases remain the most important causes of chronic liver disease, accounting for 996,255 cases in 2050. An emergent syndrome is non-alcoholic liver disease, which will be more important than infectious liver diseases (823,366 vs 46,992 expected cases, respectively). Hepatocellular carcinoma will be the third leading cause of liver disease. Conclusions: Chronic liver disease will be an important cause of morbidity and mortality in the future. Preventive strategies are necessary, particularly those related to obesity and alcohol consumption, to avoid catastrophic consequences. abstract_id: PUBMED:23256091 Alcohol in Greenland 1951-2010: consumption, mortality, prices. Background: Fluctuations in alcohol consumption in Greenland have been extreme since alcohol became available to the Greenland Inuit in the 1950s, increasing from low levels in the 1950s to very high levels in the 1980s - about twice as high as alcohol consumption in Denmark. Since then, consumption has declined, and current consumption is slightly below alcohol consumption in Denmark, while alcohol prices are far above Danish prices. Objective: Description of historical trends and possible causal connections of alcohol prices, alcohol consumption and alcohol-related mortality in Greenland 1951-2010 as a background for the evaluation of the impact of various types of policy. Design: Time series for Greenland 1951-2010 for alcohol prices, consumption and mortality are compiled, and variation and correlations are discussed in relation to various policies aimed at limiting alcohol consumption. Corresponding time series for Denmark 1906-2010 are presented for comparison. Results: The trends in alcohol prices and consumption followed each other rather closely until the 1990s in Greenland and the 1980s in Denmark. At this time, consumption stabilised while prices decreased further, but the effect of prices upon consumption is strong, also in recent years. A trend in Greenlandic mortality similar to consumption is discernible, but not significant. Among alcohol-related deaths cirrhosis of the liver is less prevalent whilst accidents are more prevalent than in Denmark.
Conclusions: The effect of alcohol excise taxes and rationing upon consumption is evident. The stabilisation and subsequent decline in consumption since the mid-1990s, while alcohol prices decreased persistently, does not preclude continued effects of prices. On the contrary, price effects have been neutralised by other stronger causes. Whether these are government anti-alcohol campaigns or a cultural change is not clear. Answer: Yes, the recent trends in liver cirrhosis mortality are affected by changes in alcohol consumption. Studies have shown that there is a relationship between per capita alcohol consumption and liver cirrhosis mortality rates, with variations in the latency period between changes in alcohol consumption and subsequent changes in mortality rates across different regions in Europe (PUBMED:9273913). In Eastern Europe, there is a longer latency period, while in other parts of Europe, such as northern, western, and southern regions, the relationship between alcohol consumption and cirrhosis mortality rates is more immediate or lags by only a few years (PUBMED:9273913; PUBMED:26735116). In the United States, there has been a significant increase in mortality from alcoholic cirrhosis between 1999 and 2019, with the mortality rate more than tripling. This alarming trend suggests a need for further investigation into the contributing factors, which may include heavy alcohol consumption (PUBMED:35636480). The age-period-cohort analysis of liver cirrhosis mortality trends in Europe indicates that alcohol consumption trends and liver cirrhosis mortality are related, with varying patterns observed across different European regions. The analysis suggests that mortality from liver cirrhosis could continue to decrease in Western and Southern Europe, while it may continue to rise in Eastern and Northern Europe (PUBMED:9126509). Overall, the relationship between alcohol consumption and liver cirrhosis mortality is complex and influenced by various factors, including alcoholism treatment rates, alcohol habits, viral hepatitis infections, and other factors involved in the etiopathogenesis of the disease (PUBMED:9126509; PUBMED:9273913). Public health efforts and clinical interventions are necessary to address the trends in heavy alcohol consumption that may be contributing to the observed changes in liver cirrhosis mortality (PUBMED:35636480).
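The projection equation quoted above from PUBMED:15798662, P = (ID x T) / [(ID x T) + 1], is simple enough to verify directly. Below is a minimal sketch of that arithmetic; the mortality rate used in the example is a hypothetical illustration, not a figure taken from the abstract, and the function name is mine.

```python
# Prevalence projection as described in PUBMED:15798662:
#   P = (ID * T) / ((ID * T) + 1)
# where ID = incidence density (mortality rate multiplied by 2)
# and T = median survival with the disease (20 years in the abstract).

def projected_prevalence(mortality_per_100k: float, survival_years: float = 20.0) -> float:
    """Return the projected prevalence as a proportion of the population."""
    incidence_density = 2.0 * mortality_per_100k / 100_000  # per person-year
    id_t = incidence_density * survival_years
    return id_t / (id_t + 1.0)

# Hypothetical example: a chronic liver disease mortality rate of
# 30 deaths per 100,000 adults per year (illustrative only).
print(f"{projected_prevalence(30.0):.4%}")  # -> 1.1858%
```

With a 20-year median survival, the projected prevalence grows almost linearly with the mortality rate at these magnitudes, since ID x T stays well below 1.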
Instruction: The evolution of therapy for dehydration: should deficit therapy still be taught? Abstracts: abstract_id: PUBMED:8692613 The evolution of therapy for dehydration: should deficit therapy still be taught? Objective: To describe the evolution of rehydration therapy for cholera and diarrheal dehydration from its beginning in 1832 to the present. To reaffirm the central role for extracellular fluid (ECF) expansion and question the continued teaching of deficit therapy in many current pediatric texts. Methodology: I reviewed the rationale underlying three treatment strategies: rapid parenteral infusions of saline solutions to restore ECF; deficit therapy to replace specific electrolyte and water losses; and oral rehydration therapy (ORT) to effect both. I used crude mortality rates as the measure of outcomes. Results: (1) Beginning in 1832 for cholera and 1918 for infant diarrheal dehydration, parenteral saline solutions were infused to replace losses of salt and water; they were very effective in salvaging moribund dehydrated patients by quickly restoring ECF volume and renal perfusion. Mortality rates dropped from more than 60% to less than 30%. (2) Deficit therapy as it evolved in the 1950s defined potassium and other fluid and electrolyte deficits and replaced them using specific but complicated fluid and electrolyte replacement regimens. Mortality rates dropped to single digits. (3) ORT, with intravenous expansion of ECF volume when indicated, rapidly corrected specific fluid and electrolyte disorders with a very simple therapeutic regimen. Mortality rates dropped to less than 1%. Conclusions: The simpler, more effective ORT regimen should be taught as standard therapy for diarrheal dehydration. Principles of body fluid physiology should be taught in their own right. abstract_id: PUBMED:34117886 Current and emerging biological therapy in adult-onset Still's disease. Adult-onset Still's disease (AOSD) is a rare, but characteristic non-familial, multi-genic systemic auto-inflammatory disorder, characterized by high spiking fever, salmon-like evanescent skin rash, polyarthritis, sore throat, hyperferritinemia and leucocytosis. The hallmark of AOSD is a cytokine storm triggered by dysregulation of inflammation. Nowadays, with advances in anti-cytokine biologic agents, the treatment of AOSD is no longer limited to NSAIDs, glucocorticoids or conventional synthetic DMARDs. In this review, we focussed on the roles of these cytokines in the pathogenesis of AOSD and summarized the current and emerging biological therapy. abstract_id: PUBMED:29696932 Origin and evolution of the thunder-fire moxibustion therapy By analyzing the origin and evolution of thunder-fire moxibustion therapy, its mysteries and misunderstandings can be clarified, offering a more objective and comprehensive understanding of this ancient therapy to modern readers. Thunder-fire moxibustion therapy may have originated from Taoist magic arts before the Yuan Dynasty and matured in the middle of the Ming Dynasty. Two categories emerged during its long historical evolution. In one category, a peach twig was used as the moxa material rather than a moxa stick, and the Taoist incantation was retained. In the other category, herbal medicine was mixed into the moxa stick, the herbal composition was relatively specified, and the incantation was removed. The moxibustion of the first category vanished at the end of the Ming Dynasty.
The latter continued to develop from the middle of the Ming Dynasty into modern times. Over that period, the herbal composition of the moxa material has been modified, the indications enlarged, and the operation improved. This therapy is still being developed and applied by many doctors today. abstract_id: PUBMED:29416343 Biological therapy of traditional therapy-resistant adult-onset Still's disease: an evidence-based review. Background: Biotherapy is becoming increasingly important in the treatment of adult-onset Still's disease (AOSD). The aim of our study was to evaluate the efficacy and safety of biological therapy for AOSD resistant to traditional therapy. Patients And Methods: The Library of Congress database, PubMed, and the Web of Science Core Collection were used to retrieve relevant articles published in English until March 2017. Only studies published in English were included, and the additional references quoted in these articles were also checked. Articles concerning the efficacy and safety of all the biotherapies in refractory AOSD were evaluated. Results: There were 112 articles available in total; 422 AOSD patients were given at least one biologic. We found that 293 patients (69.43%) had received TNF-α blocking agents (infliximab, etanercept, and adalimumab), 194 patients (45.97%) were treated with IL-1 receptor antagonists (anakinra, rilonacept, and canakinumab), 163 patients (38.63%) were given the IL-6 inhibitor tocilizumab, and 24 patients (5.69%) received rituximab and abatacept. The efficacy and overall tolerability of biological therapy for refractory AOSD were good. Thirty-two of 271 patients given anti-TNF-α therapies (11.81%), 116 patients receiving IL-1 inhibitors (65.54%), 124 patients receiving tocilizumab (76.07%), and 13 patients given other biological therapies (36.11%) achieved remission (a sketch recomputing one of these proportions appears after this entry). Side effects of biologic therapy included infections such as urinary tract infections and soft tissue abscess. Conclusion: Our findings suggest that anakinra and tocilizumab may be good choices for the treatment of refractory AOSD considering the effectiveness and safety. abstract_id: PUBMED:31147820 Autoinflammatory Pathogenesis and Targeted Therapy for Adult-Onset Still's Disease. Adult-onset Still's disease (AOSD) is a rare multisystem autoinflammatory disorder of unknown etiology. AOSD is generally characterized by high spiking fever, arthralgia or arthritis, skin rash, leukocytosis, and hyperferritinemia. Traditionally, AOSD has been treated with non-steroidal anti-inflammatory drugs, corticosteroids, and immunosuppressants. An increasing number of studies have shown that proinflammatory cytokines, such as interleukin-1β, -18, -6, and tumor necrosis factor-α, play key roles in AOSD and may serve as therapeutic targets. In the current review, we provided insights into the roles of these cytokines in the pathogenesis of AOSD and also provided a commentary on the clinical studies of biologic therapy against AOSD. abstract_id: PUBMED:9365737 Still disease Adult onset Still's disease is a systemic disorder of unknown etiology. The diagnosis is difficult and based upon Yamaguchi's criteria after exclusion of infectious diseases, hematologic processes or autoimmune diseases. Clinical manifestations are various. Functional prognosis depends essentially on articular involvement. Vital prognosis depends on either hepatic failure or hematological or infectious complications, or amyloidosis.
Ferritinemia is an important biological parameter which is not included in current criteria. Treatment is not well codified, but steroids represent the most efficient therapy to control fever and systemic manifestations. New treatments and specific markers of adult-onset Still's disease are needed. abstract_id: PUBMED:35726695 Mindfulness as taught in Dialectical Behaviour Therapy: A scoping review. This scoping review considers 11 studies that have focussed on the effect of teaching the mindfulness element of Dialectical Behaviour Therapy (DBT) on clinical outcomes. These articles utilized either mindfulness skills as embedded into the full DBT-Skills programme or a stand-alone mindfulness skills module (DBT-M), as treatment for clinical populations. The review of the research found that clinical application of mindfulness as taught in DBT leads to increases in self-reported mindfulness, especially non-judgemental awareness, along with psychological measures that suggest an increase in mindfulness, for example, improved attention. The studies demonstrated that an increase in mindfulness had a positive effect on some clinical symptoms such as symptoms of Borderline Personality Disorder. Not all of the studies controlled for the effect of group, amount of practice or other elements of DBT therapy. The findings suggest that more needs to be done to establish the underlying mechanisms of change when being taught mindfulness in DBT. abstract_id: PUBMED:8023009 Still's disease in the adult Adult Still's disease (ASD) is an uncommon systemic inflammatory disorder which affects young adults. It is characterized by high spiking fever, vanishing rash, oligopolyarthritis, neutrophilic leucocytosis, negative titers for rheumatoid factor and antinuclear antibodies. Polyserositis, sore throat, uveitis are sometimes present and in one third of the cases it is possible to find hepato-splenomegaly with lymph node enlargement. G. Still first described the disease in children in 1897, and in the adult it was recognized as a nosologic entity more than 70 years later. The diagnosis of ASD is difficult and is possible only after the exclusion of many other diseases. Clinical manifestations are all nonspecific. In particular the presence of adenopathy and hepato-splenomegaly may suggest the possibility of a malignant lymphoma. Important exclusions include many other diseases such as rheumatic fever, periodic fever, and Lyme disease. At the same time a probable diagnosis of ASD should be considered in all cases of high fever with rash, arthritis, neutrophilic leucocytosis or in cases of fever of unknown origin (FUO). The prognosis is considered overall benign. The disease is usually responsive to salicylate treatment, but the addition of corticosteroids or, sometimes, cytotoxic therapy is often required.
abstract_id: PUBMED:37257482 Update on Adult-Onset Still's Disease: Diagnosis, Therapy and Guideline Adult-onset Still's disease (AOSD) is an autoinflammatory disease of unknown etiology which is pathogenetically characterized by an involvement of the innate immune response with activation of neutrophils and an increased secretion of IL-1, IL-6, IL-18, and type 1 interferons. Still's disease may occur at any age with distinct variability in signs and symptoms. Recently, the German Society of Rheumatology (DGRh) has issued an AOSD guideline which recommends diagnosing AOSD based on a characteristic combination of symptoms including intermittent fever, rash, arthralgia, and arthritis after exclusion of infections, neoplasms and other rheumatological conditions. Classification criteria according to Yamaguchi may support the clinical diagnosis. Therapy is recommended to include glucocorticosteroids and methotrexate or ciclosporin; at higher activity levels, the IL-1 receptor antagonist anakinra, the IL-1β antibody canakinumab, or the IL-6 receptor antibody tocilizumab. At a high disease activity, anakinra or canakinumab may be employed primarily. Local drug licensing policies may have to be considered, as these substances are not universally approved in these scenarios. Important complications to consider include perimyocarditis, a multi-faceted pulmonary involvement, and macrophage activation syndrome (MAS). MAS features multi-organ involvement and cytopenias. Besides supportive measures often requiring intensive care, high-dose glucocorticosteroids, the biologics named above, and, if necessary, etoposide-based therapeutic regimens are used. Answer: The evolution of therapy for dehydration has seen significant changes over time, with different strategies being employed to address the condition. Initially, rapid parenteral infusions of saline solutions were used to restore extracellular fluid (ECF) volume and renal perfusion, which effectively reduced mortality rates from over 60% to less than 30% (PUBMED:8692613). Later, deficit therapy emerged in the 1950s, focusing on replacing specific electrolyte and water losses with more complicated regimens, further reducing mortality rates to single digits (PUBMED:8692613). However, the advent of oral rehydration therapy (ORT), sometimes combined with intravenous ECF expansion when necessary, simplified the treatment and led to a further decrease in mortality rates to less than 1% (PUBMED:8692613). Given these developments, the question arises whether deficit therapy should still be taught. The evidence suggests that ORT, due to its simplicity and effectiveness, should be considered the standard therapy for diarrheal dehydration (PUBMED:8692613). While the principles of body fluid physiology are important and should be taught in their own right, the emphasis in clinical practice should be on the more effective ORT regimen. Therefore, while deficit therapy has historical significance and may have educational value, it appears that ORT has largely superseded it as the preferred method of rehydration therapy in current medical practice.
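The evidence review above (PUBMED:29416343) reports remission as raw proportions, for example 32 of 271 anti-TNF-treated patients (11.81%). A minimal sketch of how such a proportion can be recomputed with an approximate 95% confidence interval is given below; the Wilson interval is my own illustrative choice, not a method stated in the abstract, and only the anti-TNF arm is used because its numerator and denominator are both given explicitly.

```python
# Recompute a reported remission proportion with a Wilson 95% CI.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Return (point estimate, lower bound, upper bound) for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, center - half, center + half

# Anti-TNF arm as reported: 32 of 271 patients achieved remission.
p, lo, hi = wilson_ci(32, 271)
print(f"remission {p:.2%} (95% CI {lo:.1%}-{hi:.1%})")  # 11.81% (8.5%-16.2%)
```

The point estimate reproduces the abstract's 11.81%; the interval is an added illustration of the uncertainty around a proportion of this size.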
Instruction: Do criminals go to the hospital when they are shot? Abstracts: abstract_id: PUBMED:12226123 Do criminals go to the hospital when they are shot? Objectives: To determine whether criminals go to the hospital when they are shot. Such information is needed to check on the accuracy of using hospital emergency room data to estimate non-fatal gunshot wounds. Setting: Five jails across the US. Methods: A survey of inmates being booked into jail, administered by in-house health care staff. Results: Over 90% of over 300 criminals who had been wounded sometime before their incarceration reported going to a hospital for treatment after being shot. These results are consistent with previous findings from one jail. Conclusions: Jail inmates who had previously been shot were likely to have been treated in a hospital. This limited finding is consistent with the proposition that hospital/emergency department data may miss only a small percentage of gunshot wounds to criminals. abstract_id: PUBMED:11104447 When criminals are shot: A survey of Washington, DC, jail detainees. Introduction: Criminals are at high risk of being victims of violence, but little is known about their victimization. Methods: A screen of Washington, DC, detainees found that 1 in 4 had been wounded in events that appear unrelated to their incarceration. Extensive interviews were conducted with 79 men entering the city jail from March through June 1997; the men reported 93 prior events in which they had been shot within the past 5 years. Results: Eighty-three percent had personally witnessed someone being shot, and 46% had a family member killed with a gun. In the incidents in which they were shot, most were victims of robberies, assaults, and crossfires. The shootings were serious -- 35% were hit by more than 1 bullet, more than 90% went to the hospital, and 40% still had some disability from the wounds. These detainees report being shot by other criminals rather than by law-abiding citizens. Ninety percent would prefer to live in a world without easy access to firearms. Conclusion: These young men live in a violent world of gunplay. The overwhelming majority would prefer that firearms were not so readily available. abstract_id: PUBMED:37711682 External validation of GO-FAR 2 calculator for outcomes after in-hospital cardiac arrest with comparison to GO-FAR and trial of expanded applications. Aim: Externally validate the GO-FAR 2 tool for predicting survival with good neurologic function after in-hospital cardiac arrest with comparison to the original GO-FAR tool. Additionally, we collected qualitative descriptors and performed exploratory analyses with various levels of neurologic function and discharge destination. Methods: Retrospective chart review of all patients who underwent in-hospital resuscitation after cardiac arrest during the calendar years 2016-2019 in our institution (n = 397). GO-FAR and GO-FAR 2 scores were calculated based on information available in the medical record at the time of hospital admission. Cerebral performance category (CPC) scores at the time of admission and discharge were assessed by chart review. Results: The GO-FAR 2 score accurately predicted outcomes in our study population with a c-statistic of 0.625. The original GO-FAR score also had accurate calibration with a stronger c-statistic of 0.726. The GO-FAR score had decreased predictive value for lesser levels of neurologic function (c-statistic 0.56 for alive at discharge) and discharge destination (0.69). 
Descriptors of functional status by CPC score were collected. Conclusion: Our findings support the validity of the GO-FAR and GO-FAR 2 tools as published, but the c-statistics suggest modest predictive discrimination (a short sketch of how a c-statistic is computed appears after this entry). We include functional descriptors of CPC outcomes to aid clinicians in using these tools. We propose that information about expected outcomes could be valuable in shared decision-making conversations. abstract_id: PUBMED:29423643 Rehabilitation and Re-socialization of Criminals in Iranian Criminal Law. One of the concerns that always remains for the repentant criminal is the conditions of his return to society. This concern may be so strong that the criminal withdraws from society for fear of its consequences and returns to crime. Therefore, attention to eliminating the social effects of criminal conviction can restore security to society and return repentant criminals to normal life. Thus, all military and social institutions play a part in re-socialization, in that the re-socialization of criminals requires the provision of social platforms that start with their own family and expand to society. The main concern of this research is how favorable conditions can be provided for the re-socialization of repentant criminals so that socialization goals are effectively realized. The Islamic Penal Code initiatives in 2013, despite the gaps in this regard, partly help to achieve such goals, but they are not enough. Therefore, the present article focused on criminals' re-socialization and addressed it from both a social and a criminal-law perspective. The method was descriptive-analytical. The results showed that the provisions of the Islamic Penal Code, adopted in 2013, on the period of subsequent effects need to be reformed, and that the relevant social institutions should be engaged in an orderly way. abstract_id: PUBMED:22218834 Temporal association between federal gun laws and the diversion of guns to criminals in Milwaukee. The practices of licensed gun dealers can threaten the safety of urban residents by facilitating the diversion of guns to criminals. In 2003, changes to federal law shielded gun dealers from the release of gun trace data and provided other protections to gun dealers. The 14-month period during which the dealer did not sell junk guns was associated with a 68% reduction in the diversion of guns to criminals within a year of sale by the dealer and a 43% increase in guns diverted to criminals following sales by other dealers. The laws were associated with a 203% increase in the number of guns diverted to criminals within a year of sale by the gun store, which was the focus of this study. Policies which affect gun dealer accountability appeared to influence the diversion of guns to criminals. abstract_id: PUBMED:37257932 Medical Treatment of Criminals in Premodern China Based on Qing Era Local Archives: Focusing on the Case of Ba County in the late 18th Century. This study aimed to confirm what kind of medical treatment was given to criminals by the local governments in Qing China and the role of the government doctors in charge of the criminals' medical care. Using local administrative and historical materials, I explore the procedures and methods of medical treatment and explain the treatment processes for criminals.
The findings demonstrate that, although different from modern ideals, in premodern China under the emperor's rule, there were provisions for ailing criminals to receive medical treatment. In the Qing era, the execution of actual punishments worked as a keynote of judicial policy, and the number of criminals managed by local government offices was larger than before. The government doctors took charge of the criminals' medical treatment, but it seems that their position in the Qing era was not popular due to low salaries and psychological resistance to treating guilty criminals. Moreover, the government doctors dispatched to treat criminals were required to play an additional role. They had to testify that there was no cause of death other than disease, which demonstrates that the government doctor played a role in determining whether the death was a crime. However, their treatment practices for criminals demonstrate the use of traditional medicine from the Tang and Song era onwards. There are two reasons for this. First, unlike the private market, there were no economic incentives for doctors to use new medicines and prescriptions. Second, because of the fear of being reprimanded for the death of the criminal, using classical prescriptions was a way for doctors to defend the adequacy of their medical practice. From an institutional perspective, medical care for criminals through government doctors was guaranteed during the Qing era. However, government doctors were not selected for their medical competence, nor were they provided with adequate incentives to practice good medical care. Even if some government doctors devoted themselves to medical care, the quality of care was not systematically guaranteed. This provides evidence of the poor medical environment surrounding criminals in premodern China. abstract_id: PUBMED:22002326 Increased executive functioning, attention, and cortical thickness in white-collar criminals. Very little is known about white-collar crime and how it differs from other forms of offending. This study tests the hypothesis that white-collar criminals have better executive functioning, enhanced information processing, and structural brain superiorities compared with offender controls. Using a case-control design, executive functioning, orienting, and cortical thickness were assessed in 21 white-collar criminals matched with 21 controls on age, gender, ethnicity, and general level of criminal offending. White-collar criminals had significantly better executive functioning, increased electrodermal orienting, increased arousal, and increased cortical gray matter thickness in the ventromedial prefrontal cortex, inferior frontal gyrus, somatosensory cortex, and the temporal-parietal junction compared with controls. Results, while initial, constitute the first findings on neurobiological characteristics of white-collar criminals. It is hypothesized that white-collar criminals have information-processing and brain superiorities that give them an advantage in perpetrating criminal offenses in occupational settings. abstract_id: PUBMED:23888508 Blood traces on the clothes of the alleged criminals and their significance for the inquiry into the killings The present analysis of blood traces on the clothes of the alleged criminals is based on the results of 109 forensic medical expert examinations. They are illustrated by examples from practical work. abstract_id: PUBMED:32698348 Mouse Tracking to Explore Motor Inhibition Processes in Go/No-Go and Stop Signal Tasks.
Response inhibition relies on both proactive and reactive mechanisms that exert a synergic control on goal-directed actions. It is typically evaluated by the go/no-go (GNG) and the stop signal task (SST) with response recording based on the key-press method. However, the analysis of discrete variables (i.e., present or absent responses) registered by key-press could be insufficient to capture dynamic aspects of inhibitory control. Trying to overcome this limitation, in the present study we used a mouse tracking procedure to characterize movement profiles related to proactive and reactive inhibition. A total of fifty-three participants performed a cued GNG and an SST. The cued GNG mainly involves proactive control whereas the reactive component is mainly engaged in the SST. We evaluated the velocity profile from mouse trajectories both for responses obtained in the Go conditions and for inhibitory failures. Movements were classified as one-shot when no corrections were observed. Multi-peaked velocity profiles were classified as non-one-shot. A higher proportion of one-shot movements was found in the SST compared to the cued GNG when subjects failed to inhibit responses. This result suggests that proactive control may be responsible for unsmooth profiles in inhibition failures, supporting a differentiation between these tasks. abstract_id: PUBMED:26884544 Are batterers different from other criminals? An fMRI study. Intimate partner violence (IPV) is a complex and global phenomenon that requires a multi-perspective analysis. Nevertheless, the number of neuroscientific studies conducted on this issue is scarce as compared with studies of other types of violence, and no neuroimaging studies comparing batterers to other criminals have been conducted. Thus, the main aim of this study was to compare the brain functioning of batterers to that of other criminals when they are exposed to IPV or general violence pictures. An fMRI study was conducted in 21 batterers and 20 other criminals while they observed IPV images (IPVI), general violence images (GVI) and neutral images (NI). Results demonstrated that batterers, compared with other criminals, exhibited a higher activation in the anterior and posterior cingulate cortex and in the middle prefrontal cortex and a decreased activation in the superior prefrontal cortex to IPVI compared to NI. The paired t-test comparison between IPVI and GVI for each group showed engagement of the medial prefrontal cortex, the posterior cingulate and the left angular cortices to IPVI in the batterer group only. These results could have important implications for a better understanding of the IPV phenomenon. Answer: Yes, criminals do go to the hospital when they are shot. Over 90% of over 300 criminals who had been wounded sometime before their incarceration reported going to a hospital for treatment after being shot. These results are consistent with previous findings from one jail, suggesting that hospital/emergency department data may miss only a small percentage of gunshot wounds to criminals (PUBMED:12226123). Additionally, a survey of Washington, DC, jail detainees found that in incidents where they were shot, more than 90% went to the hospital, and 40% still had some disability from the wounds (PUBMED:11104447).
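The GO-FAR abstracts in this entry summarize discrimination with a c-statistic, referenced above. For a binary outcome, the c-statistic equals the probability that a randomly chosen case with the outcome receives a higher predicted score than a randomly chosen case without it, with ties counted as half; 0.5 is chance-level and 1.0 is perfect. A minimal pairwise sketch with made-up scores and outcomes, for illustration only:

```python
# c-statistic (equivalently, the AUC) computed by pairwise comparison.
def c_statistic(scores: list[float], outcomes: list[int]) -> float:
    events = [s for s, y in zip(scores, outcomes) if y == 1]
    non_events = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(
        1.0 if e > n else 0.5 if e == n else 0.0
        for e in events
        for n in non_events
    )
    return wins / (len(events) * len(non_events))

# Hypothetical predicted scores and observed outcomes (1 = survival with
# good neurologic function); these numbers are illustrative, not study data.
scores = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2]
outcomes = [1, 1, 0, 1, 0, 0]
print(f"c-statistic: {c_statistic(scores, outcomes):.3f}")  # -> 0.889
```

On this scale, the reported values of 0.625-0.726 sit well below the example's 0.889, which is why the authors describe the discrimination as modest.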
Instruction: Does the number of redo mitral valve replacements for structural valve deterioration affect early and late outcomes? Abstracts: abstract_id: PUBMED:25790615 Does the number of redo mitral valve replacements for structural valve deterioration affect early and late outcomes?: experience from 114 reoperative cases. Background And Aim Of The Study: Whether the number of reoperative mitral valve replacements (MVRs) for structural valve deterioration (SVD) affects early or late survival is unclear. Hence, the aim of the present study was to investigate this issue. Methods: A total of 114 consecutive patients (84 females, 30 males; mean age 63.4 +/- 11.9 years) who underwent 141 redo MVRs for SVD during a 20-year period were retrospectively analyzed. Of these patients, 87 (76.3%) had undergone first-time reoperations (87 redos; group A), while the remaining 27 (23.7%) had undergone two or more reoperations (54 redos; group B). The mean follow up period was 8.5 +/- 6.8 years, and the follow up rate was 96.2%. Results: NYHA functional class III or IV was noted in 34 patients (39.1%) of group A, and in 17 patients (63.0%) of group B (p = 0.0292). The EuroSCORE and logistic EuroSCORE in group B were significantly higher than those in group A. The hospital mortality rate was 4.6% in group A and 11.1% in group B (p = 0.1430). Logistic regression analysis revealed risk factors for hospital mortality to be the number of reoperations (OR 4.399; p = 0.0467) and NYHA class III or IV (OR 9.090; p = 0.0483). There were no significant differences in five-, 10- or 15-year survival rates between groups A and B (log-rank test, p = 0.3369). A Cox regression analysis showed advanced age to be the only predictor of late mortality (HR = 1.098; p = 0.0013); a sketch compounding this per-year ratio over larger age differences appears after this entry. Conclusion: Irrespective of whether a patient has undergone two or more reoperative MVRs, a better long-term survival can be expected before progression of the patient's NYHA functional class. abstract_id: PUBMED:38028444 Case Report: Stretching the limits-late valvuloplasty for THV dysfunction following redo mitral valve-in-valve implantation. Late balloon valvuloplasty can be used to treat under-expansion-related transcatheter heart valve (THV) dysfunction. Whether this can be performed following redo-THV implantation is unknown. Herein, we report a case of a 72-year-old male presenting with symptomatic gradient elevation following redo mitral valve-in-valve implantation. The patient was successfully treated with late balloon valvuloplasty with gradient improvement. In conclusion, late valvuloplasty is effective even with several layers of valves. However, larger studies are required to clarify the role of this approach further. abstract_id: PUBMED:38202277 Early and Mid-Term Outcomes of Using the Chimney Technique in Redo Mitral Valve Replacement in Patients with a Small Mitral Annulus. The outcomes of redo mitral valve replacement (Re-MVR) in a small mitral annulus with the use of the chimney technique are not well documented. The purpose of this study is to present our early experience with this group of patients, illustrating the perioperative complications and mortality outcomes. From 2019 to 2020, 77 consecutive patients underwent Re-MVR with the use of the chimney technique because of a small mitral annulus. To evaluate heart structural integrity and clinical outcomes, postoperative clinical data and echocardiograms were examined. The mean age was 56.7 ± 15.98 years.
All patients underwent mitral valve surgery: 62 mitral valve replacements, 7 mitral valve repairs, and 8 double valve replacements. The preoperative mitral valve mean gradient was 18.07 ± 9.40 mmHg, and the postoperative mitral prosthesis size was 28.51 ± 1.22 mm. The median increment of mitral size enlargement was 4 (0, 6) valve sizes. The mean mitral gradient coming out of the operating room was 10.34 ± 2.12 mmHg, and at the follow-up echocardiogram performed at 3 years after the procedure, it was 10.36 ± 1.70 mmHg. One-year survival was 93.3%, while the 4-year survival rate was 89.3%, with no reoperation. The use of the chimney technique in redo mitral valve replacement for a small mitral annulus results in larger valve sizes. Moreover, the mean gradients over the mitral valve are acceptable both intraoperatively and over time. abstract_id: PUBMED:35557522 Five-Year Outcomes of Patients With Mitral Structural Valve Deterioration Treated With Transcatheter Valve in Valve Implantation - A Single Center Prospective Registry. The Valve-in-Valve (ViV) technique is an emerging alternative for the treatment of bioprosthetic structural valve deterioration (SVD) in the mitral position. We report on intermediate-term outcomes of patients with symptomatic SVD in the mitral position who were treated by transcatheter mitral valve-in-valve (TM-ViV) implantation during the years 2010-2019 in our center. Three main outcomes were examined during the follow-up period: NYHA functional class, TM-ViV hemodynamic data per echocardiography, and mortality. Our cohort consisted of 49 patients (mean age 77.4 ± 10.5 years, 65.3% female). The indications for TM-ViV were mainly regurgitant pathology (77.6%). All 49 patients were treated with a balloon-expandable device. The procedure was performed via transapical access in 17 cases (34.7%) and transfemoral vein/trans-atrial septal puncture in 32 cases (65.3%). Mean follow-up was 4.4 ± 2.0 years. 98% and 91% of patients were in NYHA I/II at 1 and 5 years, respectively. Mitral regurgitation was ≥moderate in 86.3% of patients prior to the procedure; this decreased to 0% (p < 0.001) following the procedure and was maintained over 2 years of follow-up. The mean trans-mitral valve gradients decreased from pre-procedural values of 10.1 ± 5.1 mmHg to 7.0 ± 2.4 mmHg at 1 month following the procedure (p = 0.03). Mortality was 16% (95% CI, 5-26) at 1 year and 35% (95% CI, 18-49) at 5 years. ViV in the mitral position offers an effective and durable treatment option for patients with SVD at high surgical risk. abstract_id: PUBMED:35474195 Early Structural Valve Deterioration of Bioprosthetic Mitral Valve in a Dialysis Patient Choice of prosthetic valve during valve replacement in dialysis patients is still controversial. There is a known risk of early structural valve deterioration of bioprostheses in dialysis patients, whereas mechanical prostheses are associated with a higher risk of bleeding and thrombotic events. A 68-year-old dialysis-dependent woman, who had undergone bioprosthetic mitral valve replacement at the age of 66, was admitted to our hospital because of general malaise and hypotension during dialysis. Echocardiography revealed severe mitral stenosis and regurgitation due to restricted motion and dense calcification in prosthetic valve leaflets, which indicated early structural valve deterioration. Redo mitral valve replacement using a mechanical valve was performed, and the patient gradually recovered.
However, she eventually died of intracranial hemorrhage three months after the surgery. abstract_id: PUBMED:34459226 Redo Surgical Mitral Valve Replacement Versus Transcatheter Mitral Valve in Valve From the National Inpatient Sample. Background Redo mitral valve surgery is required in up to one-third of patients and is associated with significant mortality and morbidity. Valve-in-valve transcatheter mitral valve replacement (ViV TMVR) is less invasive and could be considered in those at prohibitive surgical risk. Studies on comparative outcomes of ViV TMVR and redo surgical mitral valve replacement (SMVR) remain limited. Our study aimed to investigate the real-world outcomes of the above procedures using the National Inpatient Sample database. Methods and Results We analyzed National Inpatient Sample data using the International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) from September 2015 to December 2018. A total of 495 and 2250 patients underwent redo ViV TMVR and SMVR, respectively. The patients who underwent ViV TMVR were older (77 versus 68 years, P<0.01). Adjusted mortality was higher in the redo SMVR group compared with the ViV TMVR group (7.6% versus <2.8%, P<0.01). Perioperative complications were higher among patients undergoing redo SMVR including blood transfusions (38% versus 7.6%, P<0.01) and acute kidney injury (36.7% versus 13.9%, P<0.01). Cost of care was higher (US$57,172 versus US$52,579, P<0.01), length of stay was longer (10 versus 3 days, P<0.01), and discharge to home was lower (20.3% versus 64.6%, P<0.01) in the SMVR group compared with the ViV TMVR group. Conclusions ViV TMVR is associated with lower mortality, periprocedural morbidity, and resource use compared with redo SMVR. ViV TMVR may be a viable option for some patients with mitral prosthesis dysfunction. Studies evaluating long-term outcomes and durability of ViV TMVR are needed. A patient-centered approach by the heart team, local institutional expertise, and careful preprocedure planning can help decision-making about the choice of intervention for the individual patient. abstract_id: PUBMED:29188430 Undiagnosed Mitroflow bioprosthesis deformation causing early structural valve deterioration. Bioprostheses are commonly used in the elderly population. Structural valve deterioration affects their long-term durability. We report an early deterioration of a Mitroflow valve caused by ring deformation and prosthetic leaflet rupture. The 69-year-old patient underwent successful redo surgery with excision of the bioprosthesis and placement of a mechanical valve. abstract_id: PUBMED:30548701 Redo mitral valve surgery following prior mitral valve repair. Background: The optimal treatment strategy following a failed mitral valve repair remains unclear. This study evaluated early and long-term outcomes of redo mitral valve repair (MVr) and replacement (MVR) after prior mitral valve repair. Methods: Patients undergoing redo mitral valve surgery after prior mitral valve repair at a single institution between 2002 and 2014 were reviewed. Primary outcomes included operative mortality (30-day or in-hospital mortality) and long-term freedom from mitral valve reoperation and death. Secondary outcomes included postoperative complications. Results: 305 patients underwent redo MVr (n = 48) or MVR (n = 257) after prior mitral valve repair.
Concomitant procedures included tricuspid valve repair or replacement (23%), aortic valve replacement (6%), and coronary artery bypass grafting (4%), with no differences between cohorts. 18% were performed via right mini-thoracotomy (24% MVr vs 18% MVR, P = 0.31). Unadjusted and risk-adjusted operative mortality were lower with MVr (0% vs 8%, P = 0.04). Rates of postoperative complications were similar except for blood product transfusion (35% MVr vs 59% MVR, P = 0.003) and prolonged mechanical ventilation (8% MVr vs 29% MVR, P = 0.003). Long-term freedom from mortality was comparable: 96% MVr versus 86% MVR at 1 year and 78% MVr versus 68% MVR at 5 years (P = 0.29). Conclusions: When technically feasible, mitral valve re-repair can be safely performed with outcomes comparable to MVR. abstract_id: PUBMED:33048438 Surgical redo versus transseptal or transapical transcatheter mitral valve-in-valve implantation for failed mitral valve bioprosthesis. Background: Redo surgical mitral valve replacement (SMVR) is the current standard of care for patients with a failed bioprosthetic mitral valve (MV). Transcatheter mitral valve-in-valve replacement (TMViV) is arising as an alternative to SMVR in high-risk patients. We sought to evaluate procedural safety and early and mid-term outcomes of patients who underwent transseptal TMViV (TS-TMViV), transapical TMViV (TA-TMViV), or redo SMVR. Methods: We identified patients with a failed bioprosthetic MV who underwent TS-TMViV, TA-TMViV, or SMVR at four Italian Centers. Clinical and echocardiographic data were codified according to the Mitral Valve Academic Research Consortium (MVARC) definition, except for significant valve stenosis. Results: Between December 2012 and September 2019, 27 patients underwent TS-TMViV, 22 TA-TMViV, and 29 redo SMVR. TS-TMViV and TA-TMViV patients presented higher mean age and surgical risk scores compared with the SMVR group (77.8 ± 12 years, 77.3 ± 7.3 years, 67.8 ± 9.4 years, p < .001; STS PROM 8.5 ± 7.2; 8.9 ± 4.7; 3.6 ± 2.6, p < .001). The TS-TMViV procedure was associated with shorter intensive care unit time and total length of stay (LOS) compared with the TA-TMViV and SMVR groups. There were no differences in MVARC procedural success at 30 days (74.1, 72.7, and 51.7%, p = .15) and one-year all-cause mortality between groups (14.8, 18.2, and 17.2%, p = 1.0). MV mean gradient was similar between the TS-TMViV, TA-TMViV, and SMVR groups at 30 days and 12 months. Conclusions: For the selected patients, TS-TMViV and TA-TMViV are to be considered a valid alternative to redo SMVR with comparable 1-year survival. TS-TMViV is the less invasive strategy and has the advantage of shortening the LOS compared with TA-TMViV.
A preoperative Global Initiative for Chronic Obstructive Lung Disease (GOLD) score ≥2 chronic obstructive lung disease (COPD) (OR 15.2, P < 0.01), left ventricular ejection fraction <30% (odds ratio (OR) 21.5, P = 0.005), major injury to cardiovascular structures at re-entry (OR 27.2, P < 0.01) or injury to patent left internal mammary artery-coronary artery bypass graft (OR 7.6, P = 0.03) predicted mortality in the whole FMR population. GOLD ≥ 2 COPD (OR 12.3, P = 0.049), age at surgery (OR 1.15 for each incremental year, P = 0.049) and cardiopulmonary bypass duration (OR 1.02, P = 0.022) predicted mortality in isolated redo mitral valve surgery for FMR. The fourth (> 68 years = 13.8% mortality) and the fifth quintiles of age (≥73.4 years = 14.8%) reported the highest mortality (OR 3.8 and 4.2 respectively, P = 0.002) in this subgroup. Propensity-matched cohorts of FMR and native mitral valves in the context of redo surgery showed no differences in terms of mortality ( P = 0.69) and major morbidity (acute myocardial infarction P = 0.31, stroke P = 0.65, acute kidney injury P = 1.0), whereas more perioperative dialysis ( P = 0.04) and transfusions ( P = 0.02) were noted in propensity-matched failed prostheses compared to FMR. Conclusions: A failed mitral repair does not impact hospital outcome of redo surgery. Given the role of severe left ventricular dysfunction and advanced age on hospital mortality rates, an early indication for redo surgery may improve outcome. Answer: The number of redo mitral valve replacements (MVRs) for structural valve deterioration (SVD) does affect early outcomes but not late outcomes. A study with 114 consecutive patients who underwent 141 redo MVRs for SVD found that patients who had undergone two or more reoperations (Group B) had a higher EuroSCORE and logistic EuroSCORE, indicating a higher risk, compared to those who had undergone their first-time reoperation (Group A). The hospital mortality rate was also higher in Group B (11.1%) compared to Group A (4.6%), although this difference was not statistically significant (p = 0.1430). Logistic regression analysis identified the number of reoperations and NYHA class III or IV as risk factors for hospital mortality. However, there were no significant differences in five-, 10-, or 15-year survival rates between the two groups, suggesting that the number of redo MVRs does not significantly affect long-term survival (PUBMED:25790615). In conclusion, while the number of redo MVRs for SVD may impact early outcomes such as hospital mortality, it does not appear to significantly affect long-term survival rates.
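Several abstracts in this entry report per-unit effect sizes, such as HR = 1.098 per incremental year of age for late mortality (PUBMED:25790615, flagged above) and OR 1.02 per minute of cardiopulmonary bypass (PUBMED:28204140). On the ratio scale these compound multiplicatively, so the effect over a k-unit difference is the per-unit ratio raised to the power k. A minimal check of that arithmetic, assuming the proportional-hazards (or constant per-unit odds ratio) relationship holds across the whole covariate range:

```python
# Compound a per-unit hazard or odds ratio over a k-unit difference:
# ratio(k units) = ratio(1 unit) ** k, assuming the model is linear
# in the covariate on the log scale.
def compounded_ratio(per_unit_ratio: float, units: float) -> float:
    return per_unit_ratio ** units

# HR = 1.098 per year of age: implied effect of a 10-year age difference.
print(f"10-year hazard ratio: {compounded_ratio(1.098, 10):.2f}")  # -> 2.55

# OR = 1.02 per minute of bypass: implied effect of 60 extra minutes.
print(f"60-minute odds ratio: {compounded_ratio(1.02, 60):.2f}")   # -> 3.28
```

This is why a seemingly small per-unit ratio can still describe a large effect across a clinically realistic range of the covariate.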
Instruction: Can the response to iron therapy be predicted in anemic nondialysis patients with chronic kidney disease? Abstracts: abstract_id: PUBMED:20019121 Can the response to iron therapy be predicted in anemic nondialysis patients with chronic kidney disease? Background And Objectives: Anemia is iron responsive in 30 to 50% of nondialysis patients with chronic kidney disease (CKD), but the utility of bone marrow iron stores and peripheral iron indices to predict the erythropoietic response is not settled. We investigated the accuracy of peripheral and central iron indices to predict the response to intravenous iron in nondialysis patients with CKD and anemia. Design, Setting, Participants, & Measurements: A diagnostic study was conducted on 100 nondialysis patients who had CKD and anemia and were erythropoiesis-stimulating agent and iron naive. Bone marrow iron stores were evaluated by aspiration. Hemoglobin, transferrin saturation index (TSAT), and ferritin were measured at baseline and 1 month after 1000 mg of intravenous iron sucrose. Posttest predictive values for the erythropoietic response (≥1 g/dl increase in hemoglobin) of peripheral and central iron indices were calculated. Results: The erythropoietic response was noted in a higher proportion in bone marrow iron-deplete than in iron-replete patients (63 versus 30%). Peripheral iron indices had a moderate accuracy in predicting response. The positive (PPV) and negative predictive values (NPV) were 76 and 72% for a TSAT of 15% and 74 and 70% for a ferritin of 75 ng/ml, respectively. In the final logistic regression model, including TSAT and ferritin, the chances of a positive response increased by 7% for each 1% decrease in TSAT. Conclusions: Because an erythropoietic response is seen in half of patients and even one third of those with iron-replete stores responded whereas peripheral indices had only a moderate utility in predicting response, a therapeutic trial of intravenous iron seems to be a useful tool in the management of anemia in nondialysis patients with CKD. abstract_id: PUBMED:29092739 Erythropoietic response to oral iron in patients with nondialysis-dependent chronic kidney disease in the FIND-CKD trial. Aims: To evaluate erythropoietic response rates to oral iron over time in iron-deficient anemic patients with nondialysis-dependent chronic kidney disease (ND-CKD). Materials And Methods: FIND-CKD was a 1-year, randomized, multicenter trial of iron therapy in patients with ND-CKD, anemia, and iron deficiency, without erythropoiesis-stimulating agent (ESA) therapy. Patients with active infection or C-reactive protein > 20 mg/L were excluded. In this post-hoc analysis, response was defined as ≥ 1 g/dL increase in hemoglobin (Hb) from baseline, before initiation of alternative anemia therapy (i.e., ESA, transfusion, or intravenous iron). Results: 308 patients received oral iron (200 mg elemental iron/day). Mean (SD) Hb at baseline was 10.4 (0.7) g/dL. At week 4, Hb data were available from 292 patients without alternative anemia therapy: 63/292 (21.6%) showed a response. Among the 229 nonresponders at week 4, 48.8% showed a cumulative response on ≥ 1 occasion by week 52 (11.1%, 19.9%, 25.9%, and 28.7% had a response at weeks 8, 12, 24, and 52, respectively), and 27.9% had received alternative iron therapy by week 52. Baseline levels of Hb, ferritin, and transferrin saturation were lower in responders than in nonresponders.
Neither concomitant medication nor adherence (as assessed by medication count) was substantially different between early responders and nonresponders. Conclusion: Four weeks after starting oral iron therapy, only 21.6% of anemic patients with ND-CKD and iron deficiency showed an Hb increase of at least 1 g/dL. Among early nonresponders, < 30% responded at any subsequent time point. Earlier consideration of alternative therapy could improve anemia management in this population. abstract_id: PUBMED:17533016 Iron management in nondialysis-dependent CKD. Iron deficiency has been studied extensively in patients with chronic kidney disease on hemodialysis therapy. However, few studies have looked at iron treatment in the nondialysis chronic kidney disease population. Limited data suggest that iron deficiency is common in patients with chronic kidney disease with anemia; this lack of iron can hinder the effectiveness of erythropoiesis. The diagnosis of iron deficiency should involve clinical judgment, with an emphasis on clinical characteristics of the patient, because of the limited amount of literature examining the interpretation of iron testing results. When iron deficiency is diagnosed in nondialysis patients with chronic kidney disease, a search must be initiated for any sources of blood loss. After addressing any blood loss, the preferred route of iron treatment must be determined. To date, no clear advantage has been shown for intravenous over oral administration in nondialysis patients, unlike in the hemodialysis setting. Thus, oral iron therapy may be a more reasonable option unless oral therapy previously failed. Additional research is needed to support evidence-based guidelines for the treatment of iron deficiency in the nondialysis chronic kidney disease population because this population differs from hemodialysis patients in the decreased extent of blood loss. abstract_id: PUBMED:27185524 Iron Therapy Challenges for the Treatment of Nondialysis CKD Patients. The clinical consequences of untreated, severe anemia in patients with nondialysis CKD can be significant, but disparities exist in the anemia treatment guidelines and position papers issued from working groups and associations across the world. These differ in hemoglobin target and iron levels and their emphasis on various iron markers and other clinical outcomes. Not surprisingly, disparities are observed in anemia treatment strategies among patients with nondialysis CKD across different areas of the world. Over the past decade, the prescription and dosage of both iron therapies and erythropoiesis-stimulating agents have shifted, with notable regional differences observed. Moreover, there is ongoing debate regarding oral versus intravenous administration of iron. Compared with oral iron therapy, which often leads to gastrointestinal adverse events, low patient adherence, and low efficacy, intravenous iron administration has been associated with potential serious adverse events, such as anaphylaxis. New iron-based compounds and drugs currently under development are reviewed to describe their potential benefits in the treatment of anemia in patients with CKD. New oral compounds, including iron-based phosphate binders, heme iron polypeptide, and liposomal iron, show different rates of absorption with possibly different efficacy and improved tolerability.
These new potential therapies offer health care providers additional anemia treatment options for their patients with CKD; however, the management of anemia in the CKD population continues to present challenges that require prospective studies to identify the optimal iron therapy for patients. abstract_id: PUBMED:22874106 Factors affecting response and tolerability to ferumoxytol in nondialysis chronic kidney disease patients. Background: Ferumoxytol is a unique intravenous (i.v.) iron therapy. This report examines factors affecting hemoglobin response to i.v. ferumoxytol, and the relationship between hematologic parameters, concomitant erythropoiesis-stimulating agents (ESA), and adverse events (AEs) in nondialysis CKD patients. Methods: A series of post-hoc efficacy and safety analyses were performed using pooled data from two identically designed Phase III studies in 608 nondialysis CKD patients randomized to receive two 510 mg i.v. injections of ferumoxytol within 5 ± 3 days versus oral iron. Results: Ferumoxytol resulted in a significant increase in hemoglobin in the presence and absence of ESA, and across a range of baseline hemoglobin, transferrin saturation, ferritin, and reticulocyte hemoglobin content levels. Adverse event rates with ferumoxytol were similar across quartiles of change in hemoglobin; there were no trends suggesting an increased rate of cardiovascular AEs with higher maximum achieved hemoglobin or faster rate of hemoglobin rise. There was no meaningful difference in the rate of AEs, serious AEs, and cardiovascular AEs between patients receiving or not receiving ESA. Conclusions: These analyses add to the knowledge of predictors of response and safety outcomes associated with i.v. iron therapy in nondialysis CKD patients. abstract_id: PUBMED:19010257 A comprehensive vision for intravenous iron therapy. Commonly used iron indices, such as serum ferritin and transferrin saturation (TSAT), have limited utility in patients with chronic kidney disease. Both dialysis and nondialysis patients may have normal to high serum ferritin levels and little or no iron available for erythropoiesis. Inflammation can result in increased serum ferritin levels and low TSAT and restrict the ability to mobilize iron stores. Management of anemia in patients with chronic kidney disease requires recognizing that not only decreased erythropoietin production, but also decreased iron availability, can lead to anemia. The Dialysis Patients' Response to IV Iron with Elevated Ferritin (DRIVE) trial showed the efficacy of intravenous (IV) iron in anemic hemodialysis patients with serum ferritin levels of 500 to 1,200 ng/mL and TSAT of 25% or less receiving adequate erythropoiesis-stimulating agent doses. Withholding iron from these patients resulted in worsening iron-restricted erythropoiesis. To improve anemia management in patients receiving hemodialysis and being treated with ESAs, clinicians should consider the benefits of IV iron. abstract_id: PUBMED:23055815 Decisive indicator for gastrointestinal workup in anemic patients with nondialysis chronic kidney disease. Background: Anemia and iron deficiency are universal problems in patients with chronic kidney disease (CKD). However, a decisive indicator to guide further gastrointestinal (GI) workup has not been determined. Methods: We included 104 anemic patients with nondialysis-dependent CKD stages 3-5 (38 patients at stage 3, 26 patients at stage 4, and 40 patients at stage 5).
Hemoglobin, serum ferritin, transferrin saturation (TSAT), mean corpuscular volume (MCV), and corrected reticulocyte count data were assessed to evaluate diagnostic utility for bleeding-related GI lesions, which were identified by esophagogastroduodenoscopy and colonoscopy. Results: Bleeding-related GI lesions were found in 55 (52.9%) patients, and patients with stage 5 CKD had a higher prevalence of gastric lesions than patients with CKD stage 3 or 4 (all p < 0.05). The areas under the receiver operating characteristic curves used to predict bleeding-related lesions were 0.69 for TSAT (p = 0.002) and 0.61 for serum ferritin (p = 0.085). The sensitivity and specificity of a cutoff value for TSAT < 20% were 0.59 and 0.74, respectively. Hemoglobin, MCV, and corrected reticulocyte levels had no significant diagnostic utility. On multivariable logistic regression, the chance of GI lesions increased by 6% for each 1% reduction in TSAT and increased 4.1-fold for patients with CKD stage 5 (all p < 0.05). Conclusions: TSAT is a useful indicator for determining the GI workup in anemic patients with nondialysis-dependent CKD stages 3-5. Stage 5 CKD is independently associated with bleeding-related lesions, and TSAT should be used cautiously in these patients. abstract_id: PUBMED:37065610 Efficacy and Safety of Sevelamer Carbonate in Chinese Nondialysis Chronic Kidney Disease Patients with Hyperphosphatemia: A Randomized, Double-Blind, Parallel-Group Study. Introduction: Previous studies suggested that sevelamer carbonate is well tolerated with a favorable efficacy and safety profile in both dialysis and nondialysis patients in Europe; however, the efficacy remains controversial, and few studies have examined sevelamer carbonate therapy in other ethnic nondialysis CKD patients. This study assessed the efficacy and safety of sevelamer carbonate in Chinese nondialysis CKD patients with hyperphosphatemia. Methods: This multicenter, randomized, double-blind, parallel-group, placebo-controlled, phase 3 clinical trial enrolled 202 Chinese nondialysis CKD patients with serum phosphorus ≥1.78 mmol/L. Patients were randomly assigned 1:1 to receive sevelamer carbonate (2.4-12 g per day) or placebo for 8 weeks. The primary outcome was the change in serum phosphorus between baseline and week 8. Results: In total, 482 Chinese patients were screened and 202 were randomized (sevelamer carbonate, n = 101; placebo, n = 101). The mean serum phosphorus decreased significantly in patients treated with sevelamer carbonate compared with placebo (-0.22 ± 0.47 vs. 0.05 ± 0.44 mmol/L, p < 0.0001). Significant decreases (p < 0.0001) in serum total cholesterol, low-density lipoprotein cholesterol, and calcium-phosphorus (Ca × P) product levels from baseline to week 8 were seen in the sevelamer carbonate group compared with the placebo group. Serum intact parathyroid hormone was not significantly changed in the sevelamer carbonate group (p = 0.83). Patients in the sevelamer carbonate group experienced similar adverse events to the placebo group. Conclusion: Sevelamer carbonate is an effective and well-tolerated phosphate binder in advanced nondialysis CKD Chinese patients with hyperphosphatemia. abstract_id: PUBMED:27236130 Iron Treatment Strategies in Nondialysis CKD. In contrast to managing patients on hemodialysis, in whom iron strategies are more focused on intravenous iron, nondialysis chronic kidney disease (CKD) patients may receive either oral or intravenous iron. There are advantages and disadvantages for both strategies.
Oral iron is simple and cheap to administer and does not require hospital visits, but is poorly absorbed in advanced CKD and is associated with unpleasant gastrointestinal side effects. Intravenous iron, on the other hand, guarantees iron bioavailability and avoids problems of variable absorption of iron from the gastrointestinal tract, but requires specialist clinic services. Intravenous iron also is associated with hypersensitivity reactions, albeit very rarely. The efficacy of intravenous iron in improving hemoglobin, ferritin, and transferrin saturation is well established, and superior to oral iron, but the long-term safety of this route of administration has not been established and there are theoretical concerns that patients may be exposed to increased oxidative stress and exacerbation of infections. The final choice of iron management strategy will depend on individual physician preference, as well as the facilities that are available. abstract_id: PUBMED:26816477 Sodium thiosulfate for the treatment of warfarin-induced calciphylaxis in a nondialysis patient. Calciphylaxis or uremic arteriolopathy is a complex process typically seen in patients with end-stage renal disease, but has also been reported in patients with normal renal function. However, therapies for calciphylaxis are based on reports of traditional patients (i.e., end-stage renal disease). A mainstay of therapy, sodium thiosulfate (STS), has been shown to be effective for the treatment of calciphylaxis. Without a standardized therapy reported for nondialysis patients, there is a need for evidence-based therapy. Here, we report a case of a 63-year-old woman with acute kidney injury on chronic kidney disease (baseline CrCl = 48 mL/min; CrCl during AKI = 36 mL/min), not requiring dialysis, with warfarin-induced calciphylaxis. After 4 weeks of therapy with STS, sevelamer, alendronate, and enzymatic debridement, the patient subjectively reported slight improvement of the necrotic ulcers but developed cellulitis on her nonaffected limb. Additionally, after 12 weeks of therapy she was readmitted for renal failure and subsequently required dialysis. Answer: The response to iron therapy in anemic nondialysis patients with chronic kidney disease (CKD) can be somewhat predicted using various iron indices, but the accuracy of these predictions is moderate and not definitive. A study found that the erythropoietic response to intravenous iron was higher in bone marrow iron-deplete patients compared to iron-replete patients (63% versus 30%). Peripheral iron indices such as transferrin saturation index (TSAT) and ferritin had moderate accuracy in predicting response, with positive predictive values (PPV) of 76% for a TSAT of 15% and 74% for a ferritin of 75 ng/ml, respectively. The chances of a positive response increased by 7% for each 1% decrease in TSAT (PUBMED:20019121). Another study on oral iron therapy in nondialysis-dependent CKD patients found that only 21.6% showed a hemoglobin increase of at least 1 g/dL after four weeks of treatment. Among early nonresponders, less than 30% responded at any subsequent time point, suggesting that alternative therapy should be considered earlier for better anemia management (PUBMED:29092739). The utility of common iron indices like serum ferritin and TSAT is limited in CKD patients, and inflammation can affect these levels, making it difficult to assess iron availability for erythropoiesis.
Intravenous iron therapy has shown efficacy in anemic hemodialysis patients with elevated ferritin levels and low TSAT, but the benefits of intravenous iron in nondialysis CKD patients are still debated (PUBMED:19010257). In summary, while certain iron indices can provide some predictive value for the response to iron therapy in anemic nondialysis CKD patients, they are not entirely reliable, and a therapeutic trial of intravenous iron may be a useful tool in managing anemia in this population. Additionally, the choice between oral and intravenous iron therapy depends on individual circumstances, including physician preference and available facilities (PUBMED:27185524, PUBMED:27236130).
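The predictive values quoted in this answer (PPV 76%, NPV 72% for a TSAT cutoff of 15%) are simple ratios from a 2x2 table of test result versus erythropoietic response. The sketch below shows the arithmetic; the cell counts are hypothetical values chosen to reproduce the quoted percentages in a 100-patient cohort, not the raw data of PUBMED:20019121.

```python
# PPV/NPV arithmetic behind the figures quoted above. Counts are
# hypothetical, chosen only to reproduce 76%/72% in a 100-patient cohort.

def predictive_values(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    ppv = tp / (tp + fp)  # responders among test-positive patients
    npv = tn / (tn + fn)  # non-responders among test-negative patients
    return ppv, npv

# Hypothetical split of 100 patients by a TSAT < 15% cutoff.
ppv, npv = predictive_values(tp=38, fp=12, fn=14, tn=36)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # PPV = 76%, NPV = 72%

# "Odds of response rose 7% per 1% TSAT decrease" means OR ~ 1.07 per unit,
# so a 10-point lower TSAT multiplies the odds by about 1.07 ** 10 ~ 1.97.
print(f"odds multiplier for a 10-point TSAT drop: {1.07 ** 10:.2f}")
```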
Instruction: Are biochemistry interpretative comments helpful? Abstracts: abstract_id: PUBMED:36447805 Importance of Interpretative Comments in Clinical Biochemistry - a Practitioner's Report. Interpretative comment (IC) from the clinical biochemist is a professional obligation. Most of the Nepalese clinical laboratories use only predefined comments on the report, while a few laboratories do not provide comments at all. Apart from doctors, other healthcare professionals and sometimes patients themselves seek laboratory expert opinion in the interpretation of obtained results. The non-availability of patients' medical records, limited communication with physicians, and insufficient professional knowledge impact the quality of interpretative comments in Nepal. This report is intended to emphasize that the task of providing IC is becoming more important in the context of Nepal. Similarly, this report also guides those who provide interpretative comments. abstract_id: PUBMED:34955674 Interpretative comments - need for harmonization? Results of the Croatian survey by the Working Group for Post-analytics. Introduction: Interpretation of laboratory test results is a complex post-analytical activity that requires not only understanding of the clinical significance of laboratory results but also of the analytical phase of laboratory work. The aims of this study were to determine: 1) the general opinion of Croatian medical biochemistry laboratories (MBLs) about the importance of interpretative comments on laboratory test reports, and 2) whether harmonization of interpretative comments is needed. Materials And Methods: This retrospective study was designed as a survey by the Working Group for Post-analytics as part of the national External Quality Assessment (EQA) program. All 195 MBLs participating in the national EQA scheme were invited to participate in the survey. Results are reported as percentages of the total number of survey participants. Results: Out of 195 MBLs, 162 participated in the survey (83%). Among them, 59% of MBLs implemented test result comments in routine practice according to national recommendations. The majority of laboratories (92%) state that interpretative comments add value to laboratory reports, and a substantial part (72%) does not have feedback from physicians on their significance. Although physicians and patients ask for expert opinion, participants stated that the lack of interest of physicians (64%) as well as the inability to access patients' medical records (62%) affects the quality of expert opinion. Conclusion: Although most participants state that they use interpretative comments and provide expert opinions regarding test results, the results of the present study indicate that harmonization of interpretative comments is needed. abstract_id: PUBMED:25543064 Anatomy and history of an external quality assessment program for interpretative comments in clinical biochemistry. The provision of clinical interpretation of results, either verbally or in the printed report, may be considered an integral part of the clinical biochemistry diagnostic service. Proficiency testing or external quality assessment (EQA) of such activity may be useful in education, training, continuing professional development and ensuring the quality of such service. Details of the Patient Report Comments Program (PRCProgram) developed by the Royal College of Pathologists of Australasia (RCPA) Chemical Pathology Quality Assurance Programs Pty Ltd (QAP) are described in this review.
The program is aimed at pathologists, clinical scientists and trainees. Registered participants are provided with a report containing case details and a set of clinical biochemistry results at monthly intervals and submit an interpretative comment for the report. Comments received are broken up into components that are translated into common key phrases. An expert panel evaluates the key phrases, classifies them according to appropriateness and drafts a suggested comment, a case summary and a rationale, which are included in a summary report returned to participants. There is considerable diversity in the quality of interpretative comments received from participants of the PRCProgram. The primary purpose of EQA of interpretative commenting is educational self-assessment, and it is recognized as a continuing professional development activity. Whilst there is some evidence for the utility of interpretative comments in improving patient outcomes, evidence for the utility of EQA in improving quality of comments is awaited. abstract_id: PUBMED:18275680 Are biochemistry interpretative comments helpful? Results of a general practitioner and nurse practitioner survey. Background: Adding or incorporating clinical interpretative comments on biochemistry results is widespread in UK laboratories; although this consumes considerable human resource, there is still little evidence to suggest that it is either effective or appreciated by our clinical colleagues. I therefore decided to survey our local general practitioners (GPs) and nurse practitioners to analyse whether they found biochemistry comments on reports helpful. Methods: A simple questionnaire was designed and sent to 159 GPs and 81 nurse practitioners asking them whether they found this activity useful for the limited range of test groups that we routinely comment on and also whether they would like to see commenting on more groups of tests. Results: Overall, 49.6% of questionnaires were returned. Of these, there was overwhelming support for commenting on reports and 77% would like to see comments on a greater range of tests. Conclusions: Although adding clinical interpretative comments is very time-consuming for senior laboratory staff, there is overwhelming support of this activity among our GPs and nurse practitioner users; therefore, our local policy of routinely adding clinical comments will remain for the foreseeable future. abstract_id: PUBMED:32369399 Do reflex comments on laboratory reports alter patient management? Introduction: Laboratory comments appended to clinical biochemistry reports are common in the UK. Although popular with clinicians and the public, there is little evidence that these comments influence the clinical management of patients. Methods: We provided reflex automated laboratory comments on all primary care lipid results including, if appropriate, recommendation of direct referral to the West Midlands Familial Hypercholesterolaemia service (WMFHS). Over a two-year period, the number of GP referrals from the Wolverhampton City Clinical Commissioning Group (CCG) to the WMFHS was compared with that from four comparator CCGs of similar population size, which were not provided with reflex laboratory comments. Results: Over the study period, the WMFHS received more referrals from Wolverhampton GPs (241) than any other comparator CCG (range 8-65) and greater than the combined referrals (172) from all four comparator CCGs.
Conclusion: Targeted reflex laboratory comments may influence the clinical management of patients and may have a role in the identification of individuals with familial hypercholesterolaemia. abstract_id: PUBMED:34286057 Interpretative commenting in clinical chemistry with worked examples for thyroid function test reports. Correct interpretation of pathology results is a requirement for accurate diagnosis and appropriate patient management. Clinical pathologists and scientists are increasingly focusing on providing quality interpretative comments on their reports, and these comments are appreciated by clinicians who receive them. Interpretative comments may improve patient outcomes by helping reduce errors in the application of results to patient management. Thyroid function test (TFT) results are one of the areas in clinical chemistry where interpretative commenting is practised by clinical laboratories. We have provided a series of TFT reports together with possible interpretative comments and a brief explanation of the comments. It is felt that this would be of help in setting up an interpretative service for TFTs and also assist in training and continuing education in their provision. abstract_id: PUBMED:33753978 Multicenter Survey of Physicians' Perception of Interpretative Commenting and Reflective Testing in Nigeria. Background: Interpretative commenting (IC) and reflective testing have recently generated interest because of their potential for adding value to clinical laboratory testing. Physicians' perception of this post-testing service in Nigeria is unknown. This study examined the practices and physicians' disposition regarding IC and reflective testing. Methods: This cross-sectional study was conducted among 232 doctors working in public and private hospitals across eight purposively selected states in Nigeria. Doctors who had worked and/or were currently working in a health facility within their state of residence, and who consented to participate in this survey, were given a structured questionnaire to fill in and return. Results: Paper-based reporting (213; 91.8%) was the most commonly practiced reporting method. One hundred and thirty-three (57.4%) doctors responded that interpretative comments were added to laboratory reports. "Free-handed text" (85/133; 63.9%) was the most commonly practiced form of IC; 184/232 (79.3%) and 166/232 (71.6%) doctors respectively considered comments on "potential implication of results" and "suggestions on further investigation" as the most "helpful" aspects of IC. Also, 192/232 (82.7%) doctors strongly agreed/agreed that IC influences patient management. Only 125 (53.7%) doctors responded that they welcomed reflective testing. Concerns about cost implications (68/107; 63.6%) and delays in release of results (48/107; 44.9%) were among the reasons for not supporting reflective testing. Conclusion: Nigerian doctors generally have a positive disposition towards the addition of interpretative comments, but less so concerning reflective testing. However, challenges such as the lack of LIS, EQA schemes for IC and gaps in physicians' education should be addressed to improve this aspect of laboratory services in Nigeria. abstract_id: PUBMED:30152298 Helpful and Hindering Events in Internet-Delivered Cognitive Behavioural Treatment for Generalized Anxiety. Background: Anxiety disorders are a highly prevalent cause of impairment globally, with generalized anxiety disorder (GAD) sharing many features with other anxiety disorders.
Aims: The present study investigated the helpful and hindering events and impacts for individuals with generalized anxiety who engaged with a supported 6-week online intervention based on cognitive behavioural therapy (iCBT). Method: Participants (n = 36) completed the Helpful and Hindering Aspects of Therapy (HAT) for each session. A descriptive-interpretative framework was used to analyse the data. Results: Participants identified CBT techniques, including psychoeducation, monitoring, cognitive restructuring and relaxation, as helpful events, and also found supporter interaction, mindfulness and reading personal stories helpful. The associated impacts were identified as support and validation; behavioural change/applying coping strategies; clarification, awareness, and insight; reassurance/relief; and self-efficacy/empowerment. Hindering events were identified as treatment content/form; and amount of work/technical issues, which led to impacts such as frustration/irritation; increased anxiety; and isolation. Conclusion: The implications of the results, potential future directions of research and limitations of the study are discussed. abstract_id: PUBMED:12038600 Review of a pilot quality-assessment program for interpretative comments. Background: Interpretative commenting constitutes an important aspect of the post-analytical phase in chemical pathology, but has only recently been the subject of quality assessment. The Royal College of Pathologists of Australasia (RCPA)-Australasian Association of Clinical Biochemists (AACB) Chemical Pathology Patient Report Comments Program is currently in its third year, having started in 2000 as a pilot program. We present a review of the pilot program. Methods: The program is aimed at individuals rather than laboratories. Two cases were circulated to participants of the Chemical Pathology Quality Assurance Program every month over a 6-month period. The case report contained the age and sex of the patient, together with brief clinical notes, the biochemistry results for commenting and other information of relevance. Three lines of space were given for the comment. The comments received from participants were broken down into their components and translated into common key phrases for the purpose of summarization and analysis. A histogram of the frequency of use of the common key phrases was generated. The comments and key phrases were not given scores or marks, nor was any other indication given as to their appropriateness. Results: This approach of simple peer-group comparison of comments without any assessment of the appropriateness of the comments was found to be inadequate; thus, when the program continues, key phrases will be classified according to degree of appropriateness and a suggested comment for each case will be proposed by an 'expert' panel. Conclusions: The program can serve a useful role in continuing education. Clinical biochemists and trainees who add interpretative comments to results produced by their laboratory, or give interpretative advice over the telephone, may potentially benefit from participating in this program.
It is important to ensure that interpretative comments provided by laboratory personnel are of high quality: comments should be patient-focused and answer the implicit or explicit question raised by the requesting clinician. Comment providers need to be adequately trained and qualified and be able to demonstrate their proficiency to provide advice on laboratory reports. External quality assessment (EQA) schemes can play a part in assessing and demonstrating the competence of such laboratory staff and have an important role in their education and continuing professional development. A standard structure is proposed for EQA schemes for interpretative comments in clinical chemistry, which addresses the scope and method of assessment including nomenclature and marking scales. There is a need for evidence that participation in an EQA program for interpretative commenting facilitates improved quality of comments. It is proposed that standardizing goals and methods of assessment as well as nomenclature and marking scales may help accumulate evidence to demonstrate the impact of participation in EQA for interpretative commenting on patient outcome. Answer: Biochemistry interpretative comments are considered helpful by many healthcare professionals. A survey of general practitioners (GPs) and nurse practitioners showed overwhelming support for the inclusion of clinical interpretative comments on biochemistry reports, with 77% of respondents expressing a desire to see comments on a broader range of tests (PUBMED:18275680). This suggests that such comments are valued and can be beneficial in clinical practice. Moreover, targeted reflex laboratory comments have been shown to potentially influence the clinical management of patients. For instance, a study found that reflex comments on lipid results led to a significant increase in GP referrals to a specialized service for familial hypercholesterolaemia, compared to areas where such comments were not provided (PUBMED:32369399). In the context of Nigeria, physicians generally have a positive disposition towards the addition of interpretative comments to laboratory reports, with a majority agreeing that these comments influence patient management (PUBMED:33753978). However, there is less support for reflective testing due to concerns about cost and delays in result release. The importance of interpretative comments is also recognized in other regions. In Croatia, a survey indicated that most medical biochemistry laboratories (MBLs) use interpretative comments and believe they add value to laboratory reports, although there is a need for harmonization (PUBMED:34955674). Similarly, in Nepal, there is a growing recognition of the importance of providing interpretative comments, despite challenges such as limited access to patient medical records and insufficient professional knowledge (PUBMED:36447805). External quality assessment programs for interpretative comments, such as the one developed by the Royal College of Pathologists of Australasia, aim to improve the quality of these comments through education and self-assessment (PUBMED:25543064). The need for high-quality interpretative comments is further emphasized by the proposal of a standard structure for external quality assessment schemes to ensure the proficiency of laboratory staff providing such comments (PUBMED:27641826). In conclusion, biochemistry interpretative comments are generally found to be helpful by healthcare professionals and can have a positive impact on patient management. 
However, there is a need for standardization, training, and quality assurance to maximize their effectiveness.
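Two of the abstracts above (PUBMED:25543064, PUBMED:12038600) describe the same EQA pipeline: participant comments are broken into components, mapped onto common key phrases, and tallied into a frequency histogram. A minimal sketch of that tallying step is given below; the phrase-normalization table and example comments are hypothetical, invented only to illustrate the mechanism.

```python
# Minimal sketch of the EQA summarization step described above: comments are
# split into components, mapped to common key phrases, and tallied. The
# normalization table and sample comments are hypothetical.
from collections import Counter

KEY_PHRASES = {  # comment fragment -> normalized key phrase (hypothetical)
    "suggest repeat fasting sample": "recommend repeat test",
    "repeat in 6 weeks": "recommend repeat test",
    "consistent with primary hypothyroidism": "suggest hypothyroidism",
    "check compliance with thyroxine": "query medication compliance",
}

def tally(comments: list[str]) -> Counter:
    counts: Counter = Counter()
    for comment in comments:
        for fragment, phrase in KEY_PHRASES.items():
            if fragment in comment.lower():
                counts[phrase] += 1
    return counts

submitted = [
    "TSH high, fT4 low. Suggest repeat fasting sample.",
    "Consistent with primary hypothyroidism; check compliance with thyroxine.",
]
for phrase, n in tally(submitted).most_common():
    print(f"{n:3d}  {phrase}")
```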
Instruction: Does the doctors' professional experience reduce referral rates? Abstracts: abstract_id: PUBMED:8725089 Does the doctors' professional experience reduce referral rates? Evidence from the Finnish referral study. Objective: To examine hospital referral rates in Finnish health centres according to doctors' and health centres' characteristics. Design: Survey of all general practice hospital referrals over one week. Setting: Central and northern part of Finland, in a region comprising 72% of the area of the country and one-third of the population. Participants: 851 health centre doctors. Outcome Measures: Referral rates in terms of characteristics of doctors and health centres. Results: During the study period, the 851 health centre doctors had 58 760 consultations (mean 69 patients/week), and 2 921 (5%) patients were referred to secondary care. The variation between the highest and lowest referral quintile of the doctors was almost 17-fold, and of the health centres 2.4-fold. Relatively more referrals were made by locums, young, not yet graduated and female doctors than by specialized, more experienced and male doctors. Conclusion: A low referral rate is closely connected with the extent of the general practitioners' professional experience and specialist training. Investing in specialist training and continuing medical education seems to be the best way to reduce high referral rates. abstract_id: PUBMED:38382180 'I don't belong anywhere': Identity and professional development in SAS doctors. Specialist, associate specialist and specialty (SAS) doctors constitute a marginalised professional group who can struggle to achieve the professional development they desire. Our primary objective was to understand, from a theoretically informed perspective, the ways in which the professional identity of SAS doctors influences their professional development opportunities, including through appraisal. Ten UK SAS doctors participated in in-depth, narrative interviews. Participants were drawn from six medical specialities, and ranged in experience (2.5-15 years) and country of primary medical qualification. Interview transcripts were analysed via critical discourse analysis using Figured Worlds theory. The position of SAS doctors within the Figured World was at times unstable, ambiguous and context dependent. They were often relative outsiders, not immediately trusted by colleagues. Some found their development needs eclipsed by the priorities of colleagues, typically consultants and postgraduate trainees. Appraisal was often not perceived to have successfully addressed these issues. This study enhances our understanding of the lived experience of SAS doctors, which is often in stark contrast to formal policy on the range of roles that they can fulfil. The struggles and successes of SAS doctors described here suggest that there is scope to improve the professional status and professional development opportunities for SAS doctors, including through appraisal. abstract_id: PUBMED:28373247 Practices and attitudes of doctors and patients to downward referral in Shanghai, China. Objectives: In China, the rate of downward referral is relatively low, as most people are unwilling to be referred from hospitals to community health systems (CHSs). The aim of this study was to explore the effect of doctors' and patients' practices and attitudes on their willingness for downward referral and the relationship between downward referral and sociodemographic characteristics.
Methods: Doctors and patients of 13 tertiary hospitals in Shanghai were stratified through random sampling. The questionnaire surveyed their sociodemographic characteristics, attitudes towards CHSs and hospitals, understanding of downward referral, recognition of the community first treatment system, and downward referral practices and willingness. Descriptive statistics, the χ2 test and stepwise logistic regression analysis were employed for statistical analysis. Results: Only 20.8% (161/773) of doctors were willing to accept downward referrals, although this proportion was higher among patients (37.6%, 326/866). Doctors' willingness was influenced by education, understanding of downward referral, and perception of health resources in hospitals. Patients' willingness was influenced by marital status, economic factors and recognition of the community first treatment system. Well-educated doctors who did not consider that downward referral would increase their workloads, and those with a more comprehensive understanding of hospitals and the downward referral process, were more likely to make a downward referral decision. Single-injury patients fully recognising the community first treatment system were more willing to accept downward referral. Patients' willingness was significantly increased if downward referral was cost-saving. A better medical insurance system was another key factor for patients to accept downward referral decisions, especially for the floating population. Conclusions: To increase the rate of downward referral, the Chinese government should optimise the current referral system and conduct universal publicity for downward referral. Doctors and patients should improve their understanding of downward referral. Hospitals should realise the necessity of downward referral, effectively reduce workloads and provide continuing education for doctors. Increasing monetary reimbursement is urgent, as is improving the medical insurance system. abstract_id: PUBMED:37063498 Early identification and referral of organ donors in five private hospitals: A survey to determine the knowledge and views of critical care professional nurses pre and post a PowerPoint training intervention. Background: Internationally, the declining number of organ donors does not meet the rising demand for life-saving solid organ transplant operations. Critical care professional nurses play an important role in the identification of organ donors. A deeper understanding is needed of missed opportunities in the identification and referral of potential organ donors to organ transplant co-ordinators in the critical care environment. Objectives: To describe the knowledge and views of critical care professional nurses (CCPNs) in South Africa concerning the early identification and referral of organ donors, and to describe the short-term effect of implementing a PowerPoint training intervention on nurses' knowledge in this domain. Methods: A mixed method, experimental, exploratory, descriptive study design was followed. Critical care professional nurses in seven intensive care units at five private hospitals completed a data collection instrument comprising 11 quantitative and three qualitative questions, before and after completing a PowerPoint training intervention. The post-test was done directly after the training intervention due to the environment of nursing shift work. Results: A total of 173 (79%) CCPNs participated, from a population of 218.
The median (interquartile range) score for all participants increased from 60% (48 - 76) prior to the PowerPoint training intervention, to 96% (88 - 96) after the intervention (p < 0.0001). Three main themes emerged from the qualitative questions: stress experienced by the CCPNs during the organ donor process (62%); inadequate collaboration between doctors and nurses (34%); and the need for a policy (9%) to guide the donor referral process. Conclusion: CCPNs' knowledge regarding the early identification and referral of potential organ donors increased significantly following a targeted PowerPoint training intervention. The identified themes suggest that CCPNs require support in order to effectively communicate with the donor's family and medical staff. Contributions Of The Study: The findings of this study will contribute to the increase in organ donor referrals. Increased education of critical care professional nurses and doctors on effective communication, and the early identification and referral of organ donors, will expectedly help to resolve the critical barriers in the organ donor process. abstract_id: PUBMED:7589938 Variability in GPs' referral rates in Spain. The purpose of this study is to explore the causes that may influence the variations in referral rates in a sample of 242 general practitioners (GPs) in Spain. We applied multivariate Poisson regression modeling to analyze the role played by different variables related to doctors, patients and practices. The mean referral rate was 6.92 ± 0.22, with a coefficient of variation of 50.6%. The results of the Poisson model showed a statistically significant variation in the following variables: 1) doctor gender; 2) proportion of consultations to the practice made by male patients; 3) proportion of consultations made by patients over 65 years of age; 4) list size; 5) number of doctors in the PCT; 6) number of practice consultations to each doctor per week; 7) accredited practice for VT; 8) location of practice; 9) proportion of outpatient referrals; 10) proportion of private referrals; 11) proportion of emergency referrals; 12) proportion of referrals in which the patient's attitude did not influence the doctor for the referral; 13) proportion of new referrals. The statistical significance for the final model was very high (P < 0.00001). The study draws attention to the influence of some structural characteristics of the health care system on referral rates that could be modified to reduce the number of referrals from primary to secondary care. abstract_id: PUBMED:32955230 Stress and occupational burnout in a population of Polish doctors - Organizational-professional and non-professional-social predictors. Introduction And Objective: Numerous studies have found that doctors are exposed to stress and burnout at work. In consequence, these processes lead to a decrease in the quality of life. The study aimed to determine whether professional burnout, understood as a cognitive existential process, is related to stress at work and whether any of the four dimensions of burnout predicts it. The study also analyzed relations between organizational-professional and non-professional-social factors and burnout and stress at work. Material And Methods: 318 doctors (210 women, 108 men), aged 27 - 8, participated in the study. The Link Burnout Questionnaire (LBQ) was used to measure burnout, and the Perceived Stress Scale (PSS-10) to measure the variable describing the current level of stress. Results: Occupational burnout is related to stress.
Two of its symptoms, exhaustion and a lack of a sense of professional effectiveness, are important predictors of the stress experienced by the examined group of doctors. The individuals who were in a partner relationship (but not marriage) experienced a stronger sense of non-professional effectiveness than those who were single. Being in an informal relationship is related to the level of stress and a lack of a sense of professional effectiveness. The level of stress, as well as all burnout indicators, is connected with the function of a team leader and having passion. The frequency of taking leave and using social networks is related to the level of stress and burnout. Conclusions: Problems related to burnout in doctors, and therefore in people professionally involved in helping and treating, must not be underestimated, as evidenced by the results of the presented study. Professional burnout of doctors leads to somatic and psychological problems. Doctors suffering from occupational burnout need support and psychological assistance just as any other professional group does. abstract_id: PUBMED:28711898 Professional burnout syndrome in doctors of surgical specialties in Ukraine: causes, consequences, labor optimization ways. Introduction: Professional burnout syndrome (PBS) affects the quality of medical care, an issue of particular relevance in the context of health care reform. Aim: To study ways to improve the efficiency of doctors of surgical specialties based on analysis of PBS and its consequences. Materials And Methods: A survey using psychological tests was carried out among 62 doctors of surgical specialties. Results: It was found that PBS reaches a peak after 11 to 15 years of working experience. Anesthesiologists have high levels of PBS, emotional exhaustion and cynicism, a low desire for career growth and frequent misunderstandings with the administration; they prefer an 8-hour working day and relieve stress by sleeping and consuming alcohol. Obstetrician-gynecologists show a moderate level of PBS and emotional exhaustion, a high degree of cynicism, a strong desire for career growth and frequent misunderstandings with patients and their relatives; they prefer an 8-hour working day and relieve stress by smoking and socializing with family and friends. Trauma surgeons have a moderate level of PBS and emotional exhaustion, a high degree of cynicism, a strong desire for career growth and frequent misunderstandings with colleagues of related specialties; they prefer a 24-hour working day and relieve stress with alcohol and sports. Surgeons have a moderate level of PBS and emotional exhaustion, a low degree of cynicism, a moderate desire for career growth and frequent misunderstandings with colleagues of related specialties; they prefer an 8-hour working day and relieve stress by smoking and sleeping. Conclusions: PBS is most pronounced in doctors with 11 to 15 years of working experience and in anesthesiologists, who develop professional deformation. These features should be considered in organizing the working process of medical teams. abstract_id: PUBMED:28343203 Developing the professional competence of future doctors in the instructional setting of higher medical educational institutions. Introduction: The main objective of higher medical education is the continuous professional improvement of physicians, at both undergraduate and postgraduate levels, to meet the needs dictated by the modern world.
In this respect, the system of higher medical education has undergone certain changes - from determining the range of professional competences to the adoption of new standards of education in medicine. Aim: The article aims to analyze the parameters of doctors' professionalism in the context of a competence-based approach and to develop practical recommendations for the improvement of instruction techniques. Materials And Methods: The authors reviewed the psycho-pedagogical materials and summarized the acquired experience of teachers at higher medical institutions regarding the development of instruction techniques in the modern educational process. The study is based on the results of testing via the technique developed by T.I. Ilyina. Analytical and biblio-semantic methods were used in the paper. Results: It has been found that the training process at medical educational institutions should focus on learning outcomes. The authors defined the quality parameters of doctors' training and suggested a model for developing the professional competence of medical students. This model explains the cause-and-effect relationships between the forms of instruction, teaching techniques and specific components of professional competence in future doctors. Conclusions: The paper provides practical recommendations on developing the core competencies which a qualified doctor should master. An analysis of existing interactive media in Ukraine and abroad has been performed. It has been found that teaching the core disciplines with the use of the latest technologies and interactive means keeps abreast of the times, while teaching social studies and humanities to medical students still involves certain difficulties. abstract_id: PUBMED:3448226 Variation in general practitioners' referral rates to consultants. The variation in the number of patients general practitioners refer to hospital is a source of concern because of the costs generated and the implications for quality and quantity of care. This paper compares 32 general practitioners with high referral rates with 35 doctors with low referral rates drawn from a study of 201 doctors. The mean referral rate for all 201 doctors was 6.6 per 100 consultations - for those with high referral rates the mean was 11.8 and for those with low referral rates 2.9. Differences between doctors with high and low referral rates with respect to age, sex, social class and diagnostic case mix of patients consulting were small. Doctors with high referral rates referred more patients in all categories. There were also few differences between the two groups with respect to the characteristics of the doctors themselves or their practices. The findings are discussed in the context of proposals to provide general practitioners with information on their own referral rates compared with those of other doctors. abstract_id: PUBMED:25161972 A qualitative study on working experience of rural doctors in Malappuram district of Kerala, India. Background: Improving the working conditions of rural doctors is an important issue to increase the quality of health services to target groups. Objectives: To assess the working experience of rural doctors at the primary health care level. Materials And Methods: This qualitative study was conducted among 30 medical officers from 21 primary health centers in Malappuram district of Kerala, India. In-depth interviews were conducted, and content analysis was performed with the identification of themes based on the responses obtained.
Results: There were 19 males and 11 females, belonging to the 25 to 55 years age group. About 70% (21) of them were graduates with an MBBS qualification, and the rest were postgraduates. About two-thirds of them (20) had less than 5 years of experience. They expressed difficulty in managing the work within the stipulated time period; however, this had never affected their outpatient management in any way. They reported that higher authorities were supportive, but that they faced some opposition from the public in implementing national programs. A few opined that the training received was grossly insufficient for running the administrative affairs of the health center. Most of them were satisfied with the physical infrastructure, but manpower, including medical officers and supporting staff, was not sufficient. Some opined that the age of retirement is too early and should be increased. They participated in Continuing Medical Education, but expressed that its content should be suited to the primary health care level. Conclusion: This study highlighted their concerns about patient care and time, field work, administrative work, infrastructure, professional development, and future prospects. Further large-scale evaluation studies will explore the situational analysis of these issues. Answer: Yes, doctors' professional experience appears to reduce referral rates. Evidence from the Finnish referral study indicates that a low referral rate is closely connected with the extent of the general practitioners' professional experience and specialist training. The study found that more experienced and specialized doctors had lower referral rates compared to locums, young, not yet graduated, and female doctors (PUBMED:8725089). Additionally, a study in Spain showed that referral rates varied among general practitioners and highlighted the influence of doctor gender and experience on these rates (PUBMED:7589938). These findings suggest that investing in specialist training and continuing medical education may be effective ways to reduce high referral rates (PUBMED:8725089).
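The referral rates in this answer are simple event-per-consultation proportions, and the quoted "almost 17-fold" spread is a ratio of quintile rates. The sketch below reproduces the overall-rate arithmetic from the Finnish figures (58,760 consultations, 2,921 referrals); the per-quintile rates are hypothetical placeholders, since the abstract reports only the fold difference.

```python
# Referral-rate arithmetic. Totals are from PUBMED:8725089; the quintile
# rates below are hypothetical, since only the fold difference is reported.

consultations = 58_760
referrals = 2_921
rate = referrals / consultations
print(f"overall referral rate: {rate:.1%}")        # ~5.0%, as quoted
print(f"per 100 consultations: {100 * rate:.1f}")

# "Almost 17-fold" variation = top-quintile rate / bottom-quintile rate.
low_quintile_rate, high_quintile_rate = 0.012, 0.20  # hypothetical
print(f"fold difference: {high_quintile_rate / low_quintile_rate:.0f}x")
```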
Instruction: Is there a role for computerized decision support for drug dosing in general practice? Abstracts: abstract_id: PUBMED:11123489 Is there a role for computerized decision support for drug dosing in general practice? A questionnaire survey. Objective: To determine: (i) whether general practitioners have difficulty with drug dosing; (ii) what information sources they currently use to help them with drug dosing; (iii) their views on the potential value of decision support software for drug dosing. Design: Questionnaire survey. Setting: Nottingham, U.K. Participants: 263 general practitioners (GPs). Results: The response rate was 78% (263/336). Most GPs reported difficulties with drug dosing for children, the elderly and patients with renal impairment. Compared with 'patients in general', GPs had particular difficulties in drug dosing for these specific groups (P < 0.001). Paper-based formularies were the most common source of information for help with drug doses. Nevertheless, most GPs had positive views on the potential usefulness of computerized decision support. Conclusion: GPs commonly have problems in drug dosing for certain groups of patients. The development and use of computerized decision support might help GPs in these situations. abstract_id: PUBMED:37041631 SupportPrim-a computerized clinical decision support system for stratified care for patients with musculoskeletal pain complaints in general practice: study protocol for a randomized controlled trial. Background: Musculoskeletal disorders accounted for 149 million years lived with disability in 2019 and are the main cause of years lived with disability worldwide. Current treatment recommendations are based on a "one-size-fits-all" principle, which does not take into account the large degree of biopsychosocial heterogeneity in this group of patients. To compensate for this, we developed a stratified care computerized clinical decision support system for general practice based on patient biopsychosocial phenotypes; furthermore, we added personalized treatment recommendations based on specific patient factors to the system. In this study protocol, we describe the randomized controlled trial for evaluating the effectiveness of a computerized clinical decision support system for stratified care for patients with common musculoskeletal pain complaints in general practice. The aim of this study is to test the effect of a computerized clinical decision support system for stratified care in general practice on subjective patient outcome variables compared to current care. Methods: We will perform a cluster-randomized controlled trial with 44 general practitioners including 748 patients seeking their general practitioner due to pain in the neck, back, shoulder, hip, knee, or multisite. The intervention group will use the computerized clinical decision support system, while the control group will provide current care for their patients. The primary outcomes assessed at 3 months are global perceived effect and clinically important improvement in function measured by the Patient-Specific Function Scale (PSFS), while secondary outcomes include change in pain intensity measured by the Numeric Rating Scale (0-10), health-related quality of life (EQ-5D), general musculoskeletal health (MSK-HQ), number of treatments, use of painkillers, sick-leave grading and duration, referral to secondary care, and use of imaging.
Discussion: Using biopsychosocial profiles to stratify patients, implemented in a computerized clinical decision support system for general practitioners, is a novel method of providing decision support for this patient group. The study aims to recruit patients from May 2022 to March 2023, and the first results will be available in late 2023. Trial Registration: The trial was registered with ISRCTN on 11 May 2022 (ISRCTN14067965). abstract_id: PUBMED:21109619 Electronic information and clinical decision support for prescribing: state of play in Australian general practice. Background: Investments in eHealth worldwide have been mirrored in Australia, with >90% of general practices computerized. Recent eHealth incentives promote the use of up-to-date electronic information sources relevant to general practice with flexibility in mode of access. Objective: To determine GPs' access to and use of electronic information sources and computerized clinical decision support systems (CDSSs) for prescribing. Methods: Semi-structured interviews were conducted with 18 experienced GPs and nine GP trainees in New South Wales, Australia in 2008. A thematic analysis of interview transcripts was undertaken. Results: Information needs varied with clinical experience, and people resources (specialists, GP peers and supervisors for trainees) were often preferred over written formats. Experienced GPs used a small number of electronic resources and accessed them infrequently. Familiarity from training and early clinical practice and easy access were dominant influences on resource use. Practice time constraints meant relevant information needed to be readily accessible during consultations, requiring integration or direct access from prescribing software. Quality of electronic resource content was assumed, and cost was a barrier for some GPs. Conclusions: The current Australian practice incentives do not prescribe which information resources GPs should use. Without integration into practice computing systems, uptake and routine use seem unlikely. CDSS developments must recognize the time pressures of practice, preference for integration and cost concerns. Minimum standards are required to ensure that high-quality information resources are integrated and regularly updated. Without standards, the anticipated benefits of computerization on patient safety and health outcomes will be uncertain.
Results: Thirty-three randomized controlled trials were identified, assessing the effect of a CCDSS on management of vitamin K antagonists (14), insulin (6), theophylline/aminophylline (4), aminoglycosides (3), digoxin (2), lidocaine (1), or as part of a multifaceted approach (3). Cluster randomization was rarely used (18%) and CCDSSs were usually stand-alone systems (76%) primarily used by physicians (85%). Overall, 18 of 30 studies (60%) showed an improvement in the process of care and 4 of 19 (21%) an improvement in patient outcomes. All evaluable studies assessing insulin dosing for glycaemic control showed an improvement. In meta-analysis, CCDSSs for vitamin K antagonist dosing significantly improved time in therapeutic range. Conclusions: CCDSSs have potential for improving process of care for therapeutic drug monitoring and dosing, specifically insulin and vitamin K antagonist dosing. However, studies were small and generally of modest quality, and effects on patient outcomes were uncertain, with no convincing benefit in the largest studies. At present, no firm recommendation for specific systems can be given. More potent CCDSSs need to be developed and should be evaluated by independent researchers using cluster randomization and primarily assess patient outcomes related to drug efficacy and safety. abstract_id: PUBMED:29250222 Effects of Computerized Decision Support Systems on Management of Atrial Fibrillation: A Scoping Review. Background: The potential role of computerized decision support systems in the management of atrial fibrillation is not well understood. Objectives: To systematically review studies that evaluate the effects of computerized decision support systems and decision aids on aspects pertaining to atrial fibrillation. Data Sources: We searched the Medline, Scopus and Cochrane databases. The last date of search was January 10, 2016. Selection Criteria: Computerized decision support systems that help manage atrial fibrillation and decision aids that provide useful knowledge for patients with atrial fibrillation and help them to self-care. Data Collection And Analysis: Two reviewers extracted data and summarized findings. Due to heterogeneity, meta-analysis was not feasible; mean differences of outcomes and confidence intervals for a difference between two means were reported. Results: Seven eligible studies were included in the final review. There was one observational study without controls, three observational studies with controls, one non-randomized controlled trial and two randomized controlled trials. The interventions were three decision aids that were used by patients and four computerized decision support systems. The main outcomes of the studies were: stroke events and major bleeding (one article), changing doctor-nurse behavior (three articles), time in therapeutic International Normalized Ratio range (one article), decision conflict scale (two articles), and patient knowledge and anxiety about stroke and bleeding (two articles). Conclusions: A computerized decision support system may decrease decision conflict and increase knowledge of patients with atrial fibrillation (AF) about the risks of AF and AF treatments. The effect of computerized decision support systems on outcomes such as changing doctor-nurse behavior, anxiety about stroke and bleeding, and stroke events could not be shown. We need more studies to evaluate the role of computerized decision support systems in patients with atrial fibrillation.
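One detail in PUBMED:21824384 worth making concrete is its vote-counting rule: a trial counted as positive when at least 50% of its relevant outcomes were statistically significantly positive. A minimal sketch of that rule in Python, with made-up p-values purely for illustration:

    # Vote-counting rule from PUBMED:21824384: a study is "positive" if
    # at least 50% of its relevant outcomes are significantly positive.
    def study_is_positive(outcome_results, alpha=0.05):
        # outcome_results: list of (p_value, favours_ccdss) tuples for one study
        positive = sum(1 for p, favours in outcome_results if p < alpha and favours)
        return positive >= len(outcome_results) / 2

    # Hypothetical study: three outcomes, two significantly favour the CCDSS.
    example = [(0.01, True), (0.03, True), (0.40, False)]
    print(study_is_positive(example))  # True

Note that vote counting of this kind ignores effect sizes and outcome importance, which is one reason the review itself treats its conclusions cautiously.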
abstract_id: PUBMED:22052899 Guided medication dosing for elderly emergency patients using real-time, computerized decision support. Objective: To evaluate the impact, on physician ordering behavior and on adverse drug events (ADEs), of a real-time computerized decision support tool in the emergency department that guides medication dosing for the elderly. Design: A prospective controlled trial was conducted over 26 weeks. The status of the decision support tool alternated OFF (7/17/06-8/29/06), ON (8/29/06-10/10/06), OFF (10/10/06-11/28/06), and ON (11/28/06-1/16/07) in consecutive blocks during the study period. In patients ≥65 who were ordered certain benzodiazepines, opiates, non-steroidals, or sedative-hypnotics, the computer application either adjusted the dosing or suggested a different medication. Physicians could accept or reject recommendations. Measurements: The primary outcome compared medication ordering consistent with recommendations during ON versus OFF periods. Secondary outcomes included the admission rate, emergency department length of stay for discharged patients, 10-fold dosing orders, use of a second drug to reverse the original medication, and rate of ADEs using previously validated explicit chart review. Results: 2398 orders were placed for 1407 patients over 1548 visits. The majority (49/53; 92.5%) of recommendations for alternate medications were declined. More orders were consistent with dosing recommendations during ON (403/1283; 31.4%) than OFF (256/1115; 23%) periods (p≤0.0001). 673 (43%) visits were reviewed for ADEs. The rate of ADEs was lower during ON (8/237; 3.4%) compared with OFF (31/436; 7.1%) periods (p=0.02). The remaining secondary outcomes showed no difference. Limitations: Single-institution study; retrospective chart review for ADEs. Conclusion: Though overall agreement with recommendations was low, real-time computerized decision support resulted in greater acceptance of medication recommendations. Fewer ADEs were observed when computerized decision support was active. abstract_id: PUBMED:21417501 The role of computerized decision support in reducing errors in selecting medicines for prescription: narrative review. This narrative review includes a summary of research examining prescribing errors, prescription decision making and the role computerized decision support plays in this decision-making process.
A reduction in medication prescribing errors, specifically a reduction in the selection of inappropriate medications, is expected to result from the implementation of an effective computerized decision support system. Previous research has investigated the impact of the implementation of electronic systems on medication errors more broadly. This review examines the specific characteristics of decision support systems that may contribute to fewer knowledge-based mistakes in prescribing, and critically appraises the large volume of information available on the decision-making process of selecting medicines for prescription. The results highlight a need for work investigating what decision strategies are used by doctors with different levels of expertise in the prescribing of medications. The nature of the relationship between decision support and decision performance is not well understood and future research is needed to determine the mechanisms by which computerized decision support influences medication selection. abstract_id: PUBMED:18999099 Improving aminoglycoside dosing through computerized clinical decision support and pharmacy therapeutic monitoring systems. Dosing errors and inadequate laboratory monitoring of aminoglycosides may lead to significant adverse drug events. Correctly prescribing aminoglycosides requires familiarity with multiple mathematically complicated dosing and laboratory monitoring protocols. We developed a clinical decision support system for ordering aminoglycosides which is integrated into a computerized provider order entry system. To complement the ordering tool, we implemented a real-time tracking application to allow the pharmacist-staffed therapeutic drug monitoring service to follow patients prescribed aminoglycosides. abstract_id: PUBMED:8591356 Decision support in the United Kingdom for general practice: past, present, and future. This paper looks at three aspects of Decision Support in the UK in relation to primary health care: 1) the past history; 2) the extent and use of clinical computing in general practice today, with the growing appearance of decision support systems; and 3) a brief report from a vision building workshop hosted by 'DECSI' to look at the need doctors will have for decision support in primary health care in the future. Answer: Yes, there is a role for computerized decision support for drug dosing in general practice. General practitioners (GPs) often encounter difficulties with drug dosing, especially for specific patient groups such as children, the elderly, and patients with renal impairment. Most GPs reported these difficulties and expressed positive views on the potential usefulness of computerized decision support to assist with drug dosing (PUBMED:11123489). Computerized clinical decision support systems (CCDSSs) have been shown to improve the process of care for therapeutic drug monitoring and dosing, particularly for insulin and vitamin K antagonist dosing. However, the effects on patient outcomes are less certain, and more robust studies are needed to evaluate these systems further (PUBMED:21824384). Additionally, real-time computerized decision support has been demonstrated to result in greater acceptance of medication recommendations and a lower rate of adverse drug events (ADEs) when active (PUBMED:22052899). 
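The ADE comparison cited just above from PUBMED:22052899 (8 of 237 visits with an ADE during ON periods versus 31 of 436 during OFF periods) can be re-run from the published counts. A hedged sketch using SciPy; the abstract reports p = 0.02 from the authors' own analysis, and a standard chi-square or Fisher's exact test on the raw counts may yield a somewhat different value:

    # Comparing ADE rates between decision-support ON and OFF periods,
    # using the counts reported in PUBMED:22052899.
    from scipy.stats import chi2_contingency, fisher_exact

    table = [[8, 237 - 8],     # ON:  ADE vs no ADE
             [31, 436 - 31]]   # OFF: ADE vs no ADE

    chi2, p_chi2, dof, expected = chi2_contingency(table)
    odds_ratio, p_fisher = fisher_exact(table)
    print(f"chi-square p = {p_chi2:.3f}, Fisher p = {p_fisher:.3f}")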
The development of CCDSSs for stratified care based on patient biopsychosocial phenotypes in general practice is a novel method that shows promise for providing decision support for patients with musculoskeletal pain complaints (PUBMED:37041631). Furthermore, the integration of decision support systems into existing clinical workflows and charts can enrich the decision-making process without disrupting routine use (PUBMED:17238415). Overall, the literature suggests that computerized decision support can play a significant role in reducing errors in selecting medicines for prescription and improving the safety and efficacy of drug dosing in general practice (PUBMED:21417501). However, the successful implementation of such systems requires careful consideration of workflow integration, ease of access, cost, and the need for regular updates to ensure high-quality and up-to-date information (PUBMED:21109619).
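As a coda to this question: PUBMED:18999099 above notes that aminoglycoside ordering requires familiarity with mathematically complicated dosing and monitoring protocols, which is exactly the kind of rule a dosing CDSS encodes. A purely illustrative sketch combining the Cockcroft-Gault creatinine clearance estimate with Hartford-style extended-interval thresholds; this is not the cited system's actual logic and is not clinical guidance:

    # Illustrative sketch only - NOT the logic of the system in PUBMED:18999099
    # and not clinical guidance. It shows the kind of rule such a CDSS encodes:
    # estimate creatinine clearance (Cockcroft-Gault), then pick a dosing
    # interval from a Hartford-style extended-interval nomogram.
    def cockcroft_gault(age_years, weight_kg, serum_cr_mg_dl, female):
        crcl = ((140 - age_years) * weight_kg) / (72 * serum_cr_mg_dl)
        return crcl * 0.85 if female else crcl

    def extended_interval(crcl_ml_min):
        # Interval thresholds as commonly cited for 7 mg/kg extended-interval
        # dosing; verify against local protocols before any real use.
        if crcl_ml_min >= 60:
            return "q24h"
        if crcl_ml_min >= 40:
            return "q36h"
        if crcl_ml_min >= 20:
            return "q48h"
        return "conventional dosing with serial levels"

    crcl = cockcroft_gault(age_years=72, weight_kg=68, serum_cr_mg_dl=1.4, female=True)
    print(f"CrCl ~ {crcl:.0f} mL/min -> 7 mg/kg {extended_interval(crcl)}")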
Instruction: Experience as a doctor in the developing world: does it benefit the clinical and organisational performance in general practice? Abstracts: abstract_id: PUBMED:20003532 Experience as a doctor in the developing world: does it benefit the clinical and organisational performance in general practice? Background: Many physicians have medical experience in developing countries early in their career, but its association with their medical performance later is not known. To explore possible associations, we compared primary care physicians (GPs) with and without professional experience in a developing country on both clinical and organisational performance. Methods: A retrospective survey using two databases to analyse clinical and organisational performance respectively. Analysis was done at the GP level and practice level. A total of 517 GPs received a questionnaire regarding relevant working experience in a developing country. Indicators for clinical performance were: prescription, referral, external diagnostic procedures and minor procedures. We used the district health insurance database covering 570,000 patients. Explorative secondary analysis of practice visits of 1004 GPs in 566 practices in the Netherlands from 1999 to 2001. We used a validated practice visit method (VIP; 385 indicators in 51 dimensions of practice management) to compare having experience in a developing country or not. Results: Almost 8% of the GPs had experience in a developing country of at least two years. These GPs referred 9.5% less than their colleagues and did more surgical procedures. However, in the multivariate analysis 'experience in a developing country' was not significantly associated with clinical performance or with other GP and practice characteristics. 16% of the practices had a GP or GPs with at least two years' experience in a developing country. They worked more often in group and rural practices with fewer patients per FTE GP and more often part-time. These practices are more hygienic, collaborate more with the hospital and score better on organisation of the practice. These practices score lower on service and availability, spend less time on patients in the consultation, and the quality of recording in the EMD is lower. Conclusions: We found interesting differences in clinical and organisational performance between GPs with and without medical experience in developing countries and between their practices. It is not possible to attribute these differences to this experience, because the choice for medical experience in a tropical country probably reflects individual differences in professional motivation and personality. Experience in a developing country may be just as valuable for later performance in general practice as experience at home. abstract_id: PUBMED:30509873 Lessons for 'large-scale' general practice provider organisations in England from other inter-organisational healthcare collaborations. Policymakers in England are increasingly encouraging the formation of 'large-scale' general practice provider collaborations with the expectation that this will help deliver better-quality services and generate economies of scale. However, solid evidence that these expectations will be met is limited. This paper reviews evidence from other inter-organisational healthcare collaborations with similarities in their development or anticipated impact to identify lessons. Medline, SSCI, Embase and HMIC database searches identified a range of initiatives which could provide transferable evidence.
Iterative searching was undertaken to identify further relevant evidence. Thematic analysis was used to identify areas to consider in the development of large-scale general practice providers. Framework analysis was used to identify challenges which may affect the ability of such providers to achieve their anticipated impact. A narrative approach was used to synthesise the evidence. Trade-offs exist in 'scaling-up' between mandated and voluntary collaboration; networks versus single organisations; small versus large collaborations; and different types of governance structures in terms of sustainability and performance. While positive impact seems plausible, evidence suggests that it is not a given that clinical outcomes or patient experience will improve, nor that cost savings will be achieved as a result of increasing organisational size. Since the impact and potential unintended consequences are not yet clear, it would be advisable for policymakers to move with caution, and be informed by ongoing evaluation. abstract_id: PUBMED:8343704 Developing organisational vision in general practice. Vision is a fashionable but ill-defined term in management circles. Nevertheless, it embodies a significant concept related to guiding an organisation from present realities, through opportunities and hazards, to a viable future. Until recently, a typical general practice could assume a stable external environment, but now it is caught up in the uncertainties stemming from the NHS reforms. For such a practice to undertake effective strategic planning, it will have to develop a vision connecting the present with aspirations for the future. While vision is usually considered to be an individual talent, it is possible to develop a collective organisational vision within a general practice, and the small size of general practices makes this relatively easy. The vision needs to be broad; it needs to be continuous; and its capacity to predict the future needs to be monitored. abstract_id: PUBMED:37269443 Evaluating the potential outcomes of pharmacist-led activities in the Australian general practice setting: a prospective observational study. Background: Pharmacists have been co-located in general practice teams to support the quality use of medicines and optimise patient health outcomes. Evidence of the impact of pharmacist-led activities in Australian general practices is sparse. Aim: This study aimed to evaluate the potential outcomes of pharmacist-led activities in Australian general practices. Method: A prospective observational study was conducted in eight general practices in the Australian Capital Territory, where each general practice employed a pharmacist on a part-time basis for 18 months. A recommended, but flexible, list of activities was provided for pharmacists. Descriptive information on general practice pharmacist-led activities, collected with an online diary, was analysed. The potential clinical, economic, and organisational impact of pharmacist-led clinical activities was evaluated using the CLinical Economic Organisational (CLEO) tool, with a modified economic dimension. Results: Nine pharmacists reported 4290 activities over 3918.5 work hours in general practice. Medication management services were the primary clinical activity of pharmacists. In medication reviews, 75% of the pharmacists' recommendations were fully accepted by general practitioners.
Conducting clinical audits, updating patients' medical records, and providing information to patients and staff were other major activities of pharmacists. Of 2419 clinical activities, around 50% had the potential for a moderate or major positive clinical impact on patients. Sixty-three per cent of activities had the potential to decrease healthcare costs. Almost all the pharmacist-led clinical activities had a positive organisational impact. Conclusion: Most pharmacist-led clinical activities in general practice had the potential for a positive impact on patients and reduction in healthcare costs, supporting the expansion of this model in Australia. abstract_id: PUBMED:31635536 Transitioning to a general practice placement for the foundation doctor. Time spent training in general practice can be highly beneficial for junior doctors irrespective of their future specialty choice. A large number of foundation year two doctors from the United Kingdom will undertake time in general practice as part of the compulsory Foundation Programme for new medical graduates following recommendations for all such rotations to include a community placement. For the majority, this will be their first time working in primary care post-qualification and this role will bring significant new clinical and professional challenges. In this article we give thirty points of advice for foundation doctors starting a general practice rotation and additional insight for their clinical supervisors, grouped into clinical, consultation-related and general points, as informed by the authors' experience and an electronic survey of foundation doctors and general practice trainers. abstract_id: PUBMED:28449716 The role of informal dimensions of safety in high-volume organisational routines: an ethnographic study of test results handling in UK general practice. Background: The handling of laboratory, imaging and other test results in UK general practice is a high-volume organisational routine that is both complex and high-risk. Previous research in this area has focused on errors and harm, but a complementary approach is to better understand how safety is achieved in everyday practice. This paper ethnographically examines the role of informal dimensions of test results handling routines in the achievement of safety in UK general practice and how these findings can best be developed for wider application by policymakers and practitioners. Methods: Non-participant observation of high-volume organisational routines was conducted across eight UK general practices with diverse organisational characteristics. Sixty-two semi-structured interviews were also conducted with the key practice staff alongside the analysis of relevant documents. Results: While formal results handling routines were described similarly across the eight study practices, the everyday structure of how the routine should be enacted in practice was informally understood. Results handling safety took a range of local forms depending on how different aspects of safety were prioritised, with practices varying in terms of how they balanced thoroughness (i.e. ensuring the high-quality management of results by the most appropriate clinician) and efficiency (i.e. timely management of results) depending on a range of factors (e.g. practice history, team composition). Each approach adopted created its own potential risks, with demands for thoroughness reducing productivity and demands for efficiency reducing handling quality.
Irrespective of the practice-level approach adopted, staff also regularly varied what they did for individual patients depending on the specific context (e.g. type of result, patient circumstances). Conclusions: General practices variably prioritised a legitimate range of results handling safety processes and outcomes, each with differing strengths and trade-offs. Future safety improvement interventions should focus on how to maximise practice-level knowledge and understanding of the range of context-specific approaches available and the safeties and risks inherent in each within the context of wider complex system conditions and interactions. This in turn has the potential to inform new kinds of proactive, contextually appropriate approaches to intervention development and implementation focusing on the enhanced deliberation of the safety of existing high-volume routines. abstract_id: PUBMED:28096897 The impact of organisational change and fiscal restraint on organisational culture. Background: Strategies to implement evidence-based practice have highlighted the bidirectional relationship between organisational change and organisational culture. The present study examined changes in perceptions of organisational culture in two community mental health services implementing cognitive therapies into routine psychosis care over 3 years. During the time of the study there were a number of shared planned and unplanned changes that the mental health services had to accommodate. One service, Metro South, had the additional challenge of embarking on a major organisational restructure. Methods: A survey of organisational culture was administered to clinical staff of each service at yearly intervals over the 3 years. Results: At baseline assessment there was no significant difference between the two services in organisational culture. At the midpoint assessment, which was conducted at the time the Metro South restructure was operationalized, there were less positive ratings of organisational culture recorded in Metro South compared to the other service. Organisational culture returned to near-baseline levels at endpoint assessment. Conclusions: These findings are consistent with the literature indicating that organisational culture is relatively robust and resilient. They are also consistent with the literature suggesting that, at any one time, a service or organisation may have a finite capacity to absorb change. Consequently, this limitation needs to be taken into account in the timing and planning of major service reform where possible. The results also extend the literature, insofar as external factors with a high impact on the operation of an organisation may affect organisational culture, albeit temporarily. abstract_id: PUBMED:27078787 Trial of the Primary Care Practice Improvement Tool: building organisational performance in Australian general practice and primary health care. Objective: To nationally trial the Primary Care Practice Improvement Tool (PC-PIT), an organisational performance improvement tool previously co-created with Australian primary care practices to increase their focus on relevant quality improvement (QI) activities. Design: The study was conducted from March to December 2015 with volunteer general practices from a range of Australian primary care settings. We used a mixed-methods approach in two parts.
Part 1 involved staff in Australian primary care practices assessing how they perceived their practice met (or did not meet) each of the 13 PC-PIT elements of high-performing practices, using a 1-5 Likert scale. In Part 2, two external raters conducted an independent practice visit to objectively assess the subjective practice assessment from Part 1 against objective indicators for the 13 elements, using the same 1-5 Likert scale. Concordance between the raters was determined by comparing their ratings. In-depth interviews conducted during the independent practice visits explored practice managers' experiences and perceived support and resource needs to undertake organisational improvement in practice. Results: Data were available for 34 general practices participating in Part 1. For Part 2, independent practice visits and the inter-rater comparison were conducted for a purposeful sample of 19 of the 34 practices. Overall concordance between the two raters for each of the assessed elements was excellent. Three practice types across a continuum of higher- to lower-scoring practices were identified, with each using the PC-PIT in a unique way. During the in-depth interviews, practice managers identified benefits of having additional QI tools that relate to the PC-PIT elements. Conclusions: The PC-PIT is an organisational performance tool that is acceptable, valid and relevant to our range of partners and the end users (general practices). Work is continuing with our partners and end users to embed the PC-PIT in existing organisational improvement programs. abstract_id: PUBMED:20194220 Readiness for organisational change among general practice staff. Background: Increasing demands on general practice to manage chronic disease may warrant organisational change at the practice level. Staff's readiness for organisational change can act as a facilitator or barrier to implementing interventions aimed at organisational change. Objectives: To explore general practice staff readiness for organisational change and its association with staff and practice characteristics. Methods: This is a cross-sectional study of practices in three Australian states involved in a randomised controlled trial on the effectiveness of an intervention to enhance the role of non-general practitioner staff in chronic disease management. Readiness for organisational change, job satisfaction and practice characteristics were assessed using questionnaires. Results: 502 staff from 58 practices completed questionnaires. Practice characteristics were not associated with staff readiness for change. A multilevel regression analysis showed statistically significant associations between staff readiness for organisational change (range 1 to 5) and having a non-clinical staff role (vs general practitioner; B=-0.315; 95% CI -0.47 to -0.16; p<0.001), full-time employment (vs part-time; B=0.175, 95% CI 0.06 to 0.29; p<0.01) and lower job satisfaction (B=-0.277, 95% CI -0.40 to -0.15; p<0.001). Conclusions: The results suggest that different approaches are needed to facilitate change which addresses the mix of practice staff. Moderately low job satisfaction may be an opportunity for organisational change. abstract_id: PUBMED:26856800 Organisational support for evidence-based practice: occupational therapists' perceptions. Background: Barriers to the use of evidence-based practice extend beyond the individual clinician and often include organisational barriers.
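The multilevel regression reported in PUBMED:20194220 above (staff nested within practices, readiness on a 1-5 scale) maps directly onto a mixed model with a random intercept per practice. A sketch using statsmodels; the data frame is synthetic and the column names are hypothetical, with effect signs chosen to mirror the reported associations:

    # Sketch of the multilevel model in PUBMED:20194220: staff readiness (1-5)
    # regressed on staff-level predictors with a random intercept per practice.
    # The data frame here is synthetic; column names are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 502
    df = pd.DataFrame({
        "practice": rng.integers(0, 58, n),    # 58 practices
        "nonclinical": rng.integers(0, 2, n),  # vs. general practitioner
        "fulltime": rng.integers(0, 2, n),     # vs. part-time
        "job_sat": rng.normal(3.5, 0.6, n),    # job satisfaction, 1-5 scale
    })
    df["readiness"] = (3.2 - 0.3 * df["nonclinical"] + 0.2 * df["fulltime"]
                       - 0.3 * (df["job_sat"] - 3.5) + rng.normal(0, 0.5, n))

    model = smf.mixedlm("readiness ~ nonclinical + fulltime + job_sat",
                        df, groups=df["practice"])
    print(model.fit().summary())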
Adoption of systematic organisational support for evidence-based practice in health care is integral to its use. This study aimed to explore the perceptions of occupational therapy staff regarding the influence of organisational initiatives to support evidence-based practice on workplace culture and clinical practice. Methods: This study used semi-structured interviews with 30 occupational therapists working in a major metropolitan hospital in Brisbane, Australia, regarding their perceptions of organisational initiatives designed to support evidence-based practice. Results: Four themes emerged from the data: (i) firmly embedding a culture valuing research and EBP, (ii) aligning professional identity with the Research and Evidence in Practice model, (iii) experiences of change: pride, confidence and pressure and (iv) making evidence-based changes to clinical practices. Conclusion: Organisational initiatives for evidence-based practice were perceived as influencing the culture of the workplace, therapists' sense of identity as clinicians, and as contributing to changes in clinical practice. It is therefore important to consider organisational factors when attempting to increase the use of evidence in practice. Answer: The study described in abstract PUBMED:20003532 investigated whether having professional experience in a developing country is associated with better clinical and organisational performance in general practice. The study found that general practitioners (GPs) with experience in a developing country referred 9.5% less than their colleagues and performed more surgical procedures. However, the experience in a developing country was not significantly associated with clinical performance or other GP and practice characteristics in the multivariate analysis. Practices with GPs who had at least two years of experience in a developing country were more hygienic, collaborated more with hospitals, and scored better on the organization of the practice. On the other hand, these practices scored lower on service and availability, spent less time on patients in the consultation, and the quality of recording in the electronic medical record was lower. The study concluded that while there were differences in clinical and organisational performance between GPs with and without medical experience in developing countries, it was not possible to attribute these differences directly to the experience itself. The choice for medical experience in a tropical country likely reflects individual differences in professional motivation and personality, suggesting that experience in a developing country may be just as valuable for later performance in general practice as experience at home. Therefore, based on the findings from this study, it can be inferred that while there are interesting differences in performance, the benefits of experience as a doctor in the developing world on clinical and organisational performance in general practice are not definitively established and may be influenced by individual characteristics and motivations.
Instruction: Revisiting the effectiveness of standard antidepressants in bipolar disorder: are monoamine oxidase inhibitors superior? Abstracts: abstract_id: PUBMED:19629023 Revisiting the effectiveness of standard antidepressants in bipolar disorder: are monoamine oxidase inhibitors superior? Objective: The role of antidepressants in treating bipolar disorder is controversial, and the comparative effectiveness of specific drugs is insufficiently studied. We report here a comparison of monoamine oxidase inhibitors (MAOIs) with the serotonin reuptake inhibitor paroxetine (PAROX). Experimental Design: We conducted a retrospective analysis of data from a larger study, using the first antidepressant trial administered either after entry (n = 46) or after a recurrent episode during study participation (n = 6). Twenty-two patients were treated with PAROX and 30 with an MAOI. Durable recovery was determined from Hamilton depression and Young mania scores, based on published criteria. Principal Observations: PAROX treatment led to durable recovery in 27% of patients, a result very similar to the 24% recovery rate found in a recent STEP-BD trial. In contrast, patients treated with an MAOI had a 53% durable recovery rate. Survival analysis showed a significantly faster onset of durable recovery with MAOIs (χ2 = 4.77, p = 0.029). Among subjects who were able to complete an adequate treatment trial of at least four weeks' duration, durable recovery was attained in a significantly greater proportion of those treated with an MAOI (16 of 23, 70%) as compared to PAROX (6 of 18, 33%) (Fisher's exact test, p = 0.023). Conclusions: In these patients with bipolar depression, the antidepressant effectiveness of PAROX was unacceptably low, but rates of recovery with MAOIs were significantly higher. abstract_id: PUBMED:1449122 Abuse of monoamine oxidase inhibitors. Monoamine oxidase inhibitors, like other antidepressants, generally are considered free of risk for abuse. There is, however, some evidence that MAOIs possess dependence and abuse potential for some patients. We will review the available literature and describe three current cases. Recommendations for treatment are discussed briefly. abstract_id: PUBMED:7028797 Monoamine oxidase inhibitors in bipolar endogenous depressives. Clinical lore suggests that monoamine oxidase inhibitors (MAOIs) are not effective in endogenous depression. A review of previous placebo-controlled trials of MAOI in patients with endogenous depression neither refutes nor confirms their utility in this patient group. We present patients in the depressive phase of bipolar illness refractory to treatment with tricyclics who have responded to MAOI. These open trials require confirmation in controlled studies. Bipolar illness may be a heterogeneous disorder. Presence or absence of X-linkage, low platelet MAO, or response to MAOI may indicate different forms of the disorder. abstract_id: PUBMED:23934742 Current place of monoamine oxidase inhibitors in the treatment of depression. This paper reviews the discovery and history of the use of irreversible monoamine oxidase (MAO) inhibitors (MAOIs) such as phenelzine, tranylcypromine and isocarboxazid, as well as the second-generation selective and reversible MAOIs such as the MAO-A inhibitor moclobemide and the MAO-B inhibitor selegiline.
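The completer comparison in PUBMED:19629023 above (durable recovery in 16 of 23 MAOI-treated versus 6 of 18 paroxetine-treated patients, Fisher's exact p = 0.023) can be reproduced directly from the published 2x2 counts:

    # Reproducing the 2x2 comparison from PUBMED:19629023: durable recovery in
    # completers treated with an MAOI (16 of 23) vs paroxetine (6 of 18).
    from scipy.stats import fisher_exact

    table = [[16, 23 - 16],   # MAOI:  recovered, not recovered
             [6, 18 - 6]]     # PAROX: recovered, not recovered

    odds_ratio, p = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")  # abstract reports p = 0.023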
Data for review were identified from a literature search of OvidSP Medline and PsycInfo performed in July 2012, using the subject terms and keywords of 'monoamine oxidase inhibitors', 'major depression', 'depressive disorder' and 'depression (emotion)'. The search was limited to papers published in the English language and from 2007 onward only. Irreversible MAOIs have the potential to treat the most challenging mood disorder patients including those with treatment-resistant depression, atypical depression and bipolar depression. Unfortunately, the use of irreversible MAOIs has been declining sharply due to lack of marketing and the excessive fears of clinicians. Moreover, few clinicians now have any experience, let alone comfort, in prescribing this class of antidepressants. The newer MAOIs are available as another option for the treatment of major depression but have not replaced the irreversible MAOIs for the specific sub-types of depression for which they are now recommended in most consensus guidelines and treatment algorithms. The pharmacology, drug interactions and dietary recommendations associated with the use of MAOIs are reviewed. With the appropriate dietary restrictions and attention to potential drug interactions with serotonin and noradrenaline agents this class of drugs can be used effectively and safely. The MAOIs still represent an important element in our therapeutic armamentarium. Despite recommendations by opinion leaders and consensus guidelines for the use of MAOIs in specific sub-types of depression, the prescription rate of MAOIs is far less than expected and is decreasing. The "bad reputation" and the lack of industry support for this class of agents (especially the irreversible MAOIs) must be overcome in order to continue to provide a potentially useful treatment for a very vulnerable yet substantial sub-population of mood disorder patients. abstract_id: PUBMED:22951241 The use of monoamine oxidase inhibitors in primary care. Although primary care clinicians have developed considerable expertise in managing patients with major depressive disorder, and a range of treatment strategies is currently available, some patients still fail to reach remission. Two strategies have fallen out of common use: treating patients with monoamine oxidase inhibitors (MAOIs) and subgrouping patients by diagnosis when selecting antidepressant treatment. Monoamine oxidase inhibitors became less popular because other treatments were perceived to be safer and easier to use. However, a newer transdermal formulation of an MAOI that limits the need for the dietary restrictions of oral MAOIs may make it worthwhile to consider using this class of medication in patients who have failed several treatment trials. Although adverse events due to patients' diets are less likely with the transdermal MAOI, clinicians should still be alert for drug interactions and observe recommended washout periods. Patients who may benefit from MAOI treatment include those with treatment-resistant depression, atypical depression, anxiety, or anergic bipolar depression and those who have experienced intolerable metabolic or sexual side effects with other medications. abstract_id: PUBMED:372690 Series on pharmacology in practice: 1. Drugs that alter mood. I. Tricyclic agents and monoamine oxidase inhibitors. In the last 20 years, the treatment of mood disorders has advanced immeasurably. We now have relatively safe and effective agents for the treatment of depression and mania. 
This review discusses two types of agents that elevate mood (tricyclic agents and monoamine oxidase inhibitors), including the indications for their use and their modes of action, pharmacokinetics, side effects, and drug interactions. abstract_id: PUBMED:36331516 Effectiveness and safety of monoamine oxidase inhibitor treatment for bipolar depression versus unipolar depression: An exploratory case cohort study. Objective: Patients with bipolar disorder spend most of their clinical lifetime in the depressive phase of their illness. However, antidepressants are discouraged in the treatment of bipolar depression due to concerns over manic induction and drug ineffectiveness. Some reports suggest that monoamine oxidase inhibitors (MAOIs) may be safe and effective compared to other antidepressants in treating bipolar depression. The present study compared the safety and effectiveness of MAOI therapy in patients with bipolar versus unipolar depression. Methods: Data were collected from approximately 2500 clinical research charts of patients treated with MAOI therapy at a university mood disorder clinic between 1983 and 2015. A mixed-effects model was created with patient entered as the random effect. The model included the primary diagnosis (i.e., either unipolar or bipolar depression) and other clinical covariates as fixed-effect predictors. Results: Patients with bipolar depression demonstrated lower post-treatment clinical global impressions/severity scores versus patients with unipolar depression (p = 0.04). Neither group demonstrated a full syndromal manic or hypomanic episode. A higher proportion of patients with bipolar depression reported myoclonic tics and tremors, which may have resulted from concomitant lithium use. Amongst the covariates, only the number of prior antidepressant trials predicted poorer outcomes from MAOI therapy. Conclusion: MAOIs may be more effective, and as safe, for patients with bipolar depression versus unipolar depression. Future studies should explore this possible advantage using a larger sample size. abstract_id: PUBMED:22951237 The role of monoamine oxidase inhibitors in depression treatment guidelines. Monoamine oxidase inhibitors (MAOIs) have proven efficacy for treating depression and for decades have been a preferred treatment for patients with atypical depression, high levels of anxiety, anergic bipolar depression, and treatment-resistant depression. However, MAOIs are infrequently used due to safety and tolerability concerns and the need for dietary restrictions. Current guidelines, which are reviewed here, recommend MAOIs as third-, fourth-, or fifth-line treatments due to these concerns. However, a transdermal formulation of selegiline limits the need for dietary restrictions and has fewer side effects than many more widely used antidepressants. The availability of a safer and more tolerable formulation gives clinicians another option in their armamentarium for treating depression. abstract_id: PUBMED:2056139 CNS stimulant potentiation of monoamine oxidase inhibitors in treatment-refractory depression. We report on our clinical experience with a combination of a CNS stimulant (either pemoline or dextroamphetamine) and a monoamine oxidase inhibitor (MAOI) for treating 32 depressed patients (mainly outpatients) refractory to standard antidepressant pharmacotherapy. This combination, though not approved by the FDA, appears to be safe and effective.
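The analysis in PUBMED:36331516 above (a mixed-effects model with patient as the random effect and diagnosis plus clinical covariates as fixed effects) follows the same pattern as the earlier multilevel sketch. A compact version with synthetic repeated-measures chart data; variable names and effect sizes are hypothetical:

    # Sketch of the model form in PUBMED:36331516: post-treatment CGI severity
    # regressed on diagnosis and prior antidepressant trials, with a random
    # intercept per patient (repeated chart entries). Data are synthetic.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    rows = []
    for patient in range(300):
        bipolar = rng.integers(0, 2)
        n_prior = rng.poisson(2)
        for _ in range(rng.integers(1, 4)):  # 1-3 MAOI trials per chart
            cgi = 3.5 - 0.3 * bipolar + 0.2 * n_prior + rng.normal(0, 0.8)
            rows.append({"patient": patient, "bipolar": bipolar,
                         "n_prior": n_prior, "cgi_s": cgi})
    df = pd.DataFrame(rows)

    fit = smf.mixedlm("cgi_s ~ bipolar + n_prior", df, groups=df["patient"]).fit()
    print(fit.params)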
Twenty-five (78%) of these patients experienced at least 6 months of symptom remission with a stimulant + MAOI combination. Many patients required adjunctive antidepressant treatment, including tricyclics and lithium. Side effects were not excessive, though 6 patients (3 unipolar and 3 bipolar) cycled to mania (N = 1) or hypomania (N = 5). None developed hypertensive crises. With properly motivated and compliant patients and careful clinical monitoring by the prescribing psychiatrist, stimulant potentiation of MAOIs may be a viable option for treatment-resistant depressed patients. abstract_id: PUBMED:7559374 Addition of monoamine oxidase inhibitors to carbamazepine: preliminary evidence of safety and antidepressant efficacy in treatment-resistant depression. Background: Depression is often resistant to treatment with mood stabilizers. The antidepressant effects of carbamazepine can be potentiated by lithium supplementation, but some patients fail to respond to the combination. Although monoamine oxidase inhibitors (MAOIs) appear useful in atypical and bipolar depressions, concerns have been raised regarding safety and pharmacokinetic interactions when they are combined with carbamazepine. Method: Ten inpatients (7 bipolar, 3 unipolar) with refractory DSM-III-R major depression, also resistant to double-blind treatment with carbamazepine, plus lithium augmentation in 8, received double-blind MAOI augmentation (phenelzine in 4, tranylcypromine in 6). Results: All 10 patients tolerated the addition of an MAOI well, and mean self-rated side effect scores did not change significantly. Four of 10 patients improved substantially and became euthymic, allowing discharge from hospital on the carbamazepine +/- lithium plus MAOI combination. These 4 patients improved in spite of prior inadequate responses to the same MAOI without carbamazepine and to carbamazepine without an MAOI. Conclusion: This preliminary evidence suggests that the addition of MAOIs to carbamazepine +/- lithium may be well tolerated, may not affect carbamazepine and lithium pharmacokinetics, and may provide relief of refractory depressive symptoms in some patients. Further studies are needed to establish the safety and efficacy of combining carbamazepine with MAOIs. Answer: The effectiveness of standard antidepressants in bipolar disorder is a subject of ongoing debate, and there is particular interest in the potential superiority of monoamine oxidase inhibitors (MAOIs) over other antidepressants. A retrospective analysis comparing MAOIs with the serotonin reuptake inhibitor paroxetine (PAROX) found that MAOIs led to a significantly higher rate of durable recovery in patients with bipolar depression (53% vs. 27% with PAROX). Additionally, survival analysis showed a faster onset of durable recovery with MAOIs (PUBMED:19629023). Despite the clinical lore suggesting that MAOIs are not effective in endogenous depression, there is evidence that they may be beneficial for patients with bipolar depression who are refractory to treatment with tricyclics (PUBMED:7028797). The current place of MAOIs in the treatment of depression acknowledges their potential to treat challenging mood disorder patients, including those with treatment-resistant depression, atypical depression, and bipolar depression. However, their use has declined due to excessive fears and lack of marketing, and few clinicians have experience prescribing them (PUBMED:23934742).
In primary care, MAOIs have become less popular, but newer formulations, such as transdermal MAOIs, may be considered for patients who have failed multiple treatment trials (PUBMED:22951241). An exploratory case cohort study found that MAOIs may be more effective and as safe for patients with bipolar depression compared to unipolar depression, with no full syndromal manic or hypomanic episodes reported (PUBMED:36331516). Treatment guidelines often recommend MAOIs as third- or later-line treatments due to safety concerns and dietary restrictions, but the transdermal formulation of selegiline may offer a safer and more tolerable option (PUBMED:22951237). In conclusion, evidence suggests that MAOIs may be superior to standard antidepressants like paroxetine in the treatment of bipolar depression, offering higher rates of durable recovery and faster onset of improvement. However, their use is limited by safety concerns, dietary restrictions, and a lack of familiarity among clinicians.
Instruction: Does continuous mucosal partial carbon dioxide pressure measurement predict leakage of intrathoracic esophagogastrostomy? Abstracts: abstract_id: PUBMED:12643373 Does continuous mucosal partial carbon dioxide pressure measurement predict leakage of intrathoracic esophagogastrostomy? A single-center experience. Background: Published experience in feeding patients with... [corrected] Background: Gastroplasty after esophagectomy is associated with relevant morbidity due to anastomotic leakage of the esophagogastrostomy. The aim of this study was to find out whether continuous partial carbon dioxide pressure (pCO2) measurement of the gastric mucosa is an adequate method of monitoring the gastric tube during the postoperative course and of detecting patients with an anastomotic leakage. Methods: Forty-seven patients with esophageal cancer underwent esophagectomy and gastric tube formation with intrathoracic esophagogastrostomy. Postoperatively, mucosal pCO2 of the gastric tube (pCO2i) was measured using continuous tonometry (TONOCAP, Datex Ohmeda). pCO2i was related to the arterial pCO2 (delta pCO2 = pCO2i - pCO2a). Results: A total of 4,338 delta pCO2 measurements were recorded. On average, the pCO2i of each patient was monitored over a period of 92 hours. In 5 patients an anastomotic leakage of the esophagogastrostomy developed. The mean delta pCO2 of this group was 31.7 mm Hg (+/-19.3 SD) and significantly higher (p < 0.0001) than that of patients without anastomotic leakage (20.7 mm Hg +/- 12.8 SD). With a delta pCO2 cut-off point of 56 mm Hg measured for 5 hours, the sensitivity was 0.8, the specificity 0.9, and the positive predictive value 0.5. In patients with anastomotic leakage, the peak delta pCO2 preceded clinical symptoms. False positive delta pCO2 measurements (n = 4) were mainly due to severe pneumonia with long-term ventilation. Conclusions: Mucosal pCO2 measurement of the gastric tube can be used as an early indicator of a complicated postoperative course predicting anastomotic leakage of the esophagogastrostomy. abstract_id: PUBMED:25105442 Continuous carbon dioxide partial pressure monitoring in lung transplant recipients. Background: Living-donor lobar lung transplantation (LDLLT) recipients often have hypercapnia preoperatively, which confers a risk of worsened hypercapnia intraoperatively. We reviewed our experience with continuous transcutaneous carbon dioxide partial pressure (PtcCO2) monitoring in LDLLT to evaluate its accuracy and feasibility. We also assessed preoperative and intraoperative carbon dioxide levels in LDLLT recipients. Material And Methods: Twenty-six LDLLT recipients without pulmonary hypertension underwent preoperative nocturnal and intraoperative PtcCO2 monitoring, determined with a TOSCA-500 monitor. Results: Maximal preoperative nocturnal PtcCO2 (72.7 ± 19.3 mmHg) was significantly correlated with preoperative resting arterial carbon dioxide partial pressure (PaCO2; 55.1 ± 11.6 mmHg, r² = 0.84). PtcCO2 was more strongly correlated with PaCO2 (range, 39-192 mmHg) during LDLLT (r² = 0.93) than was end-tidal carbon dioxide partial pressure (r² = 0.38).
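The diagnostic metrics in PUBMED:12643373 above can be checked from the 2x2 table implied by the abstract: 47 patients, 5 leaks, sensitivity 0.8 (hence 4 true positives and 1 false negative), and 4 reported false positives (hence 38 true negatives). The reconstruction is inferred rather than stated outright in the abstract:

    # Checking the test metrics in PUBMED:12643373 from the implied 2x2 table.
    tp, fn, fp, tn = 4, 1, 4, 38

    sensitivity = tp / (tp + fn)   # 0.80
    specificity = tn / (tn + fp)   # ~0.90
    ppv = tp / (tp + fp)           # 0.50
    npv = tn / (tn + fn)           # ~0.97 (not reported in the abstract)
    print(sensitivity, round(specificity, 2), ppv, round(npv, 2))

The low positive predictive value despite good sensitivity and specificity simply reflects the low prevalence of leakage (5 of 47), a standard property of screening in low-prevalence settings.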
Conclusions: PtcCO2 monitoring in LDLLT recipients is useful as a means for determining intraoperative carbon dioxide levels, which increase dramatically and can be predicted preoperatively and assessed in a timely manner. abstract_id: PUBMED:34511170 Effect of electrode temperature on measurements of transcutaneous carbon dioxide partial pressure and oxygen partial pressure in very low birth weight infants. Objectives: To evaluate the accuracy and safety of measurements of transcutaneous carbon dioxide partial pressure (TcPCO2) and transcutaneous oxygen partial pressure (TcPO2) at electrode temperatures lower than the value used in clinical practice in very low birth weight infants. Methods: A total of 45 very low birth weight infants were enrolled. TcPCO2 and TcPO2 measurements were performed in these infants. Two transcutaneous monitors were placed simultaneously for each subject. One electrode was set and maintained at 42℃, the temperature used in clinical practice for neonates (control group), and the other was successively set at 38℃, 39℃, 40℃, and 41℃ (experimental group). The paired t-test was used to compare the measurement results between the groups. A Pearson correlation analysis was used to analyze the correlation between the measurement results of the experimental group and control group, and between the measurement results of the experimental group and arterial blood gas parameters. Results: There was no significant difference in TcPCO2 between each experimental subgroup (38-41℃) and the control group. TcPCO2 in each experimental subgroup (38-41℃) was strongly positively correlated with TcPCO2 in the control group (r>0.9, P<0.05) and arterial carbon dioxide partial pressure (r>0.8, P<0.05). There were significant differences in TcPO2 between each experimental subgroup (38-41℃) and the control group (P<0.05), but TcPO2 in each experimental subgroup (38-41℃) was positively correlated with TcPO2 in the control group (r=0.493-0.574, P<0.05) and arterial oxygen partial pressure (r=0.324-0.399, P<0.05). No skin injury occurred during transcutaneous measurements at any electrode temperature. Conclusions: Lower electrode temperatures (38-41℃) can be used to measure blood carbon dioxide partial pressure accurately in very low birth weight infants, and thus can replace the standard electrode temperature of 42℃. Transcutaneous measurements at the lower electrode temperatures may be helpful for understanding the changing trend of blood oxygen partial pressure. abstract_id: PUBMED:6793315 Transcutaneous measurements of carbon dioxide partial pressure in sick neonates. The authors measured the partial pressure of carbon dioxide transcutaneously (PtcCO2) in 15 sick newborns and compared the PtcCO2 to the simultaneously measured partial pressure of arterial carbon dioxide (PaCO2). The PtcCO2 values reflected changes in the PaCO2 values. A linear regression on 106 paired PtcCO2 and PaCO2 values produced a correlation coefficient of r = 0.91 with a slope of 1.89 and a y-intercept of -9.4. The authors found the measurement of PtcCO2 to be a clinically useful method of continuously following the trend of PaCO2. abstract_id: PUBMED:34817074 Targeting arterial partial pressure of carbon dioxide in acute respiratory distress syndrome patients using extracorporeal carbon dioxide removal.
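The paired-regression analysis in PUBMED:6793315 above (PtcCO2 = 1.89 x PaCO2 - 9.4, r = 0.91 on 106 pairs) is an ordinary least-squares fit. A sketch with synthetic stand-in pairs generated around the published fit:

    # Sketch of the paired-regression analysis in PUBMED:6793315: fit
    # PtcCO2 = slope * PaCO2 + intercept on paired values and report r.
    # The arrays below are synthetic stand-ins for the 106 published pairs.
    import numpy as np

    rng = np.random.default_rng(2)
    paco2 = rng.uniform(30, 70, 106)                     # mm Hg
    ptcco2 = 1.89 * paco2 - 9.4 + rng.normal(0, 5, 106)  # published fit + noise

    slope, intercept = np.polyfit(paco2, ptcco2, 1)
    r = np.corrcoef(paco2, ptcco2)[0, 1]
    print(f"PtcCO2 = {slope:.2f} * PaCO2 + {intercept:.1f}, r = {r:.2f}")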
Background: A retrospective analysis of SUPERNOVA trial data showed that reductions in tidal volume to ultraprotective levels without significant increases in arterial partial pressure of carbon dioxide (PaCO2) for critically ill, mechanically ventilated patients with acute respiratory distress syndrome (ARDS) depend on the rate of extracorporeal carbon dioxide removal (ECCO2R). Methods: We used a whole-body mathematical model of acid-base balance to quantify the effect of altering carbon dioxide (CO2) removal rates using different ECCO2R devices to achieve target PaCO2 levels in ARDS patients. Specifically, we predicted the effect of using a new, larger surface area PrismaLung+ device instead of the original PrismaLung device on the results from two multicenter clinical studies in critically ill, mechanically ventilated ARDS patients. Results: After calibrating model parameters to the clinical study data using the PrismaLung device, model predictions determined optimal extracorporeal blood flow rates for the PrismaLung+ and mechanical ventilation frequencies to obtain target PaCO2 levels of 45 and 50 mm Hg in mild and moderate ARDS patients treated at a tidal volume of 3.98 ml/kg predicted body weight (PBW). Comparable model predictions showed that reductions in tidal volumes below 6 ml/kg PBW may be difficult for acidotic, highly severe ARDS patients with acute kidney injury and high CO2 production rates using a PrismaLung+ device in-series with a continuous venovenous hemofiltration device. Conclusions: The described model provides guidance on achieving target PaCO2 levels in mechanically ventilated ARDS patients using protective and ultraprotective tidal volumes when increasing CO2 removal rates from ECCO2R devices. abstract_id: PUBMED:19966348 Concordance between side-stream end-tidal carbon dioxide and arterial carbon dioxide partial pressure in respiratory service setting. Objective: To explore the correlation and concordance between end-tidal carbon dioxide and arterial carbon dioxide partial pressure, and to test whether the general consensus holds in routine service environments. Design: A prospective cross-sectional analysis. Setting: Two respiratory service units in Hong Kong. Participants: Two hundred respiratory patients were recruited, in whom 219 sets of observations were recorded. Patients deemed to require arterial blood gas determination also had their end-tidal carbon dioxide partial pressure measured at that time, using two LifeSense LS1-9R capnometers. Main Outcome Measures: The agreement of end-tidal carbon dioxide partial pressure and arterial carbon dioxide partial pressure was studied by correlation coefficients, the mean and standard deviation of their difference, and the Bland-Altman plot. Results: Overall, the correlation was low and not significant (r=0.1185, P=0.0801). The mean of the difference was 7.2 torr (95% confidence interval, 5.5-8.9) and significant (P<0.001). The limits of agreement by Bland-Altman analysis were -18.1 to 32.5 torr, which were too large to be acceptable. In the sub-group on room air, the mean difference was reduced to 2.26 torr, and the correlation between end-tidal carbon dioxide partial pressure and arterial carbon dioxide partial pressure was 0.2194 (P=0.0068); though statistically significant, the extent of correlation was still low. Conclusion: End-tidal carbon dioxide partial pressure did not show significant correlation or concordance with arterial carbon dioxide partial pressure, especially when supplemental oxygen was used.
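PUBMED:34817074 above relies on a whole-body acid-base model that is not reproduced here, but the core intuition (extracorporeal removal lowers the CO2 load the lungs must clear) can be sketched with the standard steady-state relation PaCO2 ~ 0.863 x VCO2 / VA. The patient values below are hypothetical, and treating ECCO2R as a simple subtraction from VCO2 is a deliberate simplification:

    # Simplified steady-state sketch, NOT the whole-body model of PUBMED:34817074.
    # Standard relation: PaCO2 (mm Hg) ~= 0.863 * VCO2 (mL/min) / VA (L/min).
    # ECCO2R is treated here, purely for intuition, as removing part of VCO2.
    def paco2_steady_state(vco2_ml_min, va_l_min, ecco2r_ml_min=0.0):
        effective_vco2 = max(vco2_ml_min - ecco2r_ml_min, 0.0)
        return 0.863 * effective_vco2 / va_l_min

    # Hypothetical patient: VCO2 250 mL/min, alveolar ventilation 3 L/min
    # (low, as expected under ultraprotective tidal volumes).
    for removal in (0, 50, 100):
        print(removal, round(paco2_steady_state(250, 3.0, removal), 1))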
End-tidal carbon dioxide partial pressure currently cannot replace arterial blood gas measurement as a tool for monitoring arterial carbon dioxide partial pressure. Possible reasons for the discrepancy with previous studies include small sample size in previous studies, lack of research facilities in service settings, and publication bias against negative studies. abstract_id: PUBMED:35321653 Cardiorespiratory impact of intrathoracic pressure overshoot during artificial carbon dioxide pneumothorax: a randomized controlled study. Background: The aim of this study is to evaluate cardiovascular and respiratory effects of intrathoracic pressure overshoot (higher than insufflation pressure) in patients who underwent thoracoscopic esophagectomy procedures with carbon dioxide (CO2) pneumothorax. Methods: This prospective research included 200 patients who were scheduled for esophagectomy from August 2016 to July 2020. The patients were randomly divided into the Stryker insufflator (STR) group and the Storz insufflator (STO) group. We recorded the changes of intrathoracic pressure, peak airway pressure, blood pressure, heart rate and central venous pressure (CVP) during artificial pneumothorax. The differences in blood gas analysis, the administration of vasopressors and the recovery time were compared between the two groups. Results: We found that during the artificial pneumothorax, intrathoracic pressure overshoot occurred in both the STR group (8.9 mmHg, 38 times per hour) and the STO group (9.8 mmHg, 32 times per hour). The recorded maximum intrathoracic pressures were up to 58 mmHg in the STR group and 51 mmHg in the STO group. The average duration of intrathoracic pressure overshoot was significantly longer in the STR group (5.3 ± 0.86 s) vs. the STO group (1.2 ± 0.31 s, P < 0.01). During intrathoracic pressure overshoot, a greater reduction in systolic blood pressure (SBP) (5.6 mmHg vs. 1.1 mmHg, P < 0.01), a higher elevation in airway peak pressure (4.8 ± 1.17 cmH2O vs. 0.9 ± 0.41 cmH2O, P < 0.01), and a larger increase in CVP (8.2 ± 2.86 cmH2O vs. 4.9 ± 2.35 cmH2O, P < 0.01) were observed in the STR group than in the STO group. Vasopressors were also applied more frequently in the STR group than in the STO group (68% vs. 43%, P < 0.01). The reduction of SBP caused by thoracic pressure overshoot was significantly correlated with the duration of overshoot (R = 0.76). No obvious correlation was found between the SBP reduction and the maximum pressure overshoot. Conclusions: Intrathoracic pressure overshoot can occur during thoracoscopic surgery with artificial CO2 pneumothorax and may lead to cardiovascular adverse effects which highly depend on the duration of the pressure overshoot. Trial Registration: Clinicaltrials.gov (NCT02330536; December 24, 2014). abstract_id: PUBMED:1214770 Measurement of partial pressure of gases in liquids by mass spectrometry. The experimental setup of mass spectrometric determination of gas contents in liquids has been modified for continuous and discontinuous measurement of partial pressure of gases in liquids. The inlet system consists of a stainless steel capillary with slits covered by a silicone rubber membrane. Several gases can be measured simultaneously under static conditions and in flowing liquids. The measurement and calibration procedure is described. Results of the analysis of the test criteria, reproducibility, detection limit, response time, and depletion are presented.
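Returning to the capnometry comparison in PUBMED:19966348 above: its Bland-Altman quantities are simply the mean and 1.96-standard-deviation band of the paired differences, and the published bias of 7.2 torr with limits of -18.1 to 32.5 torr implies a difference SD of roughly (32.5 - 7.2) / 1.96, about 12.9 torr. A sketch on synthetic pairs built to resemble those values, taking the difference as PaCO2 minus end-tidal CO2 (consistent with the positive bias):

    # Bland-Altman limits of agreement as used in PUBMED:19966348:
    # bias = mean(PaCO2 - EtCO2); limits = bias +/- 1.96 * SD of the differences.
    import numpy as np

    rng = np.random.default_rng(3)
    paco2 = rng.uniform(30, 80, 219)
    etco2 = paco2 - rng.normal(7.2, 12.9, 219)  # differences like the published ones

    diff = paco2 - etco2
    bias = diff.mean()
    sd = diff.std(ddof=1)
    print(f"bias = {bias:.1f} torr, "
          f"limits = {bias - 1.96 * sd:.1f} to {bias + 1.96 * sd:.1f}")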
The difficulties in discontinuous measurement of oxygen in blood are explained by the complex permeation-diffusion process at the membrane and the form of the dissociation curve. With regard to solubility, physically dissolved gases can be determined without problems down to tensions of about 0.01 mm Hg. Continuous measurement of oxygen and carbon dioxide partial pressure in liquids, including blood, is possible with the described system. abstract_id: PUBMED:6423348 Comparison of transcutaneous and alveolar partial pressure of carbon dioxide during carbon dioxide breathing in healthy children. In 18 healthy children three to 13 years of age, the transcutaneous partial pressure of carbon dioxide (PtcCO2) (Radiometer electrode) and the alveolar partial pressure of carbon dioxide (PACO2) (Beckman analyzer) were measured simultaneously during the breathing of room air and 5 percent carbon dioxide. The PtcCO2 electrode was placed on the anterior thorax and heated to 42 degrees C. The PACO2 was calculated from the 4/5 part of the expired carbon dioxide trace. Minute ventilation (VE) was measured in 11 cases. There was a significant correlation between PtcCO2 (in millimeters of mercury) and PACO2 (in millimeters of mercury) while breathing room air (PtcCO2 = 0.82 PACO2 + 19.7; r = 0.55; p less than 0.02) and while breathing 5 percent carbon dioxide (PtcCO2 = 0.77 PACO2 + 22.5; r = 0.61; p less than 0.01); however, the ratio of PtcCO2 over PACO2 was significantly lower while breathing 5 percent carbon dioxide (p less than 0.01) than while breathing room air. When considering the relationship between the increase in VE (delta VE) while breathing 5 percent carbon dioxide and the changes in PACO2 (delta PACO2) or in PtcCO2 (delta PtcCO2), a significant correlation was found only between delta VE and delta PACO2, ie, delta VE = 0.41 delta PACO2 + 0.44 (r = 0.63; p less than 0.01). These results suggest that breathing carbon dioxide modified the factors acting on PtcCO2, possibly by changes in the vasomotor tone of cutaneous blood vessels. These modifications appeared to be variable from subject to subject. Therefore, we conclude that PtcCO2 does not appear to be an accurate quantitative index to assess ventilatory response to carbon dioxide. abstract_id: PUBMED:7764354 Dissolved carbon dioxide measurement and its correlation with operating parameters in fermentation processes. A silicone tubing probe was used to measure dissolved carbon dioxide in two industrial fermentation processes. The two processes differed markedly in their broth viscosities: one had high viscosity (2000-3000 cP) and the other had low viscosity (< 100 cP). The goal was to study changes in dissolved carbon dioxide concentration in fermentation broth when operating parameters, such as total pressure, air flow rate, and agitation rate, were modified. It was shown that in high-viscosity fermentation broths the dissolved carbon dioxide concentration can reach as high as 190% of that calculated by assuming an equilibrium between the air coming out of the fermentor and the fermentation broth. For low-viscosity broth, the highest dissolved carbon dioxide concentration was found to be about 133% of the equilibrium value. On the basis of the material balance over the entire fermentor, the ratio of actual dissolved carbon dioxide partial pressure and carbon dioxide partial pressure in air leaving the fermentor was correlated to the operating parameters of pressure, air flow, and carbon dioxide transfer coefficient.
Since carbon dioxide transfer coefficients are not readily available, we recommend a method of using oxygen transfer coefficients to develop a correlation that can be used on-line to predict changes in dissolved carbon dioxide concentration when operating parameters are changed or when process improvements and scale-up are undertaken. Answer: Yes, continuous mucosal partial carbon dioxide pressure (pCO2) measurement can predict leakage of intrathoracic esophagogastrostomy. A study involving 47 patients with esophageal cancer who underwent esophagectomy and gastric tube formation with intrathoracic esophagogastrostomy found that postoperative mucosal pCO2 of the gastric tube was significantly higher in patients who developed anastomotic leakage compared to those without leakage. The mean delta pCO2 (pCO2i - pCO2a) of the group with anastomotic leakage was 31.7 mm Hg, which was significantly higher than that of patients without leakage (20.7 mm Hg). By using a delta pCO2 cut-off point of 56 mm Hg measured for 5 hours, the sensitivity was 0.8, the specificity 0.9, and the positive predictive value 0.5. Moreover, the peak delta pCO2 preceded clinical symptoms of leakage, suggesting that mucosal pCO2 measurement of the gastric tube can be used as an early indicator of a complicated postoperative course predicting anastomotic leakage of the esophagogastrostomy (PUBMED:12643373).
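As a side note on the agreement statistics quoted in the capnometry abstract above (PUBMED:19966348), the bias and 95% limits of agreement of a Bland-Altman analysis can be computed in a few lines. This is a minimal illustrative sketch, not code from any of the cited studies; the paired readings below are hypothetical.

```python
import numpy as np

def bland_altman(etco2, paco2):
    """Bias and 95% limits of agreement for two paired measurements."""
    diff = np.asarray(paco2, dtype=float) - np.asarray(etco2, dtype=float)
    bias = diff.mean()           # mean difference, e.g. in torr
    sd = diff.std(ddof=1)        # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired readings (torr), for illustration only.
paco2 = [38.0, 45.2, 52.1, 60.4, 41.3]
etco2 = [33.5, 40.0, 44.8, 50.1, 37.9]
bias, (lo, hi) = bland_altman(etco2, paco2)
print(f"bias = {bias:.1f} torr, limits of agreement = ({lo:.1f}, {hi:.1f})")
```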
Instruction: Can nonstandardized bitewing radiographs be used to assess the presence of alveolar bone loss in epidemiologic studies? Abstracts: abstract_id: PUBMED:15239778 Can nonstandardized bitewing radiographs be used to assess the presence of alveolar bone loss in epidemiologic studies? Objective: To compare periodontitis-associated alveolar bone loss assessment by standardized and nonstandardized radiographs in clinical and epidemiologic studies. Methods: Participants included 37 patients aged 21-66 years with prior nonstandardized bitewing radiographs who were scheduled to receive bitewing radiographs as part of their next routine dental care visit. Standardized bitewing radiographs were taken with a Rinn film holder to position the film in the mouth and align the X-rays so that they were at 90 degrees to the film. Before taking the radiograph, the bite was registered in centric relation using a polyether impression material. One registered dental hygienist took and processed all the standardized radiographs. One dentist read all radiographs using a viewing box, magnifying lens, and periodontal probe with Williams markings. Radiographic bone loss was measured to the closest millimeter at mesial and distal sites of the posterior teeth excluding third molars. The examining dentist was blinded to the participant's name, age, and gender, and to whether the radiograph was standardized or nonstandardized. Results: Mean bone loss (+/- SD) was similar in the standardized and nonstandardized groups (1.60 +/- 0.72 mm versus 1.64 +/- 0.85 mm), and the correlation was high (r = 0.95). Periodontitis was defined as present if the participant had at least one site with 3, 4, or 5 mm bone loss, respectively. The Kappa statistics for concordance using these three cutoffs were good and ranged from 0.60 to 0.65. The sensitivity ranged from 72.7 to 80.8% and the specificity from 88.5 to 90.9%. Conclusions: Periodontitis assessed as mean alveolar bone loss or the prevalence of disease based on alveolar bone loss can be accurately and reliably evaluated from nonstandardized radiographs. abstract_id: PUBMED:32022927 Diagnosing periodontal and dental indicators with horizontal and vertical bitewing radiographs. Objective: The aim of this study is to compare information provided by the 2 orientations of bitewing radiographs, horizontal (HBW) and vertical (VBW), taken in a dental school. Methods And Materials: Radiographic records were reviewed at Tufts University School of Dental Medicine (TUSDM) for patients showing posterior bone loss who had both HBW and VBW. 320 records were reviewed with 6 criteria: visibility of crestal bone from the distal of the cuspids to the distal of the most posterior tooth, visibility of horizontal or angular bone loss, the crestal density of bone, visibility of interproximal contact areas, visibility of the entire anatomical crown, and visibility of furcations. Results: A significantly higher number of VBW than HBW (P < 0.0001) showed the level of alveolar bone loss (52.81% vs. 3.75%), the type of loss (angular or horizontal) (50.94% vs. 3.75%), the crestal bone density (28.75% vs. 0.63%), the contact areas (20.63% vs. 14.38%), and the furcations (43.44% vs. 1.25%). A greater number of HBW showed the entire anatomical crown compared with VBW. No significant difference was detected in the number of radiographs taken per HBW and VBW set.
Conclusion: For patients with alveolar bone loss, VBW are superior to HBW when assessing bone levels, density, morphology, and tooth furcations, and when evaluating interproximal contact areas for caries. It is recommended that the vertical bitewing technique be taught as a standard in dental, dental hygiene, and dental assisting schools for adult patients showing evidence of posterior interdental bone loss. abstract_id: PUBMED:8951623 Testing the consistency of measurements of the distance between the cemento-enamel junction and the alveolar bone crest on bitewing radiographs. Unlabelled: The purpose of the research described in this paper is to test the consistency of measurements of the distance between the cemento-enamel junction and the alveolar bone crest (CJ-AC) on bitewing radiographs. The present paper shows how the consistency tests were organized and which statistical analyses were used. Since the actual characteristic (CJ-AC) could not alter, the variations of the results had to be due to one or more of the 5 m-factors (man, material, machine, method and manipulation). To test the consistency of the measurements of CJ-AC, a pretest and an en route test were performed. Both the bitewing radiographs for the pretest and the en route test were sampled at random from the principal material. The principal material consisted of 856 sets of bitewing radiographs from persons born 1930-1949. The pretest consisted of 40 sets of bitewing radiographs, which were read twice by 2 independent readers. The en route test consisted of 85 sets of bitewing radiographs, which were read twice by 1 reader. Neither the results of the pretest nor the results of the en route test showed statistical or clinical differences of significance. In conclusion, the consistency of this method of measuring is satisfactory for the 2 readers. abstract_id: PUBMED:25368810 Evaluation of alveolar crest bone loss via premolar bitewing radiographs: presentation of a new method. Purpose: This study aimed to assess the interdental bone level in premolar bitewing radiographs while retracting the cheeks. Methods: Seventy-two horizontal bone defects were created on dried mandibles and maxillae. The distance from the bone level to the cement-enamel junction of premolars was detected by a modified digital caliper (considered the gold standard). The reliability of all radiographs was assessed by intraclass correlation coefficient (ICC), and the validity was compared to the gold standard using the analysis of variance test. P-values less than 0.05 were considered statistically significant. Results: This study showed that the reliability of radiographs without a cheek simulator and with a 0.16-second exposure time was significantly higher than that of the two other groups (ICC=0.96 compared to 0.93 and 0.88, respectively). The results from the radiographs without a cheek simulator and with a 0.16-second exposure time were more similar to the gold standard measures than those of the two other groups, although the difference was not statistically significant. Conclusions: Retracting the buccal soft tissue plays an important role in increasing the accuracy of radiographs in detecting the interdental alveolar bone level and produces more accurate results than increasing the exposure time, although it does not have a significant role in the reliability of results.
abstract_id: PUBMED:37021254 A comparison of the horizontal and vertical bitewing images in detecting approximal caries and interdental bone loss in posterior teeth: A diagnostic accuracy randomized crossover clinical trial. Background/purpose: Horizontal bitewing radiographs are widely and frequently used in dentistry and are very reliable in diagnosing proximal caries and interproximal alveolar bone level. However, detecting interproximal root caries, horizontal and/or vertical alveolar bone loss, and furcation involvement with this technique is challenging. The aim of this article was to assess the accuracy of vertical bitewing images in the diagnosis of caries and alveolar bone level compared to the horizontal bitewing technique. Materials And Methods: Each of the 20 patients had eight bitewing radiographs taken to obtain four horizontal bitewing (control) and four vertical bitewing (experimental) images for the same posterior area; a steel wire (3 mm) was used on the sensor plate to help measure the magnification later on. The radiographs were processed digitally and were evaluated for caries by two expert restorative specialists and for bone loss by two experienced periodontists. They were also compared to the "gold standard," which used both clinical and radiographic examination for diagnosis. The evaluators were blinded to each other during image evaluation. Results: Of the 20-patient sample, 70% were male and 30% were female, with a mean age of 29.9 years. The average number of radiographs taken to achieve four standard bitewing radiographs was 5.9 ± 1.7 for vertical bitewings and 5.3 ± 1.3 for horizontal bitewing radiographs. The measurements from the cementoenamel junction (CEJ) to the level of crestal bone did not show a significant difference between the horizontal and vertical bitewing radiographs. The detection of the furcation area in the molar teeth was much higher in the vertical bitewing (100%) compared to the horizontal bitewing (57.5%) (P < 0.0001). Conclusion: The vertical bitewing radiograph outperformed the horizontal bitewing radiograph in the detection of furcation involvement, caries, and alveolar bone loss. Therefore, it is highly recommended to use vertical bitewings for caries assessment and in patients with periodontal disease rather than the conventional horizontal bitewing. abstract_id: PUBMED:8276985 Inter-relationships between bone mineral content measures. Dual energy radiography (DER) and bitewing radiographs (BWX). In vitro periodontal alveolar bone mineral content (BMC) measurements obtained with dual-energy radiography (DER) were compared with assessments based on bitewing radiographs (BWX). In addition, in patients, the relationship between bitewing and several postcranial dual-energy-radiographic measures was evaluated. Dual-energy-radiographic and bitewing measurements were made on 2 cadaver mandibles initially and after 2 incremental bone reductions at 4 sites. Rank-order correlations between dual-energy-radiographic and bitewing measures for the 4 sites ranged from 0.7 to 1.00. Bitewing measures indicated true bone loss with a sensitivity of 1.00. For patients, correlations between bitewing measures and dual-energy-radiographic scans suggested the strongest relationships were in the distal sections of the radius and ulna and in the intertrochanteric and Ward's areas of the femur. Correlations, in the 0.5-0.6 range, were not statistically significant (p > 0.05), but were quite robust considering the small sample size and preliminary nature of this investigation.
Results suggest that the bitewing measure is sufficiently sensitive to detect clinically meaningful (5% or greater) changes in alveolar BMC and, further, that alveolar bone mineral content may reflect postcranial BMC. If postcranial bone mineral changes are reflected in alveolar bone, this would enhance both our understanding and treatment of alveolar bone loss. The use of bitewing measures to facilitate identification of patients with postcranial bone loss is discussed. abstract_id: PUBMED:16687909 Comparison between Scanora panoramic radiography and bitewing radiography in the assessment of marginal bone tissue. Objectives: To compare panoramic radiographs made with the Scanora dental program and bitewing radiographs in the assessment of marginal bone tissue. Materials And Methods: Panoramic and posterior bitewing radiographs were made for 96 consecutive patients. Six observers assessed marginal bone level, and five observers identified the presence or absence of vertical bone defects and furcation involvement. Observer agreement was calculated. Results: Assessments of the marginal bone level with the two methods were identical for 57% of the sites. If a difference of one score was allowed, assessments for 95% of the sites were in agreement. Vertical bone defects and furcation involvement were detected with an agreement of 93%. Ranges of kappa indices for intra-observer agreement on assessments of the marginal bone level were 0.37-0.46 in panoramic radiography and 0.31-0.55 in bitewing radiography. Ranges for agreement on detection of vertical bone defects were 0.52-0.63 and 0.47-0.56, and on detection of furcation involvement 0.64-0.79 and 0.66-0.77, respectively. The kappa index for inter-observer agreement on marginal bone level was 0.28 for panoramic and 0.29 for bitewing radiography. Corresponding figures for detection of vertical bone defects were 0.38 and 0.35, and for detection of furcation involvement 0.56 and 0.67. Conclusions: For those sites or teeth that are possible to assess, the diagnostic information available with Scanora panoramic radiography is comparable to that with bitewing radiography for marginal bone tissue. Therefore, Scanora dental panoramic radiography is a valuable diagnostic alternative in the primary examination of the periodontal status. abstract_id: PUBMED:8617391 Reproducibility of repeat bitewing radiographs determined by measurement of the distance between the amelocemental junction and the alveolar crest: an ex vivo study using human skulls. Objective: To assess the reproducibility of repeat bitewing radiographs. Methods: Five right and left pairs of posterior bitewing radiographs were taken without the use of beam-aiming or film-holding devices in five dry skulls with complete dentitions. The amelocemental junction (ACJ) and alveolar crest (AC) of every interdental site were traced on projected radiographs and the linear distance between the ACJ and AC was determined. Results: There was close concordance between the ability to read the ACJ on the approximal tooth surface and the AC. The overall readability of interdental sites was 71%, although sites at the extremities of radiographs were often unreadable. Kappa analysis of the readability of alveolar bone height from repeat radiographs was 0.52. There were significant differences in the reproducibility of ACJ and AC measurements from repeated radiographs (p<0.001).
Intra-examiner variability was significantly less (p<0.001) than that due to repeat radiography, with no significant differences between sites. The standard deviation for all surfaces from repeat readings of the same radiographs was 0.12 mm compared with 0.51 mm from repeat radiography. Conclusions: In this study, a freehand technique produced a degree of imprecision in readings of alveolar bone height from successive films of the same site. In clinical terms, a difference of more than 1.4 mm between pairs of serial measurements from the same site would have to occur in order to be certain that the difference was not measurement error but real bone loss. abstract_id: PUBMED:1960240 The effect of alignment errors on bitewing-based bone loss measurements. Alveolar bone loss measurements made using digitized dental radiographs have been shown to be precise and accurate. We determined the influence of alignment errors introduced when exposing the radiographs on the precision and accuracy of bone loss measurements. Bitewing radiographs of 15 dry skulls were obtained using a positioning device (modified after the methods of Duckworth and associates). The sequence of radiographs was taken while the alignment of the main Roentgen beam was systematically varied within a 10 degree cone. Angular displacements were calculated from the digitized radiographs. Space variant digital image enhancement was performed to improve visualization of the cementoenamel junctions and alveolar crests. Corresponding bone loss measurements were made on dry skulls and radiographs. The criterion for truth in these measurements was established by 2 expert observers who measured the skulls with periodontal probes. Radiographic measurements were statistically compared with the criterion using validity coefficients. We found that beam positioning errors of up to 10 degrees do not substantively affect alveolar bone loss measurements of mandibular molars taken from enhanced digital images of bitewing radiographs. abstract_id: PUBMED:7769522 An assessment of approximal bone height in the posterior segments of 15-16-year-old children using bitewing radiographs. The alveolar bone height in a total of 713 15-16-year-old children consisting of 364 males and 349 females was assessed using posterior bitewing radiographs. The distance between the alveolar crest (AC) and amelo-cemental junction (ACJ) on all molar and premolar teeth was determined from magnified images of the radiographs. Analyses of variance models were fitted in order to assess the dependence of the overall ACJ to AC distance on the inter-subject variables of gender and social class and the intrasubject variables of site, missing adjacent tooth, condition of the approximal tooth surface, and for mesial surfaces, gingivitis and pocket depth. The overall geometric mean for bone height for the whole population was 0.542 mm. The ACJ to AC distance for the vast majority of sites was below 2 mm. Site had a highly significant effect on bone height (P < 0.001): ACJ to AC distances were greater in the maxilla compared to the mandible (P < 0.001), greater for distal sites compared to mesial (P < 0.001) and significantly affected by tooth type (P < 0.001). Sites with missing adjacent teeth also had significantly increased distances (P < 0.001). The condition of the approximal tooth surface and the presence of pocketing or gingivitis did not have a significant association with bone height.
The geometric mean AC to ACJ distance, adjusted for subject, site and social class was 0.577 mm for males and 0.506 mm for females (P < 0.001), but the main effect of social class was not significant. Answer: Yes, nonstandardized bitewing radiographs can be used to assess the presence of alveolar bone loss in epidemiologic studies. A study comparing periodontitis-associated alveolar bone loss assessment by standardized and nonstandardized radiographs found that mean bone loss was similar in both groups, and the correlation was high (r = 0.95). The sensitivity and specificity of detecting periodontitis using nonstandardized radiographs were also high, with Kappa statistics for concordance ranging from 0.60 to 0.65. These findings suggest that periodontitis assessed as mean alveolar bone loss or the prevalence of disease based on alveolar bone loss can be accurately and reliably evaluated from nonstandardized radiographs (PUBMED:15239778).
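As an aside on the statistics cited in this answer, the sensitivity, specificity, and Cohen's kappa reported in PUBMED:15239778 are all derived from a 2x2 table of paired binary calls. Below is a minimal, self-contained sketch of that computation; the function and the sample counts are illustrative assumptions, not data from the study.

```python
import numpy as np

def agreement_stats(reference, test):
    """Sensitivity, specificity, and Cohen's kappa for paired 0/1 calls."""
    ref = np.asarray(reference, dtype=bool)
    tst = np.asarray(test, dtype=bool)
    tp = int(np.sum(ref & tst))
    tn = int(np.sum(~ref & ~tst))
    fp = int(np.sum(~ref & tst))
    fn = int(np.sum(ref & ~tst))
    n = tp + tn + fp + fn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    return sens, spec, (po - pe) / (1 - pe)

# Hypothetical calls: standardized (reference) vs. nonstandardized (test).
ref = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]
tst = [1, 1, 0, 0, 0, 1, 0, 1, 0, 1]
print(agreement_stats(ref, tst))  # -> (0.8, 0.8, 0.6)
```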
Instruction: Do Tele-Operators Learn to Better Judge Whether a Robot Can Pass Through an Aperture? Abstracts: abstract_id: PUBMED:26721291 Do Tele-Operators Learn to Better Judge Whether a Robot Can Pass Through an Aperture? Objective: This experiment examined whether tele-operators learn to better judge a robot's ability to pass through an aperture, hereafter referred to as pass-ability judgments, and detailed the nature of such learning. Background: Jones, Johnson, and Schmidlin reported that tele-operators' pass-ability judgments did not improve over the course of their experiment, which was surprising. Method: In each of seven blocks, tele-operators made pass-ability judgments about 10 apertures whose width varied. During each trial, participants drove the robot toward the aperture, answered yes or no to whether it could pass through that aperture, and then attempted to drive the robot through the aperture. Pass-ability judgments were analyzed in terms of percentage correct and absolute thresholds; the latter mimicked how Jones et al. analyzed their data. Results: Learning was revealed when judgments were analyzed in terms of percentage correct and not when analyzed in terms of absolute thresholds. Further analyses revealed that tele-operators only improved their pass-ability judgments for impassable apertures, and tele-operators' perceptual sensitivity and response bias changed over the course of the experiment. Conclusion: The percentage correct-based analyses revealed that tele-operators learned to make better pass-ability judgments. Jones et al.'s decision to analyze their data in terms of absolute thresholds obscured learning. Application: The present results suggested that researchers should employ percentage correct when studying learning in this domain, training protocols should focus on improving tele-operators' abilities to judge the pass-ability of impassable apertures, and tele-operators truly learned to better discriminate passable and impassable apertures. abstract_id: PUBMED:35632356 Implementation of Omni-D Tele-Presence Robot Using Kalman Filter and Tricon Ultrasonic Sensors. The tele-presence robot is designed to offer an economical solution to facilitate day-to-day normal activities in almost every field. There are several ways to build tele-presence robots, e.g., Skype and TeamViewer, but relying on Skype and extra hardware is often impractical. Therefore, in this article, we have presented a robust implementation of the tele-presence robot. Our proposed omnidirectional tele-presence robot consists of (i) Tricon ultrasonic sensors, (ii) Kalman filter implementation and control, and (iii) integration of our developed WebRTC-based application with the omnidirectional tele-presence robot for video transmission. We present a new algorithm to counter the sensor noise with the least number of sensors for the Kalman filter estimation. We have simulated the complete model of the robot in Simulink and MATLAB on difficult paths with critical obstacles. The robot successfully avoids collisions and reaches its destination. The mean errors for the estimation of position and velocity are 5.77% and 2.04%. To achieve efficient and reliable video transmission, the quality factors such as resolution, encoding, average delay and throughput are addressed using WebRTC together with the integrated communication protocols. To protect the data transmission, we have implemented the SSL protocol and installed it on the server.
We tested three different video resolutions (i.e., 320×280, 820×460 and 900×590) to evaluate the performance of the video transmission. For the highest resolution (900×590 pixels), our tele-presence robot takes 3.5 ms for encoding, and the average delay is 2.70 ms. abstract_id: PUBMED:27588254 Performance impact of mutation operators of a subpopulation-based genetic algorithm for multi-robot task allocation problems. Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems; it is a constrained combinatorial optimization problem that becomes more complex in the case of cooperative tasks, because they introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems. abstract_id: PUBMED:37509946 An Incremental Broad-Learning-System-Based Approach for Tremor Attenuation for Robot Tele-Operation. The existence of the physiological tremor of the human hand significantly affects the application of tele-operation systems in performing high-precision tasks, such as tele-surgery, and currently, the process of effectively eliminating the physiological tremor has been an important yet challenging research topic in the tele-operation robot field. Some scholars propose using deep learning algorithms to solve this problem, but a large number of hyperparameters lead to a slow training speed. Later, support-vector-machine-based methods were applied to solve the problem, thereby effectively canceling tremors. However, these methods may lose prediction accuracy because learning energy cannot be accurately assigned. Therefore, in this paper, we propose a broad-learning-system-based tremor filter, which integrates a series of incremental learning algorithms to achieve fast remodeling and reach the desired performance. Note that the broad-learning-system-based filter has a fast learning rate while ensuring accuracy due to its simple and novel network structure. Unlike other algorithms, it uses incremental learning algorithms to constantly update network parameters during training, and it stops learning when the error converges to zero. By focusing on the control performance of the slave robot, a sliding mode control approach has been used to improve the performance of closed-loop systems. In simulation experiments, the results demonstrated the feasibility of our proposed method.
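The four mutation operators compared in the task-allocation abstract above (PUBMED:27588254), namely swap, insertion, inversion, and displacement, are standard permutation mutations. The sketch below illustrates the generic operators on a task sequence; it is not the paper's implementation, and the function names are our own.

```python
import random

def swap(seq):
    """Swap mutation: exchange two randomly chosen task positions."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq):
    """Insertion mutation: remove one task and reinsert it elsewhere."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s.insert(j, s.pop(i))
    return s

def inversion(seq):
    """Inversion mutation: reverse the order of a random sub-sequence."""
    s = seq[:]
    i, j = sorted(random.sample(range(len(s)), 2))
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

def displacement(seq):
    """Displacement mutation: move a random sub-sequence to a new spot."""
    s = seq[:]
    i, j = sorted(random.sample(range(len(s)), 2))
    block = s[i:j + 1]
    del s[i:j + 1]
    k = random.randrange(len(s) + 1)
    return s[:k] + block + s[k:]

# Example: mutate one robot's task sequence.
tasks = list(range(8))
print(swap(tasks), inversion(tasks))
```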
abstract_id: PUBMED:35689526 Application of a Tele-Ultrasound Robot During COVID-19 Pandemic: A Feasibility Study. Objective: To investigate the accuracy of ultrasonic diagnosis using the tele-ultrasound robot in Leishen Shan Hospital. Method: Twenty-two patients with novel coronavirus pneumonia from Leishen Shan Hospital voluntarily participated in this study. Their thyroids, neck vessels, hepatobiliary systems and kidneys were scanned by both a tele-ultrasound robot manufactured by Imabot Co., Ltd, Wuhan and the conventional method. The ultrasound diagnosis of each patient was compared, and the ultrasound images obtained by the two methods were mixed together and double-blindly diagnosed by an experienced ultrasound radiologist. Results: There were 44 positive lesions in 110 sites of 22 patients. Of the two methods, the robotic method detected 40 positive lesions, with 4 lesions missed (2 small polyps of the gallbladder, 1 small hemangioma of the liver and 1 small cyst of the kidney) and 1 lesion misdiagnosed (a normal carotid artery was misdiagnosed as carotid atherosclerotic plaque); the conventional method detected 44 positive lesions, with 1 small cyst of the liver missed. There was no statistically significant difference in the accuracy rate between the robotic method and the conventional method using the chi-square test of the four-grid data (P>.05). Conclusion: The application of the tele-ultrasound robot meets the standard of patient care during the pandemic. The method is feasible to provide adequate ultrasound information to diagnose common abdominal, vascular, and superficial organ pathologies in patients with COVID-19 with acceptable accuracy compared with a conventional ultrasound scan. abstract_id: PUBMED:26925010 Iconic Gestures for Robot Avatars, Recognition and Integration with Speech. Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot-performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within-participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances. abstract_id: PUBMED:30759866 Three-Dimensional Visualization System with Spatial Information for Navigation of Tele-Operated Robots. This study describes a three-dimensional visualization system with spatial information for the effective control of a tele-operated robot. The environmental visualization system for operating the robot is very important. The tele-operated robot performs tasks in a disaster area that is not accessible to humans.
The visualization system should perform in real-time to cope with rapidly changing situations. The visualization system should also provide accurate and high-level information so that the tele-operator can make the right decisions. The proposed system consists of four fisheye cameras and a 360° laser scanner. When the robot moves into an unknown space, a spatial model is created using the spatial information data of the laser scanner, and a single stitched image is created using the four camera images and mapped in real-time. The visualized image contains the surrounding spatial information; hence, the tele-operator can not only grasp the surrounding space easily, but also know the relative position of the robot in space. In addition, it provides various angles of view without moving the robot or sensor, thereby coping with various situations. The experimental results show that the proposed method has a more natural appearance than the conventional methods. abstract_id: PUBMED:38238690 Percutaneous nephrolithotomy guided by 5G-powered robot-assisted teleultrasound diagnosis system: first clinical experience with a novel tele-assistance approach (IDEAL stage 1). Background: To demonstrate the technical feasibility of percutaneous nephrolithotomy (PCNL) guided by a 5G-powered robot-assisted teleultrasound diagnosis system (RTDS) in a complex kidney-stone (CKS) cohort and present our preliminary outcomes. PCNL is highly skill-dependent, which hinders its popularization in primary medical units in remote regions. We designed an innovative tele-assistance approach to make PCNL easier for inexperienced surgeons to perform. Methods: This was a prospective proof-of-concept study (IDEAL phase 1) on intraoperative tele-assistance provided by online urological experts via a 5G-powered RTDS. A total of 15 CKS patients accepted this technology. Online experts manipulated a simulated probe to assist unskilled local operators by driving a patient-side robot-probe to guide and monitor the steps of access establishment and finding residual stones. Results: Median total delay was 177 ms despite a one-way network-connecting distance > 5,800 km. No perceptible delay in audio-visual communication, robot-arm driving, or dynamic ultrasound images was reported. Successful tele-assistance was obtained in all cases. The first-puncture access-success rate was 78.6%, with a one-session stone-free (SF) rate of 71.3% and without complications of grade III-V. Conclusions: The current technology based on 5G-powered RTDS can provide high-quality intraoperative tele-assistance, which has preliminarily shown satisfactory outcomes and reliable safety. It may break down the personal-competence barrier and allow PCNL to be used more widely. Trial Registration: The study was approved by the ethics committee of the Xinjiang Kezhou People's Hospital and the ethics committee of the First Affiliated Hospital of Nanjing Medical University and was registered on http://www.chictr.org.cn (ChiCTR2200065849, 16/11/2022). abstract_id: PUBMED:19899364 Perception of robot passability with direct line of sight and teleoperation. Objective: To examine participants' abilities to judge the passability of robots through apertures in direct-line-of-sight (DLS) and teleoperation (TO) conditions, two experiments were conducted. Background: Past work has demonstrated that operators find it difficult to perceive aspects of remote environments during TO.
For example, urban search-and-rescue operators have experienced difficulty judging whether a robot could pass through openings or over obstacles. Although previous research has discussed perceptual difficulties in TO, the differences between DLS and TO have not been quantified. Method: In the first experiment, participants judged the smallest passable aperture widths for three robot sizes for both DLS and TO conditions. In the second experiment, aperture widths were judged for three camera heights and two robot distances during TO. Results: In the DLS condition, participants produced similar judgments for the three robot sizes using dimensionless measurements. In the TO condition, participants' judgments were more variable and they judged smaller apertures as passable. Conclusion: Overall, participants judged apertures that were too small for the robot to pass as passable. This tendency was more pronounced in four instances: as robot size increased, during TO, when the camera was at its lowest height, and as distance between the robot and the aperture increased. Application: Judgments of passability help to quantify differences in perception between DLS and TO. These results will be useful in the design of training regimes for TO tasks. Increasing operators' understanding of performance differences under varying conditions will lead them to be more accurate when making critical decisions in remote environments. abstract_id: PUBMED:27136552 Assistant Personal Robot (APR): Conception and Application of a Tele-Operated Assisted Living Robot. This paper presents the technical description, mechanical design, electronic components, software implementation and possible applications of a tele-operated mobile robot designed as an assisted living tool. This robotic concept has been named Assistant Personal Robot (or APR for short) and has been designed as a remotely telecontrolled robotic platform built to provide social and assistive services to elderly people and those with impaired mobility. The APR features a fast high-mobility motion system adapted for tele-operation in plain indoor areas, which incorporates a high-priority collision avoidance procedure. This paper presents the mechanical architecture, electrical fundamentals and software implementation required in order to develop the main functionalities of an assistive robot. The APR uses a tablet in order to implement the basic peer-to-peer videoconference and tele-operation control combined with a tactile graphic user interface. The paper also presents the development of some applications proposed in the framework of an assisted living robot. Answer: Yes, tele-operators do learn to better judge whether a robot can pass through an aperture. An experiment was conducted to examine this learning process, and it was found that tele-operators improved their pass-ability judgments over time, particularly for impassable apertures. The learning was revealed when judgments were analyzed in terms of percentage correct, rather than absolute thresholds. The study suggested that tele-operators' perceptual sensitivity and response bias changed over the course of the experiment, indicating a true learning effect in discriminating between passable and impassable apertures (PUBMED:26721291).
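A side note on the measures named in PUBMED:26721291: perceptual sensitivity and response bias for yes/no pass-ability judgments are commonly computed with signal detection theory. The sketch below shows one standard way to do this, treating impassable-aperture trials as signal trials; the log-linear correction and the counts are illustrative assumptions, not values from the study.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from yes/no judgment counts."""
    z = NormalDist().inv_cdf
    # Log-linear correction keeps rates away from 0 and 1.
    hr = (hits + 0.5) / (hits + misses + 1)                        # hit rate
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hr) - z(far)             # perceptual sensitivity
    criterion = -0.5 * (z(hr) + z(far))  # response bias
    return d_prime, criterion

# Hypothetical counts from one block of 10 aperture judgments.
print(sdt_measures(hits=4, misses=1, false_alarms=2, correct_rejections=3))
```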
Instruction: Does cognitive avoidance mediate the relation of anxiety and binge eating? Abstracts: abstract_id: PUBMED:27100227 Does cognitive avoidance mediate the relation of anxiety and binge eating? Purpose: Theory suggests that binge eating symptoms may develop in an attempt to avoid distressing states that arise in the context of negative affect. In light of its theoretical significance, including the "escape from awareness" model of binge eating, surprisingly few empirical evaluations have examined the empirical evidence for this variable in relation to anxiety and binge eating symptoms. In addition, although it is understood that anxiety is more prevalent among women than men, empirical investigations of gender differences in cognitive avoidance in binge eating are thus far absent from the published literature. Methods: Participants (N = 436) were recruited from diverse geographic regions across the United States to take part in an online study. Cognitive avoidance, anxiety, and binge eating measures were collected. Results: Cognitive avoidance partially mediated the relation between anxiety and binge eating in the full sample; however, results differed across genders. Specifically, cognitive avoidance was a mediator for women, but not for men. Conclusions: Findings support the "escape from awareness" model of binge eating among women, and suggest that targeting cognitive avoidance in binge eating treatment may be a promising clinical avenue. Future research may benefit from exploring the broader construct of experiential avoidance to determine if the gender differences in cognitive avoidance observed in this study are indicative of a larger pattern of avoidance behavior, and if factors other than cognitive avoidance may have greater relevance for men. abstract_id: PUBMED:31278657 Associations Between Approach and Avoidance Coping, Psychological Distress, and Disordered Eating Among Candidates for Bariatric Surgery. Background: Individuals seeking bariatric surgery evidence risk for binge and disordered eating behaviors, which can lead to poorer post-surgical weight loss outcomes. Use of avoidant coping strategies to manage stress, along with symptoms of depression, are associated with disordered eating in the general population. However, the role of coping has not been examined among candidates for bariatric surgery, and coping and depression have rarely been considered in combination. Given the emerging standard that psychologists are involved in evaluations and treatment before and after surgery, consideration of these variables is clinically relevant. Methods: Participants were 399 patients undergoing pre-surgical bariatric psychological assessment. Hierarchical linear regression analyses tested whether gender, age, and BMI; approach and avoidance coping; and depression and anxiety were associated with disordered eating (binge eating, restraint, eating concerns, shape concerns, weight concerns) in a cross-sectional study design. Results: In initial steps of the model controlling demographic variables, approach coping predicted less and avoidance coping predicted more disordered eating across most outcomes examined. In models including depression and anxiety, avoidance (but not approach) coping remained a relevant predictor. The effects of depression were also quite robust, such that participants who were more depressed reported more disordered eating. More anxious participants reported more restrained eating. 
Conclusions: Avoidance coping and depressive symptoms emerged as key variables in understanding recent disordered eating among patients considering bariatric surgery. Pre-surgical psychological evaluations and treatment approaches could be enhanced with consideration of patient coping strategies, particularly avoidant coping responses to stress, independent of psychological distress. abstract_id: PUBMED:26175710 Emotion regulation in disordered eating: Psychometric properties of the Difficulties in Emotion Regulation Scale among Spanish adults and its interrelations with personality and clinical severity. Objective: The aims of the study were to (1) validate the Difficulties in Emotion Regulation Scale (DERS) in a sample of Spanish adults with and without eating disorders, and (2) explore the role of emotion regulation difficulties in eating disorders (ED), including their mediating role in the relation between key personality traits and ED severity. Methods: One hundred and thirty-four patients (121 female, mean age = 29 years) with anorexia nervosa (n = 30), bulimia nervosa (n = 54), binge eating (n = 20), or Other Specified Feeding or Eating Disorders (n = 30) and 74 healthy control participants (51 female, mean age = 21 years) reported on general psychopathology, ED severity, personality traits and difficulties in emotion regulation. Exploratory and confirmatory factor analyses were conducted to examine the psychometrics of the DERS in this Spanish sample (Aim 1). Additionally, to examine the role of emotion regulation difficulties in ED (Aim 2), differences in emotion regulation difficulties across eating disorder subgroups were examined and structural equation modeling was used to explore the interrelations among emotion regulation, personality traits, and eating disorder severity. Results: Results support the validity and reliability of the DERS within this Spanish adult sample and suggest that this measure has a similar factor structure in this sample as in the original sample. Moreover, emotion regulation difficulties were found to differ as a function of eating disorder subtype and to mediate the relation between two specific personality traits (i.e., high harm avoidance and low self-directedness) and ED severity. Conclusions: Personality traits of high harm avoidance and low self-directedness may increase vulnerability to ED pathology indirectly, through emotion regulation difficulties. abstract_id: PUBMED:36370244 Examining the role of experiential avoidance and valued action in the negative effects of weight self-stigma. Harmful effects of weight self-stigma on quality of life and health behaviors have been well-established. However, the processes that lead to these negative outcomes are less understood. Psychological inflexibility is defined as a pattern of rigid psychological reactions dominating over values and meaningful actions. A lack in valued action is characterized by the absence of activities that are connected to what is personally meaningful. In this secondary analysis, we aim to extend research by examining two subprocesses of psychological inflexibility, experiential avoidance and lack of valued action, as statistical mediators of the relations between weight self-stigma and quality of life/health behavior outcomes. Baseline data from a clinical trial comparing weight loss maintenance interventions in a sample of 194 adults living with overweight or obesity and seeking treatment are analyzed.
Results show that greater experiential avoidance and lower valued action were significantly related to lower quality of life and satisfaction with social roles, as well as greater depression, anxiety, and binge eating. Further, results from a parallel mediation analysis indicated that weight self-stigma is indirectly related to anxiety, disinhibited eating, and hunger through the relationship with experiential avoidance and lack of valued action. abstract_id: PUBMED:11994835 The effect of cognitive-behavioral therapy for bulimia nervosa on temperament and character as measured by the temperament and character inventory. Cognitive-behavioral therapy (CBT) is a successful treatment method for reducing symptomatology associated with bulimia nervosa (BN). Less is known regarding the impact of CBT on measures of temperament and character across treatment. We explore changes on the Temperament and Character Inventory (TCI) from pretreatment to 1-year follow-up in a sample of women with bulimia nervosa (BN). Ninety-one women who met DSM-III-R criteria for BN, completed pretreatment measures, and participated in a randomized clinical trial with a core treatment of CBT were available for follow-up at 1 year. Participants were assessed at pretreatment using the TCI as part of a larger assessment battery, and received eight sessions of CBT and eight sessions of exposure with response prevention or relaxation training. The TCI was readministered at the 1-year follow-up point. We compared TCI scores across the two administrations and explored the relation between clinical status at 1 year and TCI scores. Significant decreases in the TCI temperament scale of harm avoidance and increases in the character scale of self-directedness were observed between pretreatment and 1-year follow-up. Observed differences in these scales were independent of change in depression scores as well as other measures of therapeutic change (i.e., binge and purge frequency). We conclude that self-directedness and harm avoidance scores are positively affected by CBT. Higher scores on self-directedness not only predict treatment outcome for BN, but overall self-directedness is improved with CBT in women with BN. These results suggest that the elements of CBT affecting self-directedness may hold promise for enhancing the effectiveness of treatment for BN. abstract_id: PUBMED:15838534 Contribution of NTRK2 to the genetic susceptibility to anorexia nervosa, harm avoidance and minimum body mass index. Anorexia nervosa (AN) and bulimia nervosa (BN) are eating disorders (ED) with complex genetic and environmental components. Genetic studies and animal models support the participation of brain-derived neurotrophic factor (BDNF) in the vulnerability to AN and BN. We investigated the genetic contribution of the BDNF-specific receptor neurotrophic tyrosine kinase receptor type 2 (NTRK2) to the susceptibility to ED. We have screened the entire NTRK2 gene in 91 patients with ED and have identified 14 single-nucleotide polymorphisms (SNPs). A population-based association study with six SNPs from the NTRK2 locus was performed in 164 ED patients and 121 controls. Significant evidence of association for markers -69C>G and IVS13+40G>A was detected. We also observed a strong association between the C-A-insC haplotype (-69/IVS13+40/2784-2785) and binge-eating/purging AN (ANP, P=0.006; OR=2.27), and a reduced frequency of haplotype G-A-delC in BN patients (P=0.034; OR=0.6).
The analysis of ED-related phenotypes revealed a clear association between NTRK2 and both high Harm Avoidance scores measured by the Temperament and Character Inventory (TCI-R; P=0.003) and minimum body mass index (minBMI; P<0.001). Our data support a contribution of NTRK2 to the genetic susceptibility to ED, mainly ANP, and to ED-related phenotypic traits, such as Harm Avoidance and minBMI. abstract_id: PUBMED:10422602 Predictors of rapid and sustained response to cognitive-behavioral therapy for bulimia nervosa. Objective: To examine characteristics of individuals who show a rapid and sustained response to cognitive-behavioral therapy (CBT) for bulimia nervosa (BN). Method: As part of a randomized clinical trial designed to dismantle CBT for BN, we compared 19 individuals who exhibited complete abstinence from binging and purging after only eight sessions of CBT and maintained abstinence throughout the duration of treatment and the 1-year follow-up interval, to 79 individuals who had a more variable response to treatment. Demographics, baseline clinical characteristics, lifetime comorbidity, and personality disorder symptoms and profiles from the Temperament and Character Inventory were examined. Results: In univariate analyses, frequency of binging and purging at baseline, low scores on the Eating Disorders Inventory (EDI) Bulimia subscale, lower harm avoidance, and higher self-directedness were associated with rapid response. In multiple regression analyses, frequency of binging at baseline (OR = 0.87; 95% CI 0.77-0.98) and self-directedness (OR = 1.12; 95% CI 1.04-1.21) independently predicted rapid and sustained treatment response. Conclusion: Frequency of binging and the character quality of self-directedness may be useful predictors of those individuals who are likely to respond positively to a brief course of CBT for BN. abstract_id: PUBMED:26228412 Initial test of an emotional avoidance model of restriction in anorexia nervosa using ecological momentary assessment. It has been hypothesized that restrictive eating allows individuals with anorexia nervosa (AN) to avoid contact with negative emotions; however, this presumption has not been directly tested. In this study, we conducted an initial investigation examining whether restrictive eating serves an emotional avoidance function among individuals with AN. Females with AN (n = 118) reported on negative and positive affect, anxiety/tension, and eating behaviors at multiple time points daily over a 2-week period using ecological momentary assessment methodology. Affective patterns were compared using generalized estimating equation models between days in which participants reported either: (1) relatively high restriction (without binge eating); (2) relatively low restriction (without binge eating); (3) binge eating; or (4) no restriction or binge eating. We hypothesized that, if restriction were functioning to avoid negative affect, average negative affect and anxiety/tension, as well as average negative and positive affect lability, would be lower and average positive affect would be higher on days characterized by high levels of restriction compared to other eating patterns.
Contrary to hypotheses: (1) average negative affect, anxiety/tension, and positive affect were not significantly different between days characterized by high restriction and those characterized by low or no restriction; (2) negative affect and anxiety/tension lability were higher on days characterized by high restriction compared to no restriction or binge eating days; (3) anxiety/tension lability was higher on days characterized by high versus low levels of restriction. This pattern of findings does not support an avoidance model of restrictive eating for individuals with AN. abstract_id: PUBMED:20653188 Psychological predictors of body image dissatisfaction in women suffering from eating disorders. The objective of the present study was to identify factors influencing body image dissatisfaction in female patients with anorexia and bulimia nervosa. 61 women aged 16 to 31 (M = 21.69; SD = 3.85) completed questionnaires evaluating dissatisfaction with their body (Body Dissatisfaction Scale by Garner et al., 1984), attitudes about the body (Body Attitude Test by Probst et al., 1995), avoidance with regard to body image (Body Image Avoidance Questionnaire by Rosen et al., 1991), negative mood states (Hospital Anxiety and Depression Scale by Zigmond and Snaith, 1983) as well as self-esteem (Self-Esteem Inventory by Coopersmith, 1984). Negative appreciation of body size, symptoms of depression, grooming and weighing, lack of familiarity with one's own body, as well as low self-esteem in general, could predict body image dissatisfaction among eating-disordered women. Knowing the most important determinants of body image could give indications for the prevention of cognitive distortions concerning body image. abstract_id: PUBMED:30019259 Psychological inflexibility in overweight and obese people from the perspective of acceptance and commitment therapy (ACT). Purpose: To identify the psychological inflexibility (PI) processes (experiential avoidance, cognitive fusion, poor contact with the present moment, unclear personal values and lack of commitment to action) associated with body mass index (BMI), depression, anxiety and stress symptoms and binge eating in overweight and obese people by drawing on the acceptance and commitment therapy (ACT) framework. Methods: A cross-sectional study of 243 people with a BMI of at least 25 and aged from 18 to 60 years old. The following instruments were used: Acceptance and Action Questionnaire II (AAQ-II), Acceptance and Action Questionnaire-Weight (AAQ-W), Valued Living Questionnaire (VLQ), Cognitive Fusion Questionnaire (CFQ-7), Cognitive Fusion Questionnaire-Body Image (CFQ-BI), Mindful Eating Questionnaire (MEQ), Binge Eating Scale (BES) and Depression, Anxiety and Stress Scale (DASS-21). Data were analyzed using one-way ANOVA and t tests. Results: There were no differences between BMI categories with respect to PI variables, but PI was positively related to the severity of binge eating and to symptoms of depression, anxiety and stress. Conclusion: The development of interventions focused on psychological inflexibility can be useful for the treatment of symptoms associated with overweight and obesity. Level Of Evidence: Level V, cross-sectional descriptive study. Answer: Yes, cognitive avoidance does mediate the relation of anxiety and binge eating.
The study with participants from diverse geographic regions across the United States found that cognitive avoidance partially mediated the relation between anxiety and binge eating in the full sample. However, the results differed across genders, with cognitive avoidance acting as a mediator for women but not for men (PUBMED:27100227). This supports the "escape from awareness" model of binge eating among women, suggesting that cognitive avoidance is a significant factor in the relationship between anxiety and binge eating, at least for women. Additionally, other studies have found that avoidance coping, which can be considered a form of cognitive avoidance, predicted more disordered eating across most outcomes examined. Even when controlling for demographic variables and including depression and anxiety in the models, avoidance coping remained a relevant predictor (PUBMED:31278657). This further supports the notion that cognitive avoidance plays a mediating role in the relationship between anxiety and disordered eating behaviors, including binge eating. Therefore, based on the empirical evidence from these studies, it can be concluded that cognitive avoidance does mediate the relation of anxiety and binge eating, particularly among women.
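For readers unfamiliar with how "partial mediation" is established, the sketch below shows one conventional product-of-coefficients (Sobel) check in Python. It uses simulated data and assumed variable names throughout; it illustrates the general technique, not the analysis pipeline of the studies cited above.

```python
# Hypothetical sketch of a product-of-coefficients (Sobel) mediation test,
# of the kind used to ask whether cognitive avoidance (M) mediates the
# anxiety (X) -> binge eating (Y) relation. All data here are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
anxiety = rng.normal(size=n)                                   # X: predictor
avoidance = 0.5 * anxiety + rng.normal(size=n)                 # M: candidate mediator
binge = 0.4 * avoidance + 0.2 * anxiety + rng.normal(size=n)   # Y: outcome

X = sm.add_constant(anxiety)
path_a = sm.OLS(avoidance, X).fit()                 # a: X -> M
XM = sm.add_constant(np.column_stack([anxiety, avoidance]))
path_b = sm.OLS(binge, XM).fit()                    # b: M -> Y, controlling for X

a, se_a = path_a.params[1], path_a.bse[1]
b, se_b = path_b.params[2], path_b.bse[2]
sobel_z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
print(f"indirect effect a*b = {a*b:.3f}, Sobel z = {sobel_z:.2f}")
# "Partial" mediation: a*b is significant while the direct anxiety
# coefficient in path_b remains nonzero.
```

In practice, bootstrapped confidence intervals for the indirect effect are often preferred over the Sobel test, but the decomposition into paths a and b is the same.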
Instruction: Outcomes and complications of extension of previous long fusion to the sacro-pelvis: is an anterior approach necessary? Abstracts: abstract_id: PUBMED:22722041 Outcomes and complications of extension of previous long fusion to the sacro-pelvis: is an anterior approach necessary? Background: Patients with previous multilevel spinal fusion may require extension of the fusion to the sacro-pelvis. Our objective was to evaluate the outcomes and complications of these patients, stratified based on whether the revision was performed using a posterior-only spinal fusion (PSF) or combined anterior-posterior spinal fusion (APSF). Methods: A retrospective, multicenter evaluation of adults (>18 years old) with a history of prior spinal fusion for scoliosis (≥4 levels) terminating in the distal lumbar spine requiring extension of fusion to the sacro-pelvis (including iliac fixation in all cases), with minimum 2-year follow-up, was performed. Patients were stratified based on approach (APSF vs. PSF) and inclusion of pedicle subtraction osteotomy (PSO). The PSF group included patients treated with an anterior interbody fusion done through a posterior approach, whereas patients in the APSF group all had both anterior and posterior surgical approaches. Clinical outcomes were based on the Scoliosis Research Society (SRS-22) questionnaire. Results: Between 1995 and 2006, 45 patients (mean age = 49 years) met inclusion criteria, with a mean follow-up of 41.9 months (range 24 to 135 months). Demographic, preoperative, operative, and postoperative radiographic, SRS-22, and follow-up results were similar between APSF (n=30) and PSF (n=15) groups. The APSF group had more complications (13 of 30 vs. 3 of 15) and a greater number of pseudarthrosis (4 of 30 vs. 0 of 15) than the PSF group; however, these differences did not reach statistical significance. Patients treated with a PSO (n=13) had greater sagittal vertical axis correction (7.7 cm vs. 2.2 cm; P=.04) compared with patients not treated with a PSO (n=32). There were no differences in complication rates or follow-up SRS-22 scores based on whether a PSO was performed (P>.05). Conclusions: Among adults with previously treated scoliosis requiring extension to the sacro-pelvis, PSF produced radiographic fusion and clinical outcomes equivalent to APSF, whereas complication rates may be lower. PSO resulted in greater sagittal plane correction, without an increase in overall complication rates. abstract_id: PUBMED:24482619 Early results of sacro-iliac joint fixation following long fusion to the sacrum in adult spine deformity. Background: The sacro-iliac joint (SIJ) is the largest joint in the human body. When the lumbar spine is fused to the sacrum, motion across the SIJ is increased, leading to increased degeneration of the SIJ. Degeneration can become symptomatic in up to 75% of the cases when a long lumbar fusion ends with a sacral fixation. If medical treatments fail, patients can undergo surgical fixation of the SIJ. Questions/purposes: This study reports the results of short-term complications, length of stay, and clinical as well as radiographic outcomes of patients undergoing percutaneous SIJ fixation for SIJ pain following long fusions to the sacrum for adult scoliosis. Methods: A retrospective review of all the patients who underwent a percutaneous fixation of the SIJ after corrective scoliosis surgery was performed in a single specialized scoliosis center between the years 2011-2013. 
Ten SIJ fusions were performed in six patients who failed conservative care for SIJ arthritis. Average age was 50 years (range 25-60 years). On average, the patients were 15.3 years past the original surgical procedure (range 4-25 years). Average post-operative follow-up was 10.25 months (range 4-15 months). The medical charts of the patients were reviewed for hospital stay, complications, pre- and post-operative pain, quality of life, and satisfaction with surgery using the visual analogue scale (VAS), Scoliosis Research Society (SRS-22) and Oswestry Disability Index (ODI) questionnaires. Images were reviewed for fixation of the SIJ, fusion, and deviation of the implants from the SIJ. Results: There were no complications in surgery or post-operatively. Discharge was on post-operative day 2 (range 1-4 days). Leg VAS score improved from 6.5 to 2.0 (P < 0.005; minimal clinically important difference (MCID) 1.6). Back VAS score decreased from 7.83 to 2.67 mm (P < 0.005; MCID 1.2). ODI scores dropped from 22.2 to 10.5 (P = 0.0005; MCID 12.4). SRS-22 scores increased from 2.93 to 3.65 (P = 0.035; MCID 0.2), with the largest increases in the pain, function, and satisfaction domains of the questionnaires. Conclusion: Fixation of the SIJ in patients who fail conservative care for SIJ arthritis after long fusions ending in the sacrum provides a reduction in back pain and improved quality of life over short- and medium-term follow-up. abstract_id: PUBMED:37558910 Risk factors for mechanical complications after fusion extension surgery for lumbar adjacent segment disease. Purpose: Adjacent segment disease (ASD) is a common complication in fusion surgery. In the event of solid segmental fusion, previous implants can be removed or preserved during fusion extension for ASD. This study compared the surgical outcomes of patients with implants removed versus preserved and analyzed the risk factors for postoperative mechanical complications. Methods: Patients who underwent fusion extension for lumbar ASD from 2011 to 2019 with a minimum 2-year follow-up were retrospectively reviewed. Spinopelvic parameters were measured preoperatively and postoperatively. Clinical outcomes and surgical complications were compared between groups with implants preserved and removed. Risk factors for mechanical complications, including clinical, surgical, and radiographic factors, were analyzed. Results: Sixty-nine patients (mean age, 69.9 ± 6.9 years) were included. The mean numbers of initial and extended fused segments were 2.8 ± 0.7 and 2.7 ± 0.7, respectively. Previous implants were removed in 43 patients (R group) and preserved in 26 patients (P group). Both groups showed an improvement in clinical outcomes without between-group differences. The operation time was significantly longer in the R group (260 vs 207 min, p < 0.001). Mechanical complications occurred in 13 patients (12 in the R group and 1 in the P group) and reoperation was needed in 3 patients (R group). Implant removal, index fusion surgery including L5-S1, and postoperative sagittal malalignment were risk factors for mechanical complications. Conclusion: Implant removal was a risk factor for mechanical complications. Index fusion surgery including L5-S1 and postoperative sagittal malalignment were also risk factors for mechanical complications. abstract_id: PUBMED:29038952 Lateral Lumbar Interbody Fusion-Outcomes and Complications. Purpose Of Review: Lateral lumbar interbody fusion (LLIF) is a relatively new, minimally invasive technique for interbody fusion.
The goal of this review is to provide a general overview of LLIF with a special focus on outcomes and complications. Recent Findings: Since the first description of the technique in 2006, the indications for LLIF have expanded and the rate of LLIF procedures performed in the USA has increased. LLIF has several theoretical advantages compared to other approaches, including the preservation of the anterior and posterior annular/ligamentous structures, insertion of wide cages resting on the dense apophyseal ring bilaterally, and augmentation of disc height with indirect decompression of neural elements. Favorable long-term outcomes and a reduced risk of visceral/vascular injuries, incidental dural tears, and perioperative infections have been reported. However, approach-related complications such as motor and sensory deficits remain a concern. In well-indicated patients, LLIF can be a safe procedure used for a variety of indications. abstract_id: PUBMED:20657693 Minimally invasive versus open transforaminal lumbar interbody fusion. Background: To date, a paucity of literature exists directly comparing minimally invasive (MI) and open approaches for transforaminal lumbar interbody fusion (TLIF). The purpose of this study was to directly compare safety and effectiveness for these two surgical approaches. Materials And Methods: Open or minimally invasive TLIF was performed in 63 and 76 patients, respectively. All consecutive minimally invasive TLIF cases were matched with a comparable cohort of open TLIF cases using three variables: diagnosis, number of spinal levels, and history of previous lumbar surgery. Patients were treated for painful degenerative disc disease with or without disc herniation, spondylolisthesis, and/or stenosis at one or two spinal levels. Clinical outcome (self-report measures, e.g., visual analog scale (VAS), patient satisfaction, and MacNab's criteria), operative data (operative time, estimated blood loss), length of hospitalization, and complications were assessed. Average follow-up for patients was 37.5 months. Results: The mean change in VAS scores postoperatively was greater (5.2 vs. 4.1) in the open TLIF patient group (P = 0.3). MacNab's criteria score was excellent/good in 67% and 70% (P = 0.8) of patients in open and minimally invasive TLIF groups, respectively. The overall patient satisfaction was 72.1% and 64.5% (P = 0.4) in open and minimally invasive TLIF groups, respectively. The total mean operative time was 214.9 min for open and 222.5 min for minimally invasive TLIF procedures (P = 0.5). The mean estimated blood loss for minimally invasive TLIF (163.0 ml) was significantly lower (P < 0.0001) than the open approach (366.8 ml). The mean duration of hospitalization in the minimally invasive TLIF group (3 days) was significantly shorter (P = 0.02) than the open group (4.2 days). The total rate of neurological deficit was 10.5% in the minimally invasive TLIF group compared to 1.6% in the open group (P = 0.02). Conclusions: Minimally invasive TLIF technique may provide equivalent long-term clinical outcomes compared to the open TLIF approach in a select population of patients. The potential benefit of minimized tissue disruption, reduced blood loss, and shorter hospitalization must be weighed against the increased rate of neural injury-related complications associated with a learning curve.
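The group comparisons quoted in the TLIF abstract above come down to simple tests on proportions. As a hedged illustration, the snippet below re-checks the neurological-deficit comparison (10.5% of 76 MI-TLIF patients vs 1.6% of 63 open-TLIF patients) with Fisher's exact test; the abstract does not state which test produced P = 0.02, so the choice of test and the reconstructed counts are assumptions.

```python
# Hypothetical re-check of the neurological-deficit comparison reported
# above. Counts are reconstructed from the quoted percentages and group
# sizes (8/76 ~ 10.5%; 1/63 ~ 1.6%); Fisher's exact test is a common
# choice for small counts and is assumed here for illustration.
from scipy.stats import fisher_exact

mi_deficit, mi_total = 8, 76       # MI-TLIF group
open_deficit, open_total = 1, 63   # open TLIF group

table = [[mi_deficit, mi_total - mi_deficit],
         [open_deficit, open_total - open_deficit]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```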
abstract_id: PUBMED:37029604 Lateral lumbar interbody fusion - clinical outcomes, fusion rates and complications with recombinant human bone morphogenetic protein-2. Background: The authors report an Australian experience of lateral lumbar interbody fusion (LLIF) with respect to clinical outcomes, fusion rates, and complications, with recombinant human bone morphogenetic protein-2 (rhBMP-2) and other graft materials. Methods: Retrospective cohort study of LLIF patients 2011-2021. LLIFs performed lateral decubitus by four experienced surgeons past their learning curve. Graft materials classified rhBMP-2 or non-rhBMP-2. Patient-reported outcomes assessed by VAS, ODI, and SF-12 preoperatively and postoperatively. Fusion rates assessed by CT postoperatively at 6 and 12 months. Complications classified minor or major. Clinical outcomes and complications analysed and compared between rhBMP-2 and non-rhBMP-2 groups. Results: A cohort of 343 patients underwent 437 levels of LLIF. Mean age 67 ± 11 years (range 29-89) with a female preponderance (65%). Mean BMI 29kg/m2 (18-56). Most common operated levels L3/4 (36%) and L4/5 (35%). VAS, ODI and SF-12 improved significantly from baseline. Total complication rate 15% (53/343) with minor 11% (39/343) and major 4% (14/343). Ten patients returned to OR (2-wound infection, 8-further instrumentation and decompression). Most patients (264, 77%) received rhBMP-2, the remainder a non-rhBMP-2 graft material. No significant differences between groups at baseline. No increase in minor or major complications in the rhBMP-2 group compared to the non-rhBMP-2 group respectively; (10.6% vs 13.9% [p = 0.42], 2.7% vs 8.9% [p < 0.01]). Fusion rates significantly higher in the rhBMP-2 group at 6 and 12 months (63% vs 40%, [p < 0.01], 92% vs 80%, [p < 0.02]). Conclusion: LLIF is a safe and efficacious procedure. rhBMP-2 in LLIF produced earlier and higher fusion rates compared to available non-rhBMP-2 graft substitutes. abstract_id: PUBMED:24703447 Minimally invasive transforaminal lumbar interbody fusion (MI-TLIF): surgical technique, long-term 4-year prospective outcomes, and complications compared with an open TLIF cohort. Transforaminal lumbar interbody fusion (TLIF) is an important surgical option for the treatment of back pain and radiculopathy. The minimally invasive TLIF (MI-TLIF) technique is increasingly used to achieve neural element decompression, restoration of segmental alignment and lordosis, and bony fusion. This article reviews the surgical technique, outcomes, and complications in a series of 144 consecutive 1- and 2-level MI-TLIFs in comparison with an institutional control group of 54 open traditional TLIF procedures with a mean of 46 months' follow-up. The evidence base suggests that MI-TLIF can be performed safely with excellent long-term outcomes. abstract_id: PUBMED:32197973 Long-term complications of minimally-open anterolateral interbody fusion for L5-S1. Background: Multiple surgical techniques and approaches exist to obtain lumbar interbody fusion. Anterolateral (oblique) is a relatively recent technique. Controversy exists for its use at the L5-S1 level. We performed this study in order to show the safety and efficacy of this technique. The aim of this study was to report the long-term complications and fusion rates of minimally-open (mini-open) anterolateral interbody fusion at the L5-S1 level. Methods: We retrospectively analyzed all patients who underwent mini-open anterolateral interbody fusion for L5-S1 level in our department. 
The data collected were the following: age, sex, surgical indication, acute (less than four weeks) and long-term complications (>3 months), fusion at six months and length of follow-up. Results: Seventeen patients (8M/9F) underwent mini-open anterolateral interbody fusion at L5-S1. The mean age was 64.5 years. The surgical indication was scoliosis in 10 cases, flat back in 4 cases, and spondylolisthesis in 3 cases. All patients underwent a complementary posterior procedure that included fixation. Mean blood loss was 252.9mL for the anterior procedure. Eight acute and minor complications occurred (anemia, delirium, and psoas paresis). Two acute complications required surgical intervention (cage displacement and hematoma). Long-term complications were observed in 2 cases and included proximal junction kyphosis and non-union. The fusion rate was evaluated at 88%. The mean follow-up period was 28.3 months. Conclusions: Mini-open anterolateral interbody fusion at the L5-S1 level is safe and results in fusion at the same rate as anterior interbody fusion. Most acute complications are minor and resolve spontaneously. abstract_id: PUBMED:26605024 Fulminant Necrotizing Fasciitis of the Thigh, Following an Infection of the Sacro-iliac Joint in an Immunosuppressed, Young Woman. Necrotizing soft tissue infection of an extremity is a rare but life-threatening disease. The disease is an infection that involves the soft tissue layer and is characterized by rapidly spreading inflammation (especially of the fascial planes and the surrounding tissues) with a high mortality. Early diagnosis is essential for the outcome of the patients. Radical surgical debridement is the treatment of choice. The predisposing factors are immunosuppression, diabetes mellitus and drug abuse. This report presents a case of necrotizing fasciitis in the thigh, following an abscess of the sacro-iliac joint, as a rare complication in a young, immunosuppressed woman. The patient's history revealed intravenous drug abuse and hepatitis C. After immediate diagnosis by magnetic resonance imaging, radical surgical debridement was required and performed. Prior to soft tissue coverage with a split skin graft, five additional sequential debridements were necessary. During her hospital stay, the patient experienced further cerebral and pulmonary septic embolisms and an infection of the elbow. Six months after admission, the patient was discharged in good condition to a rehabilitation center. Necrotizing fasciitis is a life-threatening complication following an abscess of the sacro-iliac joint. Physicians must be vigilant to inflammatory signs and pain in immunosuppressed patients. An abscess of the sacro-iliac joint is rare, but complications of an untreated abscess can be fatal in these patients. abstract_id: PUBMED:24333459 Comparison of complications, costs, and length of stay of three different lumbar interbody fusion techniques: an analysis of the Nationwide Inpatient Sample database. Background Context: Lumbar interbody fusion (LIF) techniques have been used for years to treat a number of pathologies of the lower back. These procedures may use an anterior, posterior, or combined surgical approach. Each approach is associated with a unique set of complications, but the exact prevalence of complications associated with each approach remains unclear. 
Purpose: To investigate the rates of perioperative complications of anterior lumbar interbody fusion (ALIF), posterior/transforaminal lumbar interbody fusion (P/TLIF), and LIF with a combined anterior-posterior interbody fusion (APF). Study Design/setting: Retrospective review of national data from a large administrative database. Patient Sample: Patients undergoing ALIF, P/TLIF, or APF. Outcome Measures: Perioperative complications, length of stay (LOS), total costs, and mortality. Methods: The Nationwide Inpatient Sample database was queried for patients undergoing ALIF, P/TLIF, or APF between 2001 and 2010 as identified via International Classification of Diseases, ninth revision codes. Univariate analyses were carried out comparing the three cohorts in terms of the outcomes of interest. Multivariate analysis for primary outcomes was carried out adjusting for overall comorbidity burden, race, gender, age, and length of fusion. National estimates of annual total number of procedures were calculated based on the provided discharge weights. Geographic distribution of the three cohorts was also investigated. Results: An estimated total of 923,038 LIFs were performed between 2001 and 2010 in the United States. Posterior/transforaminal lumbar interbody fusions accounted for 79% to 86% of total LIFs between 2001 and 2010, ALIFs for 10% to 15%, and APF decreased from 10% in 2002 to less than 1% in 2010. On average, P/TLIF patients were oldest (54.55 years), followed by combined approach (47.23 years) and ALIF (46.94 years) patients (p<.0001). Anterior lumbar interbody fusion, P/TLIF, and combined surgical costs were $75,872, $65,894, and $92,249, respectively (p<.0001). Patients in the P/TLIF cohort had the greatest number of comorbidities, having the highest prevalence for 10 of 17 comorbidities investigated. Anterior-posterior interbody fusion group was associated with the greatest number of complications, having the highest incidence of 12 of the 16 complications investigated. Conclusions: These data help to define the perioperative risks for several LIF approaches. Comparison of outcomes showed that a combined approach is more expensive and associated with greater LOS, whereas ALIF is associated with the highest postoperative mortality. These trends should be taken into consideration during surgical planning to improve clinical outcomes. Answer: The outcomes and complications of extending a previous long fusion to the sacro-pelvis have been evaluated in several studies, with a focus on whether an anterior approach is necessary. A retrospective multicenter evaluation of adults with a history of prior spinal fusion for scoliosis requiring extension of fusion to the sacro-pelvis found that posterior-only spinal fusion (PSF) produced radiographic fusion and clinical outcomes equivalent to combined anterior-posterior spinal fusion (APSF), with potentially lower complication rates. The study also noted that pedicle subtraction osteotomy (PSO) resulted in greater sagittal plane correction without an increase in overall complication rates (PUBMED:22722041). Another study reported on the short-term complications, length of stay, and clinical as well as radiographic outcomes of patients undergoing percutaneous sacro-iliac joint (SIJ) fixation for SIJ pain following long fusions to the sacrum for adult scoliosis. 
The results indicated a reduction in back pain and improved quality of life in the short and medium range follow-up period, with no complications in surgery or post-operatively (PUBMED:24482619). A study comparing the surgical outcomes of patients with and without implants during fusion extension for lumbar adjacent segment disease (ASD) found that implant removal was a risk factor for mechanical complications. The study also identified index fusion surgery including L5-S1 and postoperative sagittal malalignment as risk factors for mechanical complications (PUBMED:37558910). In summary, the necessity of an anterior approach for extending a previous long fusion to the sacro-pelvis is not clearly supported by the available evidence. Posterior-only approaches may offer similar outcomes with potentially lower complication rates, and the decision to use an anterior approach should be based on individual patient factors and surgical goals.
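Risk-factor statements like "implant removal was a risk factor for mechanical complications" typically come from a multivariable logistic model. The sketch below, run on simulated data with hypothetical variable names, shows the general shape of such an analysis; it is not the cited study's actual model, whose specification the abstract does not give.

```python
# Illustrative risk-factor model (simulated data) of the kind behind
# statements such as "implant removal was a risk factor for mechanical
# complications." All variable names and effect sizes are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
implant_removed = rng.integers(0, 2, n)
includes_l5_s1 = rng.integers(0, 2, n)
sagittal_malalign = rng.integers(0, 2, n)

# Simulate a binary complication outcome from a logistic model
logit = -2.5 + 1.0*implant_removed + 0.8*includes_l5_s1 + 0.9*sagittal_malalign
complication = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([implant_removed, includes_l5_s1, sagittal_malalign]))
fit = sm.Logit(complication, X).fit(disp=0)
names = ["const", "implant_removed", "includes_l5_s1", "sagittal_malalign"]
for name, coef in zip(names, fit.params):
    print(f"{name:18s} OR = {np.exp(coef):.2f}")   # exponentiated coefficients = odds ratios
```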
Instruction: Short-term outcome of periviable small-for-gestational-age babies: is our counseling up to date? Abstracts: abstract_id: PUBMED:27854384 Short-term outcome of periviable small-for-gestational-age babies: is our counseling up to date? Objective: There are limited data for counseling on and management of periviable small-for-gestational-age (SGA) fetuses. We therefore aimed to investigate the short-term outcome of periviable SGA fetuses in relation to the likely underlying cause. Methods: This was a retrospective study of data from three London tertiary fetal medicine centers obtained between 2000 and 2015. We included viable singleton pregnancies with a severely small fetus, defined as those with an abdominal circumference ≤ 3rd percentile, identified between 22 + 0 and 25 + 6 weeks' gestation. Data obtained included fetal biometry, presence of placental anomalies, uterine and fetal Doppler and neonatal outcome. We excluded cases with structural abnormalities, proven or suspected abnormal karyotype or genetic syndromes. Cases were classified according to the suspected underlying cause of the small fetal size into one of the following categories: uteroplacental insufficiency, evidence of placental damage with normal uterine artery Doppler, viral infection, or unclassified. Results: There were 245 cases included in the study. Of these, at diagnosis of SGA, 201 (82%) were categorized as uteroplacental cause, 13 (5%) as suspected placental cause, one (0.4%) as suspected viral cause and 30 (12%) could not be assigned to any of these categories. Overall, 101 (41%) cases survived the neonatal period; 89 (36%) underwent in-utero fetal demise, 22 (9%) died neonatally and 33 (14%) pregnancies were terminated. The diagnosis-to-delivery interval was 8.1 weeks in those that survived and 4.5 weeks in those that died neonatally. Conclusions: Almost 90% of periviable SGA cases are associated with uteroplacental insufficiency or intraplacental damage. Survival is related to gestational age at delivery, with outcomes better than might be assumed at diagnosis and some pregnancies reaching term. abstract_id: PUBMED:28477660 Periviable Birth and the Shifting Limit of Viability. Periviable birth carries a high risk of fetal and newborn death, and the potential for life-long complications in survivors. The family at risk for periviable birth should receive objective, accurate, and up-to-date information regarding fetal, newborn, and maternal risks and outcomes with delivery or with continued pregnancy. This article describes the various descriptive terms for delivery near the limit of viability, considers the evolving limit of viability over time, and highlights the importance of adjusted counseling with brief pregnancy prolongation and/or changing clinical circumstances within the periviable period. abstract_id: PUBMED:24290396 Prevention of periviable birth: bed rest, tocolysis, progesterone, and cerclage. The periviable period is associated with a rapid improvement in both mortality and morbidity with increasing gestational age. Therapies that can prolong gestation have the potential for markedly improving outcome compared to later in pregnancy. The therapies commonly prescribed in order to prolong gestation include bed rest, tocolysis, progesterone, and cerclage. Although these therapies are used to prevent preterm birth throughout gestation, their impact on perinatal outcome during the periviable period remains largely unknown.
These therapies are discussed in relation to the periviable period, exploring the available data and their possible impact on perinatal outcome. Studies specifically evaluating therapeutic interventions during the periviable period are lacking. abstract_id: PUBMED:29875817 Reproductive suppression, birth defects, and periviable birth. We argue that reproductive suppression has clinical implications beyond its contribution to the burden of spontaneous abortion. We theorize that the incidence of births before the 28th week of gestation, which contribute disproportionately to infant morbidity and mortality, varies over time in part due to reproductive suppression in the form of selection in utero. We further theorize that the prevalence of structural birth defects among survivors to birth from conception cohorts gauges selection in utero. We based these theories on literature positing that natural selection conserved mechanisms that spontaneously abort "risky" pregnancies including those otherwise likely to yield infants with structural birth defects or small-for-gestational age males. We test our theory using high-quality birth defect surveillance data. We identify 479,885 male infants exposed to strong selection defined as membership in conception cohorts ranked in the lowest quartile of odds of a birth defect among live-born females. We estimate the risk of periviable birth among these infants as a function of selective pressure as well as of mother's race/ethnicity and age. We find that male infants from exposed conception cohorts exhibited 10% lower odds of periviable birth than males from other conception cohorts. Our findings support the argument that selection in utero has implications beyond its contribution to the burden of spontaneous abortion. abstract_id: PUBMED:24468564 Initial resuscitation and stabilization of the periviable neonate: the Golden-Hour approach. There is a paucity of data to support recommendations for stabilization and resuscitation of the periviable neonate in the delivery room. The importance of delivery at a tertiary center with adequate experience, resuscitation team composition, and training for a periviable birth is reviewed. Evidence for delayed cord clamping, delivery room temperature stabilization, strategies to establish functional residual capacity, and adequate ventilation as well as oxygen use in the delivery room is generally based on expert consensus, physiologic plausibility, as well as data from slightly more mature extremely low gestational-age neonates. Little is known about optimal care in the delivery room of these most fragile infants, and thus the need for research remains critical. abstract_id: PUBMED:35437890 Death and severe morbidity in isolated periviable small-for-gestational-age fetuses. Objective: This study aims to predict perinatal death or severe sequelae in isolated small-for-gestational-age fetuses, diagnosed at a periviable gestational age, based on ultrasound and Doppler parameters at diagnosis. Design: Observational study. Setting: A tertiary perinatal centre. Population: A cohort of singleton non-malformed fetuses suspected to be small for gestational age (estimated fetal weight, EFW, <10th centile) diagnosed at 22.0-25.6 weeks of gestation.
The following parameters were recorded at diagnosis: severe smallness (<3rd centile); absent or reversed end-diastolic velocity in umbilical artery; abnormal middle cerebral artery Doppler; abnormal cerebroplacental ratio; abnormal uterine artery Doppler; and absent or reversed end-diastolic velocity in the ductus venosus. Methods: Logistic regression analysis. Main Outcome Measures: Predictive performance of EFW and Doppler parameters for short-term adverse outcome of perinatal morbimortality and composite serious adverse outcomes (death, neurological impairment or severe bronchopulmonary dysplasia). Results: A total of 155 pregnancies were included. There were 13 (8.4%) intrauterine and 11 (7.7%) neonatal deaths. A short-term adverse perinatal outcome occurred in 40 (25.8%) pregnancies. There were 31 (20%) cases of serious adverse outcomes. For the prediction of serious adverse outcomes, the combination of absent or reversed end-diastolic velocity in the umbilical artery and impaired middle cerebral artery detected by Doppler evaluation achieved a detection rate of 87%, with a false-positive rate of 14% (accuracy 86%). Conclusion: In periviable isolated small-for-gestational-age fetuses, a Doppler evaluation of the umbilical and fetal brain circulation can accurately predict short-term adverse perinatal complications and serious adverse outcomes. abstract_id: PUBMED:30817025 Low mortality and short-term morbidity in very preterm infants in Austria 2011-2016. Aim: The current study determined survival, short-term neonatal morbidity and predictors for death or adverse outcome of very preterm infants in Austria. Methods: This population-based cohort study included 5197 very preterm infants (53.3% boys) born between 2011 and 2016 recruited from the Austrian Preterm Outcome Registry. Main outcome measures were gestational age-related mortality and major short-term morbidities. Results: Overall, survival rate of all live-born infants included was 91.6% and ranged from 47.1% and 73.4% among those born at 23 and 24 weeks of gestation to 84.9% and 88.2% among infants born at 25 and 26 weeks to more than 90.0% among those with a gestational age of 27 weeks or more. The overall prevalence of chronic lung disease, necrotising enterocolitis requiring surgery, intraventricular haemorrhage Grades 3-4, and retinopathy of prematurity Grades 3-5 was 10.0%, 2.1%, 5.5%, and 3.6%, respectively. Low gestational age, low birth weight, missing or incomplete course of antenatal steroids, male sex, and multiple births were significant risk predictors for death or adverse short-term outcome. Conclusion: In this national cohort study, overall survival rates were high and short-term morbidity rate was low. abstract_id: PUBMED:28409753 Short term cost of care for the surviving periviable neonate. Objective: To determine the hospital cost and distribution of financial charges for the initial hospitalization of the surviving periviable neonate. Study Design: In this retrospective case series, we analyzed medical records and financial data for neonates 23-25 weeks' gestational age in a single tertiary care NICU over 42 months. A detailed breakdown of hospital cost components and charges was determined for all survivors during their initial hospitalization. Statistical significance was determined using the Bonferroni-Sidak method. Results: Overall survival was 78% in infants born at 23-25 weeks' gestational age. 
Survival increased and length of stay and hospital costs decreased with increased gestational age (p < 0.05 for all). Hospital charges were distributed as: NICU 56%, respiratory 11%, pharmacy 6%, laboratory 6%, radiology 6%, surgery 1%, neonatology 13% and miscellaneous 1%. Conclusion: Our study describes the hospital cost and distribution of charges for the periviable neonate during the initial hospitalization. These economic data may guide clinicians in quality improvement and cost management. abstract_id: PUBMED:37170688 Association of mode of delivery with short-term and neurodevelopmental outcomes in periviable singleton infants: A nationwide database study. Objective: To evaluate the association of mode of delivery (MOD) with short-term and neurodevelopmental outcomes at 2 years of corrected age (CA) in periviable singleton infants. Methods: This retrospective cohort study of the Taiwan Premature Infant Follow-up Network database between 2010 and 2016 compared non-anomalous singleton deliveries (cesarean delivery [CD] vs vaginal delivery [VD]) between 22 0/7 and 25 6/7 gestational weeks. Major morbidities, mortality, and neurodevelopmental outcomes were evaluated at 2-year CA. Results: The CD and VD groups included 354 and 472 infants, respectively. The intraventricular hemorrhage (IVH) rate was lower in the CD group (54% vs 66%, P = 0.001), but severe IVH differed non-significantly between groups (20% vs 26%, P = 0.057). In the small-for-gestational age subgroup, CD was associated with lower IVH (56% vs 84%, adjusted odds ratio [aOR] 0.17, 95% confidence interval [CI] 0.04-0.69) and better survival without neurodevelopmental impairment (29% vs 8%, aOR, 6.64, 95% CI 1.02-43.29) after controlling for potential confounders. Conclusion: The optimal MOD for periviable singleton birth and its impact are unclear. CD in periviable singleton births is associated with a decreased IVH risk, without improvement in severe IVH, mortality, or neurodevelopment at 2-year CA. The small-for-gestational age subgroup may benefit from CD for better survival without neurodevelopmental impairment. abstract_id: PUBMED:24290393 Periviable births: epidemiology and obstetrical antecedents. Confusion in terminology and non-standardized definitions can arguably be classified as the most important characteristics that lead to disparate study findings. While such situations abound in all of perinatal epidemiology, the study of "periviable birth" is a classic case in point. The most common term used to describe periviable birth has been extremely low birthweight (ELBW), often defined as newborns with birthweight below 1000g; yet, we identified 11 other descriptors for periviable birth. Due to the multitude of definitions, the reported incidence of periviable births varies from 0.03% to 1.9% (a relative difference of 7025%). Antecedent risk factors for periviable birth, though geographically heterogeneous, include nulliparity and multiple gestations, each accounting for one-third and one-fourth of all periviable births, respectively. Spontaneous preterm labor precedes 34% of these deliveries and premature rupture of membranes in 25%. The pregnancy was complicated by hypertensive disease in 21% and bleeding and chorioamnionitis in 18% each. Over 50% of these births are cesarean deliveries. 
Six clinical interventions before delivery that may improve outcomes include transfer of patient to a tertiary center, administration of antenatal corticosteroids, tocolytics, or antibiotics after premature rupture of membranes, assessment of fetal well-being with electronic fetal heart rate monitoring, and willingness to perform a cesarean delivery after the limit of viability is reached. While there is an accessible predictive model for mortality and long-term morbidities for newborns delivered at 22-25 weeks with weights of 400-1000g, it is hampered by the need to know the birthweight. In addition to reaching a consensus regarding what birthweight-gestational age thresholds determine a periviable birth, the acute need to optimize neonatal outcomes remains a paramount target worthy of future research, and efforts to advance obstetrical medicine are needed to minimize the likelihood of its occurrence. Answer: The short-term outcome of periviable small-for-gestational-age (SGA) babies is a critical concern for clinicians when counseling parents about the prognosis and management options for these high-risk pregnancies. The study by PUBMED:27854384 found that almost 90% of periviable SGA cases are associated with uteroplacental insufficiency or intraplacental damage. The survival rate was 41% for the neonatal period, with some pregnancies reaching term, suggesting that outcomes may be better than assumed at diagnosis. This indicates that current counseling may need to be updated to reflect these survival rates and the potential for some pregnancies to reach term. The importance of accurate and up-to-date counseling is further emphasized by PUBMED:28477660, which highlights the need for objective information regarding risks and outcomes for both delivery and continued pregnancy near the limit of viability. The article also notes the importance of adjusting counseling based on brief pregnancy prolongation and changing clinical circumstances within the periviable period. PUBMED:35437890 adds that in periviable isolated SGA fetuses, Doppler evaluation of the umbilical and fetal brain circulation can accurately predict short-term adverse perinatal complications and serious adverse outcomes. This suggests that incorporating Doppler findings into counseling could improve the accuracy of prognostic information provided to parents. Moreover, PUBMED:24290396 points out that there is a lack of specific studies evaluating therapeutic interventions during the periviable period, which could impact the short-term outcomes of these infants. This gap in knowledge may affect the current counseling practices, as the impact of interventions like bed rest, tocolysis, progesterone, and cerclage on perinatal outcomes during the periviable period remains largely unknown. In conclusion, while some studies provide data that can inform counseling practices, there is a need for more research to fully understand the short-term outcomes of periviable SGA babies and the effectiveness of various interventions. Clinicians should ensure that counseling is based on the most current evidence, including survival rates and the potential for term pregnancies, as well as the predictive value of Doppler evaluations for adverse outcomes.
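The Doppler-based prediction figures cited above (87% detection rate, 14% false-positive rate, 86% accuracy in 155 pregnancies with 31 serious adverse outcomes) can be sanity-checked from an approximate 2x2 table. The counts below are reconstructed from the reported rates and are therefore estimates, not the study's raw data.

```python
# Worked check of the screening metrics quoted above. The 2x2 counts are
# back-calculated from the reported rates, so they are approximations.
events, non_events = 31, 124          # 155 pregnancies total
tp = round(0.87 * events)             # detected serious outcomes  -> 27
fp = round(0.14 * non_events)         # false alarms               -> 17
fn = events - tp                      # missed outcomes            -> 4
tn = non_events - fp                  # correct negatives          -> 107

sensitivity = tp / events                       # detection rate
specificity = tn / non_events                   # 1 - false-positive rate
accuracy = (tp + tn) / (events + non_events)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, accuracy={accuracy:.2f}")
# Output is close to the quoted 0.87 / 0.86 / 0.86, confirming the figures
# are internally consistent.
```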
Instruction: Does living in a female-headed household lower child mortality? Abstracts: abstract_id: PUBMED:21486098 Does living in a female-headed household lower child mortality? The case of rural Nigeria. Introduction: The high infant and child morbidity and mortality in most sub-Saharan African countries, and Nigeria in particular, is a prominent global concern. The objective of this study was to assess factors influencing, and the prevalence of, the experience of child death among rural Nigerian mothers, with the specific aim to investigate whether household headship had an impact on child death. Methods: Using data from the 2008 Nigeria Demographic and Health Survey, multivariate logistic regression methods were used to assess the influence of household headship and other associated variables among rural women who experienced child death (n=13,203) in the 5 years preceding the survey. Results: A total of 5632 women (43%) whose most recent birth occurred in the 5 years preceding the survey had reported the death of a child. Women who utilized health services were less likely to report child death than those who never utilized health services. Women who delivered their most recent child at home were more likely (46%; n=4565) to report child death compared with those who delivered in a health facility (32%; n=997). The women who resided in male-headed households had a significantly higher (43%; n=5143) prevalence of child death than women from female-headed households (37%; n=489). After controlling for all covariates in the multivariate logistic regression models, women from female-headed households were 17% less likely to experience child death (odds ratio=0.83; 95% confidence interval 0.71, 0.98) than women from male-headed households. Conclusions: The occurrence of child death is not unusual in rural Nigeria. Multiple frameworks are needed to account for differentials in child mortality. After controlling for other explanatory variables such as age, wealth status, region and place of delivery of recent birth, this study found that household headship remained a strong predictor of child mortality. Recommendations are provided according to the complex interplay of socio-cultural, economic, and situational factors affecting the survival of children in rural Nigeria. abstract_id: PUBMED:34264165 Life in a child-headed household: Exploring the quality of life of orphans living in child-headed households in Zimbabwe. The overwhelming rate of HIV-related mortality in the past decades has eroded traditional safety nets for orphans, of which the extended family is the most critical. This has led to the formation of new household structures, i.e. child- and youth-headed households, as children and youths are left to live alone and look after themselves. These types of households are a growing phenomenon in many African countries and are increasingly being recognised as household structures. However, in the absence of an economically active adult, orphans living in these households experience numerous challenges. The socio-economic challenges in Zimbabwe could further exacerbate the conditions of child-headed households. This article discusses the findings of an ethnographic inquiry on orphans living in child- and youth-headed households in Zimbabwe. We identified parentification, lack of income and social isolation as risk factors affecting quality of life. Social capital and agency were identified as protective factors enhancing children's quality of life.
Our findings suggest that orphans living in child- and youth-headed households are not only victims but are also resilient and can exercise agency. Therefore, interventions to support this vulnerable group should build on their existing resilience and agency. abstract_id: PUBMED:27621985 Household Headship and Infant Mortality in India: Evaluating the Determinants and Differentials. Background: There has been ample discussion on the levels and trends of infant mortality in India over time, but what remains less explored are the differentials in infant mortality according to household headship. This paper examined the differences in the determinants of infant mortality between male-headed households (MHH) and female-headed households (FHH). Methods: The study used a Cox proportional hazards model to examine the determinants of infant death, and the Kaplan-Meier estimation technique to examine the survival pattern during infancy, using data from the Indian National Family Health Survey (2005-06). The analysis is restricted to women who had at least one live birth in the five years preceding the survey. Results: The study observed that household size and number of children below five are significant risk factors of infant mortality in MHH, while length of previous birth interval is the only significant risk factor of infant death in FHH. Conclusions And Global Health Implications: The results indicate that children from FHH have higher survival probability at each age than children from MHH, irrespective of place of residence and sex of the child. abstract_id: PUBMED:36217188 Sex of household head and other household determinants of childhood anaemia among households in Ghana: regression analysis of the 2019 Malaria Indicator Survey. Background: Childhood anaemia is still a major public health concern. Although the prevalence of anaemia among children under age five is reducing in Ghana, the severity level is still worsening. This study sought to examine and compare household factors affecting the anaemia status of children under age five living in male- and female-headed households in Ghana. Methods: The study used a weighted sample of 5,799 household heads from the 2019 Ghana Malaria Indicator Survey (GMIS). A binary logistic regression analysis was used to examine the effect of sex of household heads and other household factors on the anaemia status of children under the age of five in male- and female-headed households in Ghana. All analyses were conducted at the 95% confidence level. Results: The results showed that a higher proportion (83.0%) of children under age five are not anaemic in households in Ghana. However, the probability of a child being anaemic is higher in male-headed households (aOR = 1.28; CI: 1.08-1.51), and in the poorest (aOR = 2.41; CI: 1.59-3.65), poorer (aOR = 2.04; CI: 1.41-2.94) and middle (aOR = 1.78; CI: 1.29-2.46) household wealth categories. Higher likelihood of anaemia was found among children in households that used charcoal (aOR = 1.51; CI: 1.15-1.99) and fuelwood (aOR = 1.44; CI: 1.02-2.02) for cooking. Similarly, there is a high probability of childhood anaemia in households with 5-10 members (aOR = 4.49; CI: 3.78-5.34), households with 11 or more members (aOR = 7.21; CI: 4.60-11.31) and households residing in the northern part of Ghana (aOR = 1.40; CI: 1.07-1.83).
Lower odds of being anaemic were recorded among children whose household heads were aged 40 years and older, in households using other cooking fuels (aOR = 0.49; CI: 0.21-0.78) and in households with no bednets (aOR = 0.57; CI: 0.44-0.74). Conclusions: The GMIS data suggest that anaemia and its severity are higher among children living in MHH than among those living in FHH. The results indicate that poverty, a higher number of household members, relatively younger male household heads and the type of cooking fuel used were factors accounting for the differences in childhood anaemia in MHH and FHH. Equal attention should be given to MHH and FHH in terms of programmes and interventions aimed towards preventing and reducing childhood anaemia in Ghana. abstract_id: PUBMED:37016627 Does the female-headed household suffer more than the male-headed from Covid-19 impact on food security? Evidence from Ethiopia. The Covid-19 pandemic and accompanying policy measures affected women's economic and social lives differently from men's. However, its effect on the food security situation of women and men remains unclear. This study investigated whether female-headed households suffer more from the Covid-19 impact on food security than male-headed households in Ethiopia, using the World Bank High Frequency Phone Survey data. The Food Insecurity Experience Scale (FIES) was applied to measure food insecurity, and a multilevel mixed-effects random-intercept linear model was used to analyze the determinants. The survey data show that, during the pandemic in 2020, nearly 27% of households faced moderate or severe food insecurity and close to 6% faced severe food insecurity. Regression results revealed that the pandemic and accompanying policy measures affected food security of female-headed households differently from male-headed households. The likelihood of female-headed households experiencing moderate or severe food insecurity is significantly higher, by 5.7 percentage points, than that of male-headed households. Education level of the household head is the critical determinant of women's food insecurity. Every additional year of education of female heads has a significantly higher impact in reducing the likelihood of experiencing food insecurity than that of male-headed households. It is also found that having access to basic food items reduces the likelihood of facing moderate or severe food insecurity by 7.4 percentage points. Besides, education level of the household head, income change and family size are critical determinants of food insecurity among the sample households. It is recommended that stakeholders and policy makers focus on the gender aspect while designing policy measures to control pandemics and focus on improving accessibility of basic needs such as food items to all. abstract_id: PUBMED:28717921 Female-Headed Households Associated with Lower Childhood Stunting Across Culturally Diverse Regions of Pakistan: Results from a Cross-Sectional Household Survey. Objectives: Early childhood stunting adversely influences long-term cognitive and health outcomes. There is limited evidence on whether female empowerment within households could reduce its prevalence. We investigated this relationship in Punjab, Pakistan, which has high stunting rates and a sizeable proportion of female-headed households, and whether this relationship differed across three provincial regions with diverse cultural attitudes towards the role of women in society.
Methods: Using cross-sectional data from the 2011 Multiple Indicator Cluster Survey, we identified 13,412 children aged 1-4 from 8985 two-parent households in three culturally distinct regions in Punjab, Pakistan. Logistic regression models assessed whether the likelihood of stunting was associated with female-headed households, a proxy for female empowerment, and whether this relationship differed by region. Regressions controlled for child- and household-level covariates. Results: Children had 26% lower odds of stunting among female-headed households (OR 0.74, CI 0.60, 0.90). The interaction term for female-headed households and child stunting by provincial region was not statistically significant, suggesting that the relationship holds across the three culturally distinct regions. Conclusions for Practice: Female empowerment was associated with lower rates of stunting among young children, and the results did not vary by provincial region. This suggests that women can play important roles as agents of change, even in areas where females have limited freedoms. Greater investments in public education and awareness campaigns to improve health literacy might have important spillover effects for child health and improve the success of existing public health interventions targeting childhood stunting. abstract_id: PUBMED:28992713 Household resources as determinants of child mortality in Ghana. Introduction: Although the association between child mortality and socioeconomic status is well established, the role of household assets as predictors of child mortality, over and above other measures of socioeconomic status, is not well studied in developing nations. This study investigated the contribution of several household resources to child mortality, beyond the influence of maternal education as a measure of socioeconomic status. Methods: This secondary analysis used data from the 2007 Ghana Maternal Health Survey to explore the relationship of child mortality to household resources. The analysis of 7183 parous women aged 15-45 years examined household resources for their association with maternal reports of any child's death for children aged less than 5 years using a survey-weighted logistic regression model while controlling for sociodemographic and health covariates.
This study aims to describe the challenges to health and well-being for young people living in child-headed households. Methods: A mixed-methods research approach linked common themes using qualitative and quantitative instruments to provide a broad picture of the location and challenges of CHH in Kabira, Kyotera and Kamuganja in the Rakai District of southern Uganda. Local knowledge was used to locate CHH. Results: 163 children living in 40 CHH were traced: 42.5% of the household heads were double orphans caring for younger siblings, and 43% were also caring for chronically ill or disabled grandparents who were economically unproductive and largely dependent on the eldest child for survival. It was found that those heading households were more likely not to attend school than children living at home with a parent. Their immediate needs ranged from food and shelter to health care and education. Fear was a major theme: 38% of those interviewed reported fear of 'violence'. Children as young as 13 were responsible for navigating through complex decision-making processes from everyday basic necessities to decisions on the health care of younger siblings and grandparents. Conclusion: Children and young people living in CHH are a largely invisible and highly vulnerable population. Clear, officially accepted definitions of CHH are a first step in recognising this vulnerable group for whom safeguards will be necessary as social work develops in lower- and middle-income countries (LMICs). The precise numbers of CHH are unknown and further examination of this undocumented group is needed. abstract_id: PUBMED:34430700 Are children in female-headed households at a disadvantage? An analysis of immunization coverage and stunting prevalence in 95 low- and middle-income countries. Studies of inequalities in child health have given limited attention to household structure and headship. The few existing reports on child outcomes in male- and female-headed households have produced inconsistent results. The aim of our analyses was to provide a global view of the influence of sex of the household head on child health in cross-sectional surveys from up to 95 LMICs. Studied outcomes were full immunization coverage in children aged 12-23 months and stunting prevalence in under-five children. We analyzed the most recent nationally-representative surveys for each country (since 2010) with available data. After initial exploratory analyses, we focused on three types of households: a) male-headed household (MHH) comprised 73.1% of all households in the pooled analyses; b) female-headed household (FHH) with at least one adult male represented 9.8% of households; and c) FHH without an adult male accounted for 15.0% of households. Our analyses also included the following covariates: wealth index, education of the child's mother and urban/rural residence. Meta-analytic approaches were used to calculate pooled effects across the countries with MHH as the reference category. Regarding full immunization, the pooled prevalence ratio for FHH (any male) was 0.99 (0.97; 1.01) and that for FHH (no male) was 0.99 (0.97; 1.02). For stunting prevalence, the pooled prevalence ratio for FHH (any male) was 1.00 (0.98; 1.02) and for FHH (no male) was 1.00 (0.98; 1.02). Adjustment for covariates did not lead to any noteworthy change in the results. No particular patterns were found among different world regions.
A few countries presented significant inequalities with different directions of association, indicating the diversity of FHH and how complex the meaning and measurement of household headship may be. Further research is warranted to understand context, examine mediating factors, and explore alternative definitions of household headship in countries where some association was found. abstract_id: PUBMED:1939292 Effect of education and household characteristics on infant and child mortality in urban Nepal. Infant and child mortality differentials are analysed by education of parents and other family members, access to toilet, electricity and source of drinking water in urban Nepal, using data from the Nepal Fertility and Family Planning Survey, 1986. The analyses showed significant effects of education, access to toilet and electricity in lowering infant and child mortality. Access to toilet and electricity are proxies for household socioeconomic status, which suggests that education and household resources are complementary in lowering infant and child mortality. Answer: Yes, living in a female-headed household appears to lower child mortality according to several studies. In rural Nigeria, after controlling for covariates such as age, wealth status, region, and place of delivery, women from female-headed households were found to be 17% less likely to experience child death compared to those from male-headed households (PUBMED:21486098). Similarly, in India, children from female-headed households had a higher survival probability at each age than those from male-headed households, irrespective of place of residence and sex of the child (PUBMED:27621985). In Pakistan, children had 26% lower odds of stunting, which is a key indicator of child health and survival, in female-headed households, and this relationship held across three culturally distinct regions (PUBMED:28717921). These findings suggest that female headship of households may be associated with certain protective factors that contribute to lower child mortality rates.
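The studies in this record all lean on adjusted odds ratios from logistic regression. Below is a hedged sketch of how such an estimate and the region-interaction test described above might be produced; it is not any author's actual analysis code, the file and all variable names (stunted, fhh, region, child_age, wealth, mother_edu) are invented, and the survey design weights several of these studies use are ignored for brevity.

```python
# Minimal sketch, assuming a hypothetical child-level data set.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("children.csv")  # hypothetical file

# Main-effects model: exponentiated coefficients are adjusted odds ratios.
main = smf.logit("stunted ~ fhh + C(region) + child_age + wealth + mother_edu",
                 data=df).fit()
print(np.exp(main.params["fhh"]))          # adjusted OR for female headship
print(np.exp(main.conf_int().loc["fhh"]))  # its 95% CI on the OR scale

# Interaction model: a non-significant fhh:C(region) term is what supports
# the claim that the association does not differ across regions.
inter = smf.logit("stunted ~ fhh * C(region) + child_age + wealth + mother_edu",
                  data=df).fit()
print(inter.summary())
```

An odds ratio of 0.74 from such a model is what the first abstract reports as 26% lower odds of stunting (1 - 0.74 = 0.26).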
Instruction: Does the use of preoperative antiglaucoma medications influence trabeculectomy success? Abstracts: abstract_id: PUBMED:24918962 Does the use of preoperative antiglaucoma medications influence trabeculectomy success? Purpose: To investigate the influence of preoperative antiglaucoma medications on trabeculectomy outcome. Methods: Two hundred fifteen eyes, which underwent primary trabeculectomy, were retrospectively analyzed. The average follow-up was 39.8±30.3 months. Only cases of primary open-angle glaucoma, with or without pseudoexfoliation (PXF), were included. "Complete success" was defined as intraocular pressure (IOP) <18 mmHg without glaucoma medications, whereas relative success was defined as the same IOP target with medications. The influence of the preoperatively used glaucoma medications on surgical success was analyzed by univariate Pearson correlation and multivariate (ordinal) regression analysis. Results: There were 118 male (54.9%) and 97 female (45.1%) patients with a mean age of 66.9±9.3 years. PXF glaucoma (PXFG) was present in 93 eyes (43.3%). In 33 patients (15.3%), diabetes mellitus (DM) was present. Complete success was achieved in 116 eyes (54%), relative success in 81 eyes (37.6%), and failure in 18 eyes (8.4%). Neither the total number nor the duration of glaucoma medications used before trabeculectomy was found to have any statistically significant influence on surgical success. In statistical analysis, a combination of topical beta-blocker and carbonic anhydrase inhibitor (BB+CAI) used before surgery was found to be associated with statistically better outcome, whereas the preoperative use of topical beta-blockers alone could have a negative influence on success. PXF was shown to be independently associated with trabeculectomy outcome on multivariate regression analysis. Conclusion: The glaucoma medications used preoperatively were not found to have any statistically significant negative influence on trabeculectomy outcome; use of the combined BB+CAI preparation could have a positive influence, whereas use of topical beta-blockers alone could have a negative influence on success, although the latter was not statistically significant. The presence of PXF was independently associated with a better surgical outcome. abstract_id: PUBMED:15747760 Does medical treatment influence the success of trabeculectomy? Purpose: Many ophthalmologists believe that long-term use of topically applied glaucoma medications can adversely affect results of fistulizing surgery. This presentation critically analyzes the published studies most often cited in support of this view to determine whether this conclusion is justified. Methods: Morphologic effects of long-term treatment with antiglaucoma drugs on the conjunctiva and Tenon's capsule in glaucomatous patients have been studied. The results of these studies encouraged investigators to examine the influence of prior therapy on the success of trabeculectomy performed in patients with open-angle glaucoma. From this work, many have concluded that long-term use of topically applied glaucoma medications can adversely affect the results of fistulizing surgery. These results and conclusions are summarized and critically analyzed to determine whether this conclusion is justified. Results: Morphologic studies describe increased numbers of macrophages, fibroblasts, lymphocytes, and mast cells in conjunctival and Tenon's capsule specimens taken from patients receiving long-term antiglaucoma drugs.
These findings suggest a potential for more inflammation and subsequent scarring following trabeculectomies in these patients. Efforts to confirm the clinical relevance of these histologic findings in open-angle glaucoma patients with a history of long-term antiglaucoma medication prior to surgery have been published. These retrospective, nonrandomized, unmasked studies of open-angle glaucoma patients include treatment groups and surgeries that are not comparable. In addition, the medical treatments within these studies do not reflect our current approaches to the medical management of open-angle glaucoma. Conclusions: At present, there is no convincing clinical evidence that long-term medical treatments influence the success of contemporary trabeculectomy surgery performed on open-angle glaucoma patients. abstract_id: PUBMED:31225428 Midterm outcome of single scleral suture technique in trabeculectomy and phacotrabeculectomy: a simplified approach. Purpose: To report the outcome of trabeculectomy performed by single scleral suture in Indian eyes. Methods: Non-randomized, non-masked, retrospective cohort analysis. Medical records of all consecutive patients who underwent trabeculectomy between 1 January 2011 and 31 December 2016 at CL Gupta Eye Institute, Moradabad (India), were reviewed and analyzed. Trabeculectomy was performed using a single scleral suture for flap closure. The suture knot was buried inside the sclera. Laser lysis of the scleral suture was not performed. Success was defined as complete success if intraocular pressure (IOP) was >5 and ≤21 mmHg without any glaucoma medications or re-surgery. Qualified success was defined as IOP ≤21 mmHg with or without antiglaucoma medications. Results: Data from 98 patients were analyzed. A total of 61 (62.2%) of them underwent trabeculectomy, and 37 (37.7%) underwent phacotrabeculectomy. The probability of complete success was 86% at 3 years in patients who underwent phacotrabeculectomy and 95% at 3 years in patients who underwent trabeculectomy alone. The probability of qualified success was 91% at 3 years in patients who underwent phacotrabeculectomy and 97% at 3 years in patients who underwent trabeculectomy alone. Conclusion: The mean postoperative IOP was significantly lower than the mean preoperative IOP at all follow-up visits. This simplified technique also resulted in high midterm success probabilities with low complication rates. abstract_id: PUBMED:34932180 Evaluation of Adverse Effects of Topical Glaucoma Medications on Trabeculectomy Outcomes Using the Glaucoma Medications Intensity Index. Introduction: Trabeculectomy is commonly performed for glaucoma when medications are unable to control disease progression or have intolerable adverse effects. Previous studies have suggested that a higher number of and/or longer treatment duration with preoperative topical glaucoma medications are associated with a higher risk of trabeculectomy failure, but most of these studies lack quantification of exposure. The aim of this study was to investigate the relationship between preoperative exposure to topical glaucoma medications and trabeculectomy outcome, using a new method for quantifying accumulated exposure. Methods: Consecutive patients with primary open-angle glaucoma (POAG) or normal-tension glaucoma (NTG) who underwent primary trabeculectomy between 2013 and 2017 were retrospectively reviewed. The Glaucoma Medications Intensity Index (GMII) was calculated for each eye by multiplying the number of drops per week by duration of use (in years).
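As a hedged illustration of the index arithmetic just defined: the example regimen below is invented, and summing across agents in a multi-drug regimen is our assumption rather than a detail confirmed by the abstract.

```python
# Minimal sketch of the GMII: drops per week multiplied by years of use.
def gmii(regimen):
    """regimen: list of (drops_per_day, days_per_week, years_used) per agent."""
    return sum(d * w * y for d, w, y in regimen)

# Hypothetical eye: a twice-daily beta-blocker for 6 years (14 drops/week)
# plus a once-daily prostaglandin analogue for 4 years (7 drops/week).
print(gmii([(2, 7, 6.0), (1, 7, 4.0)]))  # 2*7*6 + 1*7*4 = 112
```

A value of 112 sits near the mean GMII reported below for eyes with a successful outcome (111.71), while failed eyes averaged 167.41.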
The relationship between the GMII and postoperative outcome in terms of 1- and 2-year success rates and survival time was analyzed. Results: A total of 55 eyes from 40 subjects were analyzed, all with follow-up > 6 months (mean 2.72 ± 1.46 years). The GMII for eyes with successful (n = 41) and failed (n = 14) outcome at last visit was 111.71 ± 78.59 and 167.41 ± 85.04, respectively, and significantly higher in failed eyes (P = 0.03). Univariate regression analysis of age, gender, cup-disc ratio, previous phacoemulsification, diabetes, hypertension, dyslipidemia, preoperative number of glaucoma medications/treatment duration/intraocular pressure (IOP), and GMII showed age and GMII to be possible predictors of treatment failure. On subsequent multivariate analysis, only GMII was correlated with failure (odds ratio 1.021, 95% confidence interval 1.00-1.05; P = 0.05). When GMII ≥ 80, the postoperative survival time was shorter (P = 0.02), the 1-year IOP, number of glaucoma medications, and number of needlings performed were higher (P = 0.03, P < 0.01, P < 0.03, respectively), and the reduction in glaucoma medication was smaller (P = 0.02). Conclusion: The GMII can be used to identify eyes at higher risk of trabeculectomy failure that may benefit from additional perioperative intervention or treatment. It can also help the surgeon time the surgery before the GMII becomes too high, thereby optimizing the patient's postoperative outcome. abstract_id: PUBMED:38421291 Comparative study of the outcomes of newly developed polypropylene-based modified trabeculectomy and glaucoma shunt surgery in neovascular glaucoma. Background: Neovascular glaucoma (NVG) is a secondary glaucoma with a poor visual prognosis. Trabeculectomy with antifibrotic agents, glaucoma drainage devices (GDDs), and cyclo-destructive procedures are recommended in patients who are refractory to medical management. However, due to the poor success rate of conventional trabeculectomy and the higher cost of GDDs, alternative procedures need to be looked at. Purpose: To compare the surgical outcomes and economic aspects of a newly developed polypropylene suture bed-based modified trabeculectomy to Ahmed glaucoma valve (AGV) implantation for NVG. Methods: It was a prospective interventional study conducted at a tertiary care center between 2018 and 2020. Consecutive patients with NVG with a minimum follow-up of 18 months were included. Surgical outcomes were mainly based on intraocular pressure (IOP) control and the cost of surgery. Results: Sixty eyes were included, of which 40 (66.7%) underwent modified trabeculectomy and 20 (33.3%) underwent AGV. At the final follow-up, no significant difference (P > 0.05) was found between the surgical outcomes of the two groups. The complete success rate (IOP < 21 mm Hg without antiglaucoma medications) was 60% and 65%, while the qualified success rate (IOP < 21 mm Hg with antiglaucoma medications) was 30% and 25%, in the modified trabeculectomy and AGV groups, respectively, at final follow-up. The cost of surgery was significantly higher in the AGV group (P < 0.0001). Conclusion: Modified trabeculectomy as described might be a better alternative for NVG eyes. abstract_id: PUBMED:36897481 Two-stage intra-tenon injection versus sponge-applied mitomycin C-augmented trabeculectomy: a one-year study. Purpose: Mitomycin C (MMC) is normally used to avoid scar formation in trabeculectomy. There has been a shift from conventional delivery via soaked sponges to preoperative injection of MMC.
This study aimed to compare the effectiveness of a modified two-stage low-dose intra-Tenon injection with soaked sponges of MMC for trabeculectomy over a 1-year follow-up period. Methods: This retrospective study enrolled patients with glaucoma undergoing modified trabeculectomy with a two-stage intra-Tenon injection (0.01%, 0.1 mL) or soaked sponges (0.02%) of MMC. In the former group, patients received intra-Tenon injection of MMC (the first stage) at least 4 h before trabeculectomy (the second stage). Patient characteristics, preoperative and postoperative intraocular pressure, antiglaucoma medication use, complications, and post-trabeculectomy surgical interventions were recorded during a 1-year follow-up period. Results: There were 36 and 35 eyes in the injection and sponge groups, respectively, in 58 patients. The injection group showed significantly lower intraocular pressure (p < 0.05) at every time point except on postoperative day 1 and week 1, fewer medications at the 1-year follow-up (p = 0.018), and a higher complete success rate (p = 0.011) than the sponge group. Both techniques showed a significant reduction in intraocular pressure and medication use at the 1-year follow-up. There were no significant differences in complications between the two groups. Conclusion: Our two-stage intra-Tenon MMC injection technique resulted in lower postoperative intraocular pressure, less antiglaucoma medication use, and fewer needling revisions compared to the sponge technique. abstract_id: PUBMED:29032434 The influence of mitomycin C concentration on the outcome of trabeculectomy in uveitic glaucoma. Purpose: To evaluate the influence of mitomycin C (MMC) concentration on the outcome of trabeculectomy in uveitic glaucoma. Methods: This retrospective comparative study included 50 patients who underwent MMC-enhanced trabeculectomy for uncontrolled uveitic glaucoma. Patients were divided into two groups based on the concentration of MMC used during trabeculectomy (high [0.04%] or low [0.02%]). The main outcome measures were: the intraocular pressure (IOP), the number of antiglaucoma medications, the need for further glaucoma surgeries, and trabeculectomy success. Results: There were no differences in IOP or antiglaucoma medications. The success rates were 76.0% and 68.0%, while the failure rates were 24.0% and 32.0%, in the 0.02% and 0.04% MMC groups, respectively. Conclusion: Trabeculectomy augmented with a low (0.02%) MMC concentration seems to have comparable outcomes in uveitic glaucoma compared with the high (0.04%) concentration. abstract_id: PUBMED:33499665 The significance of ultrasound biomicroscopy in the objective assessment of post-trabeculectomy bleb success. Purpose: To compare clinical characteristics and ultrasound biomicroscopy (UBM) measurements of cases with functional and non-functional blebs, and to identify potential risk factors capable of affecting UBM parameters. Methods: Thirty-one patients aged 40-79 were included. Following trabeculectomy, cases were divided into two groups: those with functional bleb (Group 1, n = 20) and those with non-functional bleb (Group 2, n = 11). UBM was performed. Results: In Group 1 compared to Group 2, lower postoperative intraocular pressure (IOP) (12.1 ± 1.7 vs 27.2 ± 3.2 mmHg, p < 0.001), greater bleb height (1.0 ± 0.2 vs 0.5 ± 0.3 mm, p < 0.001), greater scleral route visibility [16/20 (80%) vs 4/11 (36.4%), p = 0.023] and lower bleb reflectivity (1.8 ± 0.2 vs 2.4 ± 0.4, p = 0.015) were observed.
In Group 1, as postoperative IOP decreased, bleb height and scleral route visibility increased (r = -0.387, p = 0.029 for bleb height; r = -0.374, p = 0.033 for scleral route), and bleb reflectivity decreased (r = 0.359, p = 0.042). In Group 1, as duration of preoperative antiglaucoma drug use increased, bleb reflectivity increased (r = 0.348, p = 0.046). Preoperative IOP and the number of preoperative antiglaucoma drugs used had no effect on UBM parameters in either group (p > 0.05). Conclusion: UBM is useful in assessing bleb success. Lower bleb reflectivity, greater bleb height and greater scleral route visibility indicate that the bleb is functional. To the best of our knowledge, this is the first study investigating the effects of preoperative IOP and of the number and duration of preoperative antiglaucoma drug use on UBM parameters. In cases with functional bleb, duration of preoperative drug use affects only bleb reflectivity, while postoperative IOP affects bleb height, scleral route visibility and bleb reflectivity. abstract_id: PUBMED:34928769 Success Rate of Augmented Trabeculectomy in Primary Congenital Glaucoma. Purpose: To evaluate the outcome of trabeculectomy augmented with mitomycin C in primary congenital glaucoma and to document the complications of the procedure in young children. Methods: This was a prospective study of children younger than 2 years with primary congenital glaucoma who were treated with primary trabeculectomy with mitomycin C. After surgery, patients were observed for a minimum of 1 year. Trabeculectomy success was defined as an intraocular pressure (IOP) of 21 mm Hg or less without (absolute success) or with (qualified success) topical antiglaucoma medications. Surgical success was assessed at 1, 6, and 12 months after the procedure. Results: Seventy-four eyes of 42 children were included in the current study. The mean age was 11.7 ± 8.5 months. Of the 42 children, 57.2% were male and 42.8% were female. The mean IOP after surgery was significantly lower than the preoperative IOP (P < .0001). Absolute success was observed in 98.6%, 50%, and 27% of eyes at 1, 6, and 12 months, respectively. Qualified success was achieved in 1.4%, 36.5%, and 48.6% of eyes at 1, 6, and 12 months, respectively. Overall success of the procedure was 100% at 1 month but reduced to 86.5% at 6 months and 75.7% at 12 months. In 24.3% of eyes, IOP was not controlled even with adjunctive topical glaucoma medications and was considered a failure. Postoperative complications were shallow anterior chamber (10.8%), collapsed anterior chamber (1.3%), and choroidal detachment (12.0%). Complications were managed conservatively, and 6 eyes needed surgical intervention. Conclusions: Augmented trabeculectomy is a useful primary procedure in children with primary congenital glaucoma. Topical glaucoma medications supplement the success of the procedure. [J Pediatr Ophthalmol Strabismus. 2022;59(3):180-186.]. abstract_id: PUBMED:2244836 The influence of prior therapy on the success of trabeculectomy. The role of early surgery in the management of primary open angle glaucoma is under debate. To determine whether previous medical therapy influences the outcome of subsequent trabeculectomy, we retrospectively reviewed the results of surgery in two groups of patients.
The first group underwent primary trabeculectomy, having had an average of 2 weeks of preoperative medical therapy, and this group was compared with a group of patients who had received at least 1 year of topical glaucoma therapy before undergoing trabeculectomy (the multiple-treatment group). The two groups were similar in terms of a number of variables, including race, age, sex, presenting intraocular pressures, and presenting visual fields, and they differed only in the known duration of their disease. The success rate of trabeculectomy was significantly higher in the primary trabeculectomy group as compared with that in the multiple-treatment group (P < .001). We discuss the possible reasons for this difference and its implications for the future management of primary open angle glaucoma. Answer: The influence of preoperative antiglaucoma medications on trabeculectomy success has been a subject of investigation, but the evidence is not entirely conclusive. Some studies suggest that there is no statistically significant negative influence of preoperative glaucoma medications on trabeculectomy outcomes. For instance, a study with 215 eyes undergoing primary trabeculectomy found that neither the total number nor the duration of glaucoma medications used before surgery had a statistically significant influence on surgical success (PUBMED:24918962). Similarly, another study critically analyzing published studies concluded that there is no convincing clinical evidence that long-term medical treatments influence the success of contemporary trabeculectomy surgery performed on open-angle glaucoma patients (PUBMED:15747760). However, other research has indicated that there might be a relationship between preoperative exposure to topical glaucoma medications and trabeculectomy outcomes. A study using the Glaucoma Medications Intensity Index (GMII) found that a higher GMII was significantly associated with trabeculectomy failure, suggesting that accumulated exposure to glaucoma medications could be a risk factor for unsuccessful surgery (PUBMED:34932180). Additionally, a retrospective study of ultrasound biomicroscopy in post-trabeculectomy bleb assessment indicated that a longer duration of preoperative antiglaucoma drug use could increase bleb reflectivity, whereas lower reflectivity characterizes functional blebs (PUBMED:33499665). A study of trabeculectomy in primary congenital glaucoma, meanwhile, showed that augmented trabeculectomy was a useful primary procedure, with adjunctive topical glaucoma medications supplementing its success (PUBMED:34928769). Overall, while some studies suggest that preoperative antiglaucoma medications do not significantly influence trabeculectomy success, others indicate that the intensity and duration of preoperative medication use may have an impact on surgical outcomes. Further research with standardized methodologies and quantification of medication exposure may be needed to clarify these findings.
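One small numeric point worth making explicit: the per-unit odds ratio of 1.021 reported for the GMII looks negligible, but under the standard logistic-regression convention it compounds multiplicatively across the index range. A hedged back-of-the-envelope illustration, using only numbers quoted above:

```python
# Sketch: scaling a per-unit odds ratio across an observed GMII gap.
or_per_unit = 1.021
gap = 167.41 - 111.71   # mean GMII of failed minus successful eyes
print(round(or_per_unit ** gap, 1))  # ~3.2-fold higher odds of failure
```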
Instruction: Do participation and personalization matter? Abstracts: abstract_id: PUBMED:36818664 Reimagining health services provision for neglected groups: The "personalization from below" phenomenon. How can data-driven citizen science activities supporting health research and services provision meet the needs of unrepresented and neglected groups through increased personalization? In this short Perspective, we explore "personalization from below" as a concept designating forms of citizen science-based data altruism that specifically push for and enact a different understanding of both health services and personalization. We develop the argument that such a phenomenon taking place outside "institutionalized" health-related practices could make health services provision more inclusive of values that matter to people. We contextualize instances of "personalization from below," discuss related data governance models and alternative public health interventions, and conclude by outlining three key arguments in favor of "personalization from below" and future research avenues. abstract_id: PUBMED:38013839 Personalization: a new political arithmetic? Scholarship on the history of political arithmetic highlights its significance for classical liberalism, a political philosophy in which subjects perceive themselves as autonomous individuals in an abstract system called society. This society and its component individuals became intelligible and governable in a deluge of printed numbers, assisted by the development of statistics, the emergence of a common space of measurement, and the calculation of probabilities. Our proposal is that the categories, numbers, and norms of this political arithmetic have changed in a ubiquitous culture of personalization. Today's political arithmetic, we suggest, produces a different kind of society, what Facebook CEO Mark Zuckerberg calls the 'default social'. We address this new social as a 'vague whole' and propose that it is characterized by a continuous present, the contemporary form of simultaneity or way of being together that Benedict Anderson argued is fundamental to any kind of imagined community. Like the society imagined in the earlier arithmetic, this vague whole is an abstraction that obscures forms of stratification and discrimination. abstract_id: PUBMED:35558170 Mitigating Issues With/of/for True Personalization. A common but false perception persists about the level and type of personalization in the offerings of contemporary software, information systems, and services, known as Personalization Myopia: this involves a tendency for researchers to think that there are many more personalized services than there genuinely are, for the general audience to think that they are offered personalized services when they really are not, and for practitioners to have a mistaken idea of what makes a service personalized. And yet in an era that mashes up large amounts of data, business analytics, deep learning, and persuasive systems, true personalization is a most promising approach for innovating and developing new types of systems and services, including support for behavior change. The potential of true personalization is elaborated in this article, especially with regard to persuasive software features and the oft-neglected fact that users change over time. abstract_id: PUBMED:32570671 Service Personalization in IT-Supported Finnish Healthcare.
This paper reports a case study of the spontaneous personalization discussions that emerged from interviews with healthcare professionals when asked about their work practices and the role of information technology (IT) during consultations. We thematically analyzed the personalization elements using an existing personalization framework to provide insights into service personalization. Our results contribute to the better design of IT solutions that can support the personalization of health services. abstract_id: PUBMED:22943560 Creating an impersonal NHS? Personalization, choice and the erosion of intimacy. Background: Personalization - most often understood in terms of granting patients greater opportunity to participate in, and make choices about, the services they receive - has become a key principle guiding reform of the English NHS. Objective: This study sets out to explore the relationship between two senses of the term 'personal' within the context of personalization. Firstly, much of the policy literature equates a 'personal' service with one that is responsive to the choices of individual patients. Secondly, the term 'personal' can be thought to refer to the intimate relationships between patients and medical professionals that have typified traditional models of good practice. Methodology And Discussion: I combine a review of the relevant academic and policy literature on personalization with a process of conceptual analysis to uncover three arguments, which suggest that personalization based on choice may adversely affect standards of care by eroding the qualities of intimacy at the heart of the care process. Thus, an unintended consequence of the drive for personalization may be the creation of an NHS that is, in an important sense, less personal than it once was. Conclusion: Whilst personalization may deliver many potential benefits, the tension between promoting patient choice and retaining intimate professional-patient relationships ought to be taken seriously. Thus, the task of promoting choice whilst retaining intimacy represents a key policy challenge for advocates of personalization. abstract_id: PUBMED:37361673 "Creatures of habit": influential factors to the adoption of computer personalization and accessibility settings. Purpose: Customizing the accessibility and ease-of-use settings of a computer can improve the overall user experience for people with and without disabilities. However, the adoption of such settings remains poor. This study investigates which factors influence people as they personalize and when they choose to integrate those changes into everyday use. Methods: To better understand how these factors may impact people's personalization process, we conducted a series of interviews involving 15 people with and without disabilities across several months in 2020, when computer usage became required due to the COVID-19 lockdown. We used grounded theory to analyze 49 semi-structured interviews with participants. During these interviews, people reflected upon: past attempts at personalization using built-in accessibility or ease-of-use features of their Operating System (OS) or other assistive technologies (AT), current customization activities using the personalization software Morphic, and future opportunities for personalization systems and features. Results: We identified several barriers, facilitators, and perpetuating factors that can influence why and when people choose to adopt and integrate their personalization changes.
We also outline the overall personalization lifecycle, which illustrates when various factors may impact computer personalization. Conclusion: Personalization activities are complex and easily affected by an ecosystem of influential factors that surround them. When paired with the three design considerations and ten lessons learned from this qualitative study, the overall personalization lifecycle may be useful during the design and development of future personalization systems or features for people with and without disabilities. abstract_id: PUBMED:32841439 The development and psychometric testing of three instruments that measure person-centred caring as three concepts - Personalization, participation and responsiveness. Aim: To develop and test the psychometric properties of three instruments that measure Person-centred Caring as three concepts: Personalization, Participation and Responsiveness. Design: A three-phase mixed methods design used two frameworks: content validity determination and quantification; consensus-based standards for selection of health measurement instruments. Methods: A narrative literature review identified the domain definition. A systematic review of instruments provided the basis for item pools, which were refined by focus groups (N = 4) of multidisciplinary staff and service users (N = 25) and cognitive interviews (N = 11) with service users. Scale content validity indexes were calculated. Three cross-sectional surveys were conducted between April 2015 and June 2016. The instruments' psychometric properties tested included factor structure, internal consistency and construct validity. Convergent validity was tested, hypothesizing that: Personalization related to relational empathy; Participation related to empowerment; and Responsiveness related to trust. Results: Scale content validity indexes were ≥0.96 in all instruments. Response rates were 24% (N = 191), 15% (N = 108) and 19% (N = 124). Two factors were revealed for the Personalization and Responsiveness instruments and one factor for the Participation instrument. All had acceptable reliability (Cronbach's alpha >0.7), construct validity (>50%), and convergent validity (Spearman's correlation coefficient >0.25, p < 0.05). Conclusion: This study composed definitions and instruments that reflect the multidisciplinary teams' caring behaviours, which have acceptable reliability and validity in the community population. Further psychometric testing of the Participation and Responsiveness instruments should be undertaken with a larger sample. Impact: The instruments can be used to monitor the variability of multidisciplinary teams' caring behaviours; research effective interventions to improve caring behaviours; and increase understanding of the impact of caring on health outcomes. abstract_id: PUBMED:27050840 MAPx (Mobility Aid Personalization): examining why older adults "pimp their ride" and the impact of doing so. We all do this. We personalize things. We buy leopard-printed seat covers and fuzzy dice for our cars, and display action figures and photographs in our offices. The mobility aid personalization (MAPx) project studies older adults who have extended this process of personalization to their mobility devices, examining MAPx and its impact on the health and mobility of older adults. Using a qualitative research design, field observations and interviews were conducted with 72 older adults to gain an in-depth understanding of device customization from an emic (insider's) perspective.
Findings illustrate that older adults personalize their devices for reasons of fun, function and fashion. MAPx - the process of purposefully selecting or modifying a mobility device to suit individual needs and preferences - was also found to promote health and mobility by encouraging device acceptance, increasing social participation, enhancing joy and preserving identity. MAPx makes an important contribution to our understanding of the complex relationship between older adults and assistive devices and provides a new approach to some old problems including falls, inactivity and social isolation. Encouraging MAPx is a promising rehabilitation strategy for promoting health and community mobility among the older adult population. Implications for Rehabilitation: Personalizing an assistive device facilitates device acceptance, promotes health and well-being, and should be supported and encouraged in rehabilitative care. Choice, variety and access are critical aspects of assistive devices; vendors, manufacturers and practitioners should work together to provide clients with a greater range of affordable options for new devices. Function is more than mechanical or physical; social factors including social identity, stigma and social roles must be adequately considered and made explicit in rehabilitative practice. abstract_id: PUBMED:31697237 The Personalization of Conversational Agents in Health Care: Systematic Review. Background: The personalization of conversational agents with natural language user interfaces is seeing increasing use in health care applications, shaping the content, structure, or purpose of the dialogue between humans and conversational agents. Objective: The goal of this systematic review was to understand the ways in which personalization has been used with conversational agents in health care and characterize the methods of its implementation. Methods: We searched on PubMed, Embase, CINAHL, PsycInfo, and ACM Digital Library using a predefined search strategy. The studies were included if they: (1) were primary research studies that focused on consumers, caregivers, or health care professionals; (2) involved a conversational agent with an unconstrained natural language interface; (3) tested the system with human subjects; and (4) implemented personalization features. Results: The search found 1958 publications. After abstract and full-text screening, 13 studies were included in the review. Common examples of personalized content included feedback, daily health reports, alerts, warnings, and recommendations. The personalization features were implemented without a theoretical framework of customization and with limited evaluation of their impact. While conversational agents with personalization features were reported to improve user satisfaction, user engagement and dialogue quality, the role of personalization in improving health outcomes was not assessed directly. Conclusions: Most of the studies in our review implemented the personalization features without theoretical or evidence-based support for them and did not leverage the recent developments in other domains of personalization. Future research could incorporate personalization as a distinct design factor with a more careful consideration of its impact on health outcomes and its implications on patient safety, privacy, and decision-making. abstract_id: PUBMED:33733139 Evaluating Personalization: The AB Testing Pitfalls Companies Might Not Be Aware of-A Spotlight on the Automotive Sector Websites.
The importance of companies' websites as instruments for relationship marketing activities is well known both in academia and in industry. In recent decades, there has been great interest in studying how technology can be used to influence people's attitudes and motivate behavior change. With this, web personalization has attracted increasing research and practitioner interest. However, the evaluation of user interaction with companies' websites and personalization effects remains an elusive goal for organizations. Online controlled experiments (A/B tests) are one of the most commonly known and used techniques for this online evaluation. And, while there is clearly value in evaluating personalized features by means of online controlled experiments, there are some pitfalls to bear in mind while testing. In this paper we present five experimentation pitfalls, first identified on an automotive company's website and found to be present in other sectors, that are particularly important or likely to appear when evaluating personalization features. In order to obtain the listed pitfalls, different methods have been used, including a literature review, direct and indirect observation within organizations in the automotive sector, and a set of interviews with organizations from other sectors. Finally, the list of five resulting pitfalls is presented and some suggestions are made on how to avoid or mitigate each of them. Answer: Participation and personalization are indeed significant concepts in various fields, including healthcare and technology. Personalization refers to tailoring services, products, or interactions to individual preferences and needs, which can enhance user satisfaction and engagement and potentially improve health outcomes (PUBMED:31697237). In healthcare, personalization can lead to more inclusive health services that align with values important to people, especially neglected groups, by utilizing citizen science-based data altruism (PUBMED:36818664). Personalization in IT-supported healthcare has been studied, with findings contributing to the design of IT solutions that support health service personalization (PUBMED:32570671). Participation, on the other hand, relates to the involvement of individuals in the decision-making processes that affect them. It is linked to empowerment and can be associated with improved health outcomes when individuals actively engage in their care (PUBMED:32841439). The concept of participation is also crucial in the context of personalization, as it allows individuals to have a say in how services are tailored to their needs. However, there is a tension between promoting patient choice (participation) and retaining intimate professional-patient relationships, which are at the heart of the care process. Personalization based on choice may inadvertently lead to an impersonal healthcare system if not managed carefully (PUBMED:22943560). Additionally, the adoption of personalization settings in technology, such as computer accessibility features, can be influenced by various factors, and understanding these can help in designing systems that people will integrate into their daily lives (PUBMED:37361673). In summary, participation and personalization are important because they can lead to more effective and satisfying experiences for individuals, whether in healthcare or technology.
However, they must be implemented thoughtfully to avoid potential negative consequences, such as the erosion of intimate relationships in healthcare or the underutilization of personalization features in technology.
Instruction: Mitraclip therapy in patients with functional mitral regurgitation and missing leaflet coaptation: is it still an exclusion criterion? Abstracts: abstract_id: PUBMED:27102848 Mitraclip therapy in patients with functional mitral regurgitation and missing leaflet coaptation: is it still an exclusion criterion? Aims: The aim of this study was to investigate the feasibility, safety, and efficacy of Mitraclip therapy in patients with functional mitral regurgitation (MR) and missing leaflet coaptation (MLC). Methods And Results: Out of 62 consecutive patients with functional MR undergoing Mitraclip implantation, 22 had MLC, defined as the presence of a 'gap' between two mitral leaflets or insufficient coaptation length (<2 mm), according to the EVEREST II criterion. Compared with the control group, the MLC population had a significantly higher effective regurgitant orifice area (0.67 ± 0.31 vs. 0.41 ± 0.13 cm2; P = 0.019) and sphericity index (0.80 ± 0.11 vs. 0.71 ± 0.10; P = 0.003). MLC patients were treated with pharmacological/mechanical support in order to improve leaflet coaptation and to prepare the mitral valve apparatus for grasping. Implantation of >1 clip and device time were comparable in patients with and without MLC (61.9% vs. 47.5%; P = 0.284 and 101 ± 39 vs. 108 ± 69 min; P = 0.646, respectively). No significant differences were observed between the two cohorts in technical success (95.5% vs. 97.5%, P = 0.667), 30-day device success (85.7% vs. 78.9%; P = 0.525), procedural success (81.8% vs. 75%; P = 0.842), and 1-year patient success (52.9% vs. 44.1%; P = 0.261), defined according to the MVARC (Mitral Valve Academic Research Consortium) criteria. The long-term composite endpoint of cardiovascular death and heart failure hospitalization was similar in the two groups (49.9% vs. 44.4%; P = 0.348). A significant improvement of MR and NYHA functional class and a lack of reverse remodelling were observed up to 2 years in both arms. Conclusion: The Mitraclip procedure could be extended to patients with functional MR who do not fulfil the coaptation length EVEREST II criterion and who would otherwise be excluded from this treatment. abstract_id: PUBMED:32906188 Prediction for residual regurgitation after MitraClip for functional mitral regurgitation using leaflet coaptation index. Background: Given the recent growth in the number of MitraClip procedures in patients with functional mitral regurgitation (MR), there is interest in an echocardiographic index to predict technical success before intervention. We investigated the utility of a novel leaflet coaptation index (LCI) to predict outcomes after MitraClip in functional MR patients. Methods: Forty consecutive patients with functional MR who underwent MitraClip were included. The LCI was defined as the coapted leaflet length divided by the anteroposterior diameter of the mitral annulus. The coapted leaflet length was calculated by separately tracing the total length of the mitral valve leaflets at early and end systole on transesophageal echocardiography. The primary endpoint was defined as residual MR ≥ moderate after MitraClip. Results: The LCI was significantly associated with residual MR ≥ moderate (odds ratio, 0.827; 95% confidence interval, 0.696-0.982; p = .030), with a cut-off LCI value of 0.14 (sensitivity 70.4%, specificity 69.2%, c-statistic 0.815; p = .001).
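A hedged sketch of the LCI computation described in the Methods above: the traced lengths and annulus diameter below are invented, and taking the coapted length as the difference between the early- and end-systolic leaflet tracings is our reading of the description in PUBMED:32906188, one plausible interpretation rather than the authors' documented algorithm.

```python
# Minimal sketch of the leaflet coaptation index (LCI).
def leaflet_coaptation_index(len_early_systole_mm, len_end_systole_mm,
                             annulus_ap_diameter_mm):
    coapted_mm = len_early_systole_mm - len_end_systole_mm
    return coapted_mm / annulus_ap_diameter_mm

print(round(leaflet_coaptation_index(52.0, 46.5, 38.0), 2))
# 0.14, right at the reported cut-off for predicting residual MR >= moderate
```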
Conclusion: This novel index may be a useful adjunct that can be incorporated into the armamentarium of contemporary clinical performance risk prediction tools for MitraClip success. abstract_id: PUBMED:30598891 MitraClip patient selection: inclusion and exclusion criteria for optimal outcomes. Edge-to-edge repair with the percutaneous MitraClip technology has changed the landscape for patients with symptomatic, severe degenerative mitral valve regurgitation who are at prohibitive surgical risk. While the results of randomized controlled trials of MitraClip therapy in patients with functional mitral valve regurgitation are still pending, single center experiences as well as registry data generally support the real-world application of the MitraClip therapy. In the majority of individuals treated with MitraClip, complete or near-complete relief of mitral regurgitation occurs, with results approaching the effectiveness of open surgery. This perspective summarizes the data, with a focus on current selection criteria of percutaneous MitraClip edge-to-edge repair that can optimize clinical outcomes. abstract_id: PUBMED:30994402 Anesthesiological Management in Transcatheter Mitral Valve Repair With MitraClip: Beyond the EVEREST Criteria. Percutaneous mitral valve repair with the MitraClip system recently emerged as a viable and less invasive therapeutic option in patients with severe mitral regurgitation deemed to be high-risk surgical candidates. Mitral valve morphology and geometry features are key elements for MitraClip eligibility. In the setting of functional mitral regurgitation, the presence of a leaflet coaptation gap due to advanced left ventricle remodeling can be a potential exclusion criterion for MitraClip therapy. In this article, the authors present a case of successful MitraClip implantation in a patient with severe functional mitral regurgitation and a significant coaptation gap. Periprocedural and intraoperative pharmacological and anesthesiological management were fundamental for successful grasping and procedural success. abstract_id: PUBMED:34056668 Surgical revision of failed MitraClip procedure for atrial functional mitral regurgitation. The MitraClip system is used for patients with severe mitral regurgitation (MR) who are at high risk for open surgery. However, some patients need surgical revision for various complications. The acute outcome of MitraClip treatment for atrial functional MR (aFMR) is scarcely reported. Herein, we describe a rare case of an 80-year-old woman treated with a MitraClip for aFMR with mitral annular dilatation and failed leaflet adaptation. The patient suffered from single leaflet device attachment (SLDA) and posterior leaflet injury 3 days posttreatment. The patient successfully underwent mitral valve replacement. The postoperative pulmonary hypertension was markedly improved and the left atrial volume was reduced. A MitraClip should be carefully used for aFMR with mitral annular dilatation and failed leaflet adaptation as it may cause SLDA. abstract_id: PUBMED:31820395 Mitraclip Plus Medical Therapy Versus Medical Therapy Alone for Functional Mitral Regurgitation: A Meta-Analysis. Introduction: The purpose of this meta-analysis is to compare the efficacy of MitraClip plus medical therapy versus medical therapy alone in patients with functional mitral regurgitation (FMR). FMR caused by left ventricular dysfunction is associated with poor prognosis. Whether MitraClip improves clinical outcomes in this patient population remains controversial. 
Methods: We conducted an electronic database search of the PubMed, CINAHL, Cochrane Central, Scopus, Google Scholar, and Web of Science databases for randomized controlled trials (RCTs) and observational studies with propensity score matching (PSM) that compared MitraClip plus medical therapy with medical therapy alone for patients with FMR and reported on subsequent mortality, heart failure re-hospitalization, and other outcomes of interest. Event rates were compared using a random-effects model with odds ratio as the effect size. Results: Five studies (n = 1513; MitraClip = 796, medical therapy = 717) were included in the final analysis. MitraClip plus medical therapy compared to medical therapy alone was associated with a significant reduction in overall mortality (OR = 0.66, 95% CI = 0.44-0.99, P = 0.04) and heart failure (HF) re-hospitalization rates (OR = 0.57, 95% CI = 0.36-0.91, P = 0.02). There was a reduced need for heart transplantation or mechanical support (OR = 0.48, 95% CI = 0.25-0.91, P = 0.02) and for unplanned mitral valve surgery (OR = 0.21, 95% CI = 0.07-0.61, P = 0.004) in the MitraClip group. No difference in cardiac mortality was observed between the two groups (P = 0.42). Conclusions: MitraClip plus medical therapy reduces overall mortality and HF re-hospitalization rates compared to medical therapy alone in patients with FMR. abstract_id: PUBMED:26404201 Repeat MitraClip Therapy for Significant Recurrent Mitral Regurgitation in High Surgical Risk Patients: Impact of Loss of Leaflet Insertion. Objectives: The goal of this study was to assess procedural details and outcomes of repeat MitraClip therapy. Background: MitraClip implantation is a safe and efficacious percutaneous approach to treat significant mitral regurgitation (MR). Methods: Of 410 inoperable or high surgical risk patients treated with the MitraClip at our institution, 17 (4.1%) patients, as well as 4 patients initially treated at external institutions, underwent repeat MitraClip procedures. Mean age of the 21 patients (14 men [67%]) was 77 years; 15 patients (71%) had functional MR. Results: Repeat procedures performed at a median of 6.3 months (range 0.7 to 34 months) after the index intervention were successful (discharge MR grade ≤2+) in 13 patients (62%), with a pronounced difference in success rate observed between the 13 patients with adequate leaflet insertion at the time of the repeat intervention and the 8 patients in whom loss of leaflet insertion (LLI) (leaflet tear/perforation or partial clip detachment) was present (11 of 13 [85%] vs. 2 of 8 [25%], respectively). The 21 patients were followed for a median of 8.5 (interquartile range: 2.3 to 18.6) months; 13 patients (62%), 8 with adequate leaflet insertion and 5 with LLI, died during follow-up. Conclusions: Repeat MitraClip intervention for significant recurrent MR appears to be a viable therapeutic approach in patients in whom leaflet insertion into the MitraClip is not compromised. LLI is strongly associated with repeat procedural failure. abstract_id: PUBMED:28063811 Mitral stenosis due to dynamic clip-leaflet interaction during the MitraClip procedure: Case report and review of current knowledge. The goal of MitraClip therapy is to achieve mitral regurgitation reduction without iatrogenic creation of clinically significant MS. In some series, up to 35% of patients are left with mild MS. There are many contributors to the final transmitral gradient achieved in patients undergoing MitraClip therapy.
Additionally, there are many modalities used for the intraprocedural assessment of MS, with no one modality considered to be the benchmark. We herein describe a case that illustrates the dynamic nature of clip-leaflet interaction, and review intraprocedural techniques for invasively and noninvasively assessing MS. abstract_id: PUBMED:32953758 Functional mitral regurgitation: structural modifications with percutaneous valve repair with MitraClip. Background: Mitral regurgitation (MR) is one of the most prevalent valvular diseases in our society. Transcatheter mitral valve repair (TMVR) with the MitraClip® system is increasingly used for treating this condition. The aim of our study is to analyse morphological mitral valve changes related to the procedure and their correlation with the degree of regurgitation at mid-term follow-up and with the combined endpoint of heart failure and all-cause mortality. Methods: A single-centre, prospective and observational study including consecutive patients admitted between October 2015 and October 2019 for TMVR was designed. The mitral valve annulus (MVA) was analyzed using the three-dimensional MVQ QLAB mitral valve quantification software (Philips; Amsterdam, The Netherlands). Clinical data were collected retrospectively. Results: Eighty-two MitraClip® devices were implanted. Control echocardiograms showed a significant decrease in the three measures: annular diameters, perimeter and area. Patients with functional MR had greater left ventricular and mitral annular dimensions and experienced greater reductions in anteroposterior diameter and mitral valve area compared with organic MR patients. Both anteroposterior diameter reduction (OR 0.90; 95% CI: 0.82-0.99) and posterior leaflet grasping (OR 0.94; 95% CI: 0.89-0.99) were associated with the probability of significant MR recurrence. Posterior leaflet grasping and left ventricular ejection fraction were associated with the combined endpoint of heart failure and all-cause death. Conclusions: MitraClip® implantation results in secondary changes in the mitral annular morphology. The magnitude of these morphological changes varies depending on the etiology of the mitral valve regurgitation. Posterior leaflet grasping and anteroposterior diameter reduction after clip placement are factors associated with relevant clinical and echocardiographic endpoints. abstract_id: PUBMED:30971281 Failed MitraClip therapy: surgical revision in high-risk patients. Background: MitraClip implantation is a valid interventional option that offers acceptable short-term results. Surgery after failed MitraClip procedures remains challenging in high-risk patients. The data on these cases are limited by the small sample numbers. Aim: The aim of our study is to show that mitral valve surgery is possible, and can be more advantageous, even in high-risk patients. Methods: Between 2010 and 2016, nine patients underwent mitral valve surgery after failed MitraClip therapy at our institution. Results: The patients' ages ranged from 19 to 75 years (mean: 61.2 ± 19.6 years). The median interval between the MitraClip intervention and surgical revision was 45 days (range: 0 to 1087 days). In eight of nine patients, the MitraClip intervention was initially successful and the mitral regurgitation was reduced. Only one patient had undergone cardiac surgery previously. Intra-operatively, leaflet perforation or rupture, MitraClip detachment, and chordal or papillary muscle rupture were potentially the causes of recurrent mitral regurgitation.
There were three early deaths. One year after surgery, the six remaining patients were alive. Conclusions: Mitral valve surgery can be successfully performed after failed MitraClip therapy in high-risk patients. The initial indication for MitraClip therapy should be weighed carefully against the option of surgical repair. Answer: Missing leaflet coaptation (MLC) is no longer considered an absolute exclusion criterion for MitraClip therapy in patients with functional mitral regurgitation (MR). A study investigated the feasibility, safety, and efficacy of MitraClip therapy in patients with functional MR and MLC, defined as the presence of a 'gap' between two mitral leaflets or insufficient coaptation length according to the EVEREST II criterion. The results showed that with pharmacological/mechanical support to improve leaflet coaptation, the MitraClip procedure could be extended to patients with functional MR who do not fulfill the coaptation length EVEREST II criterion. The study found no significant differences in technical success, 30-day device success, procedural success, and 1-year patient success between patients with and without MLC. Additionally, the long-term composite endpoint of cardiovascular death and heart failure hospitalization was similar in the two groups, suggesting that MitraClip therapy could be a viable option for this subset of patients (PUBMED:27102848).
Instruction: Long-term renal function deteriorates at a similar rate among liver transplant patients with preserved renal function at 1 year and in the general population: is chronic calcineurin inhibitor nephrotoxicity overrated? Abstracts: abstract_id: PUBMED:23622655 Long-term renal function deteriorates at a similar rate among liver transplant patients with preserved renal function at 1 year and in the general population: is chronic calcineurin inhibitor nephrotoxicity overrated? Background: Chronic calcineurin inhibitor (CNI) nephrotoxicity is associated with histologic kidney lesions, but the contribution of maintenance-dose CNI use to the decline over time in glomerular filtration rate (GFR) post liver transplantation (OLT) remains unclear. Methods: We studied annual changes in estimated GFR >1 year posttransplant among 105 CNI-treated adult OLT patients with a GFR of 60-100 mL/min at 1 year during a mean follow-up of 7 years (20 years in 20 patients). Results: The annual GFR decline >1 year posttransplant was 0.2 mL/min per year (SD 3.8). This decline rate was unaffected by the decade of OLT, follow-up period, or GFR at 1 year, and showed no correlation with CNI blood levels. Of the 13 (12%) patients with a GFR deterioration >3 mL/min per year, 77% presented with hypertension, diabetes, and/or dyslipidemia. The decline in GFR >1 year post-OLT did not exceed the decline of 0.5-0.8 mL/min per year reported in the general population. Declines faster than 3 mL/min per year, which occurred no more frequently among patients than in the general population, seemed attributable to coexistent vascular risk factors. Conclusions: Among OLT patients with preserved renal function at 1 year posttransplant, our findings challenge the clinical impact of chronic progressive CNI nephrotoxicity and highlight the importance of a tight control of blood pressures, glucose and lipid levels, and other modifiable risk factors in order to preserve long-term renal function. abstract_id: PUBMED:21452415 Long-term renal function in children after liver transplantation. Background: Reduced renal function after liver transplantation (LT) is a long-term extrahepatic complication of major concern caused at least partly by calcineurin inhibitor nephrotoxicity. We report on long-term renal function after LT in children from a single center and analyze the usefulness of glomerular filtration rate (GFR) estimation methods in the follow-up of pediatric LT patients. Methods: Fifty-seven pediatric patients were included. GFRs were measured by 51-labeled chromium ethylenediaminetetraacetic acid clearance before LT, at discharge, 6, 12, 18, and 24 months after transplantation and annually thereafter and corrected with the modified Brochner-Mortensen equation. GFR values of cases with an ethylenediaminetetraacetic acid distribution volume less than 15% or more than 35% were excluded. Results: The mean GFR for overall follow-up was 76.0 mL/min/1.73 m2 (+/-22.2 mL/min/1.73 m2). The GFR declined significantly from 5 to 7 years (80.2 [+/-17.7] to 72.9 [+/-13.3] mL/min/1.73 m2, respectively; P<0.05). Thirteen percent, 21%, 31%, and 33% of patients had stage 3 chronic kidney disease at 5, 7, 10, and 15 years after LT, respectively. The cyclosporine A trough level was a significant time-dependent factor in the regression model, and after time was removed from the model, proteinuria was the most significant factor. 
GFR estimation methods overestimated measured GFR; 11% with updated Schwartz, 50% with Schwartz 1987, and 31% with Counahan-Barratt. Conclusions: This study underlines the importance of long-term renal function surveillance after LT performed on children. Although measuring GFR remains the preferred function surveillance method, the updated Schwartz formula is also acceptable. abstract_id: PUBMED:27320584 One-Year Follow-up of the Changes in Renal Function After Liver Transplantation in Patients Without Chronic Kidney Disease. Background: Impaired renal function is a strong risk factor for morbidity and mortality after liver transplantation (LT). There is clearly a progressive deterioration in renal function after LT. The greatest loss of renal function occurs within the 1st year after LT. Several factors, including calcineurin inhibitors, are associated with decreased renal function. The aims of the present study were to identify changes in renal function before and after LT and to determine the risk factors related to decreased renal function after LT. Methods: We reviewed medical records of 106 LT recipients without moderate to severe chronic kidney disease (estimated glomerular filtration rate [eGFR] ≥60 mL/min/1.73 m(2)). We investigated eGFR changes from before to 1 year after LT with the use of propensity score matching. Statistical significance of differences between clinical parameters and 1-year eGFR changes was assessed with the use of univariate and multivariate analyses. Results: Mean age was 49.5 ± 10.9 years, and 66% of the patients were male. Mean differences in 1-year eGFR and serum creatinine were -32.0 ± 29.2 mL/min/1.73 m(2) and 0.3 ± 0.3 mg/dL, respectively. Variables significantly associated with renal dysfunction 1 year after LT were old age, low pre-LT eGFR, low post-LT hemoglobin, and perioperative acute kidney injury. Multivariate analysis showed that pre-LT renal function was an independent risk factor for decreased renal function after LT. However, there was no significant correlation between 1-year eGFR change and serum tacrolimus level. Conclusions: Renal function significantly decreased the 1st year after LT, and baseline renal function was an independent risk factor for worsening renal function in LT recipients. abstract_id: PUBMED:20331519 Stable long-term renal function after pediatric liver transplantation. Long-term exposure to calcineurin inhibitors increases the risk of CKD in children after LT. The aims of this study were to study renal function by measuring GFRm before and yearly after LT, to describe the prevalence of CKD (stage III: GFR 30-60 mL/min/1.73 m(2)) and to investigate if age and underlying liver disease had an impact on long-term renal function. Thirty-six patients with a median age of 2.9 years (0.1-16 yr) were studied. Median follow-up was 6.5 (2-14 yr). GFRm decreased significantly during the first six months post-transplantation with 23% (p < 0.001). Thereafter renal function stabilized. At six months, 17% (n = 5) of the children presented CKD stage III and at five yr the prevalence of CKD III was 18% in 29 children. However, in 13 children with a 10-year follow-up it was 0%. None of the children required renal replacement therapy after LT. When analyzing renal function of those children younger than two yr (n = 14) and older than two yr (n = 17) at the time of transplantation, we found that in both cohorts the filtration rate remained remarkably stable during the five-yr observational period. 
However, there was a statistically significant (p < 0.05) difference in the percentual decrease in GFRm between the groups during the first six months after LT 13% and 31%, respectively. Baseline GFRm according to diagnosis did not differ between the groups. During the first six months after LT, patients transplanted for hepatic malignancy (n = 6) and those with metabolic liver disease (n = 4) had a percentage loss of GFRm of 32% and 35%, respectively. The corresponding loss of GFRm in patients with other diseases was 10-19%. Six months post-transplantation mean GFRm in the group with malignant liver disease was 65 +/- 15 mL/min/1.73 m(2) and in the group with other diseases (n = 24) 82 +/- 17 mL/min/1.73 m(2) (p < 0.05). At one, three and five yr post-transplantation there was no longer a statistically significant difference between these cohorts. Our findings suggest that there can be a long-term recovery of renal function after LT in children. abstract_id: PUBMED:27160359 Long-term follow-up of five yr shows superior renal function with everolimus plus early calcineurin inhibitor withdrawal in the PROTECT randomized liver transplantation study. Background: The 12-month (M) PROTECT study showed that de novo liver transplant recipients (LTxR) who switched from a calcineurin inhibitor (CNI)-based immunosuppression to a CNI-free everolimus (EVR)-based regimen showed numerically better renal function. Here, we present the five-yr follow-up data. Methods: PROTECT was a randomized controlled study in which LTxR received basiliximab and CNI-based immunosuppression ± corticosteroids. Patients were randomized 1:1 to receive EVR or continue CNI. Patients completing the core study could enter the extension study on their randomized treatment. Results: A total of 81 patients entered the extension study (41, EVR; 40, CNI). At M59 post-randomization, the adjusted mean eGFR was significantly higher in the EVR group, with a benefit of 12.4 mL/min using Cockcroft-Gault (95% CI: 1.2; 23.6; p = 0.0301). Also, there was a significant benefit for adjusted and unadjusted eGFR using the four-variable Modification of Diet in Renal Disease (MDRD4) or Nankivell formula. During the extension period, treatment failure rates were similar. SAEs occurred in 26 (63.4%) and 28 (70.0%) of the patients in EVR and CNI groups, respectively. Conclusion: Compared with the CNI-based treatment, EVR-based CNI-free immunosuppression resulted in significantly better renal function and comparable patient and graft outcomes after five-yr follow-up. abstract_id: PUBMED:14529923 Renal function after liver transplantation: calcineurin inhibitor nephrotoxicity. Renal failure, mainly due to calcineurin inhibitor (CNI) nephrotoxicity, is the most common complication following orthotopic liver transplantation (ltx). The aim of this study was to evaluate the incidence and course of renal failure in adult ltx patients. Severe acute renal failure in early postoperative period due to impaired hemodynamics and CNI nephrotoxicity, occurred in 14 patients, 3 of whom required dialysis. The creatinine clearance after ltx showed a tendency to decrease, but there was no statistically significant difference (P >.05) in the change in serum creatinine clearance levels between patients treated with tacrolimus (TAC) versus Cyclosporine (CsA) during the first 2 years of follow-up. 
Fourteen patients required conversion of their regimen because of CNI nephrotoxicity, namely dose reduction (n = 7) or discontinuation of CNI therapy with replacement by mycophenolate mofetil (MMF) (n = 5) or sirolimus (SRL) (n = 5). Dose reduction or CNI withdrawal significantly improved the creatinine clearance (P <.05) without affecting liver graft function. No episode of acute rejection was observed after conversion. Neither conversion of CsA to TAC nor the reverse maneuver significantly influenced the serum creatinine level (P >.05). Reduction of the CNI dose or CNI discontinuation or replacement with MMF or SRL in patients with stable liver but impaired renal function is safe, resulting in a significant improvement in renal function. abstract_id: PUBMED:12826191 Impairment of renal function following liver transplantation. Background: Although renal insufficiency following liver transplantation is not infrequent, only limited reports describe the incidence and progression of the kidney disease. Methods: This single-centre retrospective analysis after successful liver transplantation between January 1985 and March 2002 defined the baseline serum creatinine at 50 days after liver transplantation to represent the renal function. The primary end-point was an increase of serum creatinine by more than 50% above the baseline. Results: Long-term data were available for 162 patients (84 women, 78 men) who received 167 liver transplants. The median serum creatinine level at 50 days after liver transplantation was 1.0 mg/dL (range 0.5-3.5 mg/dL). The median serum creatinine increased to 1.2 mg/dL (0.4-9.8 mg/dL) at the end of follow-up. Six patients (4%) experienced end-stage renal failure. Forty-one patients (25%) showed a 50% increase in the serum creatinine. Kaplan-Meier analysis revealed that 43% and 48% of patients had a deterioration of renal function at 10 and 15 years after liver transplantation, respectively. Patients at risk showed an increase of serum creatinine by 0.25 mg/dL/y. Only the recipient age was an independent risk factor for deterioration of renal function. Conclusions: Although there is a high risk for the impairment of renal function after liver transplantation, progression of renal disease is slow and rarely results in end-stage renal failure within 10-15 years. However, patients at risk should be identified early to prevent further decline in renal function. abstract_id: PUBMED:35769619 Early use of everolimus improved renal function after adult deceased donor liver transplantation. Background: Tacrolimus (TAC) is a main therapy for liver transplantation (LT) patients, but it has side effects such as chronic nephrotoxicity that progressively aggravate renal function. The purpose of this study was to retrospectively compare the renal function between a TAC group and a combination of everolimus and reduced TAC (EVR-TAC) group after deceased donor liver transplantation (DDLT). Methods: The study comprised 131 patients who underwent DDLT between January 2013 and April 2018 at our institution. They received TAC or EVR-TAC after DDLT. Everolimus (EVR) was introduced between 1 and 6 months after DDLT. Results: Thirty-six of 131 patients (27.5%) received EVR-TAC. The incidence of chronic kidney disease (CKD; estimated glomerular filtration rate, <60 mL/min/1.73 m2) in the EVR-TAC group was higher than in the TAC group (25% vs. 8.4%; P=0.019). Increasing serum creatinine (n=23, 63.9%) was the most common cause for adding EVR to treatment of the posttransplant patients.
There were no statistical differences in acute rejection and CKD between the two groups. The TAC trough level was significantly lower in the EVR-TAC group than in the TAC group, and the renal function of the EVR-TAC group was worse than that of the TAC group until 1 year after DDLT. However, the renal function of the EVR-TAC group improved and became similar to that of the TAC group at 3 years posttransplant. Conclusions: The present study suggests that EVR should be introduced as soon as possible after DDLT to reduce exposure to high doses of TAC and thereby improve renal function. abstract_id: PUBMED:19144599 Investigation of renal function in liver transplant patients: MDRD or Cockroft-Gault? Both acute and chronic liver disease affect renal function. After liver transplantation, attention must also be paid to further loss of renal function caused by calcineurin inhibitor treatment, which has nephrotoxic side effects; assessment of renal function is therefore an important task in liver transplant patients. In our study we compared formulas that use serum creatinine: the MDRD formula, which also incorporates albumin, yields markedly different values from the Cockroft-Gault formula. We compared the two formulas in a retrospective analysis of 187 patients at stated time points (before operation, after operation, at 1 week, and at 1 month). Cockroft-Gault gave higher GFR at each time point; mean differences: 19.56%, 17.33%, 37.18% and 28.8%. In the range of 15-60 ml/min GFR, we found nearly twice as many patients by use of MDRD than by use of Cockroft-Gault. Median with MDRD: 79 ml/min, 65.1 ml/min, 52 ml/min, 49.5 ml/min; median with Cockroft-Gault: 93.8 ml/min, 78 ml/min, 70.1 ml/min, 69.4 ml/min, all significant (p < 0.001). Many previous studies have already compared the two formulas in end-stage kidney disease. On the basis of these studies, MDRD is suitable even below 30 ml/min GFR, whereas the Cockroft-Gault formula approached the true GFR measured with isotope methods only above this value. We reached a similar conclusion in the examined patient group. It can be stated that MDRD is more suitable for determining renal function in liver transplant patients. abstract_id: PUBMED:31577671 Randomized Sirolimus-based Early Calcineurin Inhibitor Reduction in Liver Transplantation: Impact on Renal Function. Background: The long-term use of calcineurin inhibitors (CNIs) after liver transplantation (LT) is associated with nephrotoxicity. Methods: Five-year follow-up data were retrieved from the randomized controlled multicenter SiLVER trial. Standard CNI-based mammalian target of rapamycin-free immunosuppression (group A, n = 264) was compared with a 50% reduction of CNI and introduction of the mammalian target of rapamycin inhibitor Sirolimus (SIR) within 4-6 weeks after LT (group B, n = 261). Results: Median MELD at LT was low with 10 (7-15) (group A) and 11 (8-15) (group B) in the intention-to-treat approach. CNI dose and CNI trough were reduced by 20% and 8% (group A) versus 55% and 56% (group B) at 3 months posttransplantation. Renal function was preserved at 3 months after LT in the SIR arm (estimated glomerular filtration rate 74 [57-95] versus 67 [55-85] mL/min/1.73 m2; P = 0.004) but was similarly impaired thereafter compared with group A.
The per protocol analysis identified LT recipients in group B with concomitant early CNI minimization and SIR treatment ≥ year 1 with significantly superior estimated glomerular filtration rate and lowest rate of chronic kidney disease (≥stage 3) from year 1 onwards until study end. Competing risk factors for renal disease (arterial hypertension, fat metabolism disorder, and hyperglycemia) were not associated with worse kidney function. Conclusions: Prevention of CNI nephrotoxicity by SIR-based early CNI minimization protects renal function only short-term after LT in the intention-to-treat analysis of this low MELD cohort. Yet, selected LT recipients compliant with early CNI minimization and SIR maintenance achieved better long-term renal outcomes compared with real-world practice. Answer: The question of whether chronic calcineurin inhibitor (CNI) nephrotoxicity is overrated in the context of long-term renal function deterioration among liver transplant patients with preserved renal function at 1 year compared to the general population is addressed in the study by PUBMED:23622655. The study found that the annual glomerular filtration rate (GFR) decline post liver transplantation (OLT) was 0.2 mL/min per year, which is within the range of the decline reported in the general population (0.5-0.8 mL/min per year). The study also noted that declines faster than 3 mL/min per year seemed attributable to coexistent vascular risk factors rather than CNI blood levels. This suggests that the clinical impact of chronic progressive CNI nephrotoxicity may not be as significant as previously thought, and that controlling modifiable risk factors such as hypertension, diabetes, and dyslipidemia is crucial for preserving long-term renal function in OLT patients. Additionally, the study by PUBMED:27160359 supports the notion that renal function significantly decreases in the first year after liver transplantation, and baseline renal function is an independent risk factor for worsening renal function in LT recipients. However, there was no significant correlation between 1-year eGFR change and serum tacrolimus level, which could imply that factors other than CNI exposure may play a role in renal function decline. Furthermore, the study by PUBMED:20331519 found that long-term exposure to CNIs increases the risk of chronic kidney disease (CKD) in children after LT, but renal function stabilized after an initial decrease post-transplantation. This suggests that while CNIs may contribute to renal function decline, the effect may not necessarily be progressive or as severe as expected. In contrast, the study by PUBMED:31577671 indicates that early CNI minimization with the introduction of sirolimus can preserve renal function short-term after LT, but long-term outcomes were similar to standard CNI-based immunosuppression unless patients were compliant with early CNI minimization and sirolimus maintenance. Overall, these findings challenge the notion that chronic CNI nephrotoxicity is the primary driver of long-term renal function deterioration in liver transplant patients with preserved renal function at 1 year, highlighting the importance of managing other risk factors for renal function decline.
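The record above repeatedly compares creatinine-based GFR estimates (MDRD vs. Cockroft-Gault, PUBMED:19144599). As a sketch of why the two formulas diverge, below are the standard Cockcroft-Gault and 4-variable MDRD equations in Python. Note that the cited study apparently used an MDRD variant that also incorporates albumin; the 4-variable, IDMS-traceable form (constant 175) shown here is an assumption for illustration, and the example patient is hypothetical.

```python
def cockcroft_gault(scr_mg_dl: float, age: float, weight_kg: float, female: bool) -> float:
    """Creatinine clearance in mL/min (Cockcroft-Gault, 1976)."""
    crcl = (140.0 - age) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def mdrd4(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """eGFR in mL/min/1.73 m2 (4-variable MDRD, IDMS-traceable constant 175)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical patient: 55-year-old, 85 kg male, serum creatinine 1.4 mg/dL.
print(round(cockcroft_gault(1.4, 55, 85, female=False), 1))  # ~71.7 mL/min
print(round(mdrd4(1.4, 55, female=False), 1))                # ~52.6 mL/min/1.73 m2
```

For this example patient, Cockcroft-Gault reads well above MDRD, the same direction of disagreement the study reports.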
Instruction: Combined Near-Infrared Spectroscopy and Intravascular Ultrasound Imaging of Pre-Existing Coronary Artery Stents: Can Near-Infrared Spectroscopy Reliably Detect Neoatherosclerosis? Abstracts: abstract_id: PUBMED:26729855 Combined Near-Infrared Spectroscopy and Intravascular Ultrasound Imaging of Pre-Existing Coronary Artery Stents: Can Near-Infrared Spectroscopy Reliably Detect Neoatherosclerosis? Background: Neoatherosclerosis is an emerging phenomenon in which lipid-rich plaques (LRPs) develop within pre-existing stents. This study was undertaken to describe near-infrared spectroscopy (NIRS) and intravascular ultrasound findings in pre-existing stents and to compare NIRS findings in pre-existing stents, in which an increased lipid signal has been speculated to indicate neoatherosclerosis, and NIRS findings in a control group of freshly implanted stents, in which any lipid signal originates from fibroatheroma under the stent. Methods And Results: At the site of LRP detected by NIRS in a cohort of pre-existing stents, intravascular ultrasound was used to determine the presence of neointimal tissue. The lipid-core burden index and maximum lipid-core burden index in 4 mm were measured within stented segments. Findings were compared between pre-existing stents and a control group of freshly implanted stents. Among 60 pre-existing stents implanted 5.5±4.0 years earlier, NIRS detected LRP in 33%. At the site of LRP, intravascular ultrasound found no neointimal tissue in 35% of cases. NIRS findings in pre-existing stents were indistinguishable from those of freshly implanted stents (lipid-core burden index: 50±72 versus 42±58; P=0.40 and maximum lipid-core burden index in 4 mm: 156±184 versus 155±203; P=0.69). Conclusions: The detection of LRP in a pre-existing stent by NIRS alone is not reliable evidence of neoatherosclerosis, as the lipid signal may originate from fibroatheroma underlying the stent. By identifying the presence or absence of neointimal tissue at the site of LRP detected by NIRS, intravascular ultrasound may provide some insight into the potential source of the lipid signal in pre-existing stents. Registration Information: URL: http://www.clinicaltrials.gov. Unique identifier: NCT01694368. abstract_id: PUBMED:32695796 Near-Infrared Spectroscopy Intravascular Ultrasound Imaging: State of the Art. Acute coronary syndromes (ACS) secondary to coronary vessel plaques represent a major cause of cardiovascular morbidity and mortality worldwide. Advancements in imaging technology over the last 3 decades have continuously enabled the study of coronary plaques via invasive imaging methods like intravascular ultrasound (IVUS) and optical coherence tomography (OCT). The introduction of near-infrared spectroscopy (NIRS) as a modality that could detect the lipid (cholesterol) content of atherosclerotic plaques in the early nineties, opened the potential of studying "vulnerable" or rupture-prone, lipid-rich coronary plaques in ACS patients. Most recently, the ability of NIRS-IVUS to identify patients at risk of future adverse events was shown in a prospective multicenter trial, the Lipid-Rich-plaque Study. Intracoronary NIRS-IVUS imaging offers a unique method of coronary lipid-plaque characterization and could become a valuable clinical diagnostic and treatment monitoring tool. abstract_id: PUBMED:26392514 Near-infrared spectroscopy-intravascular ultrasound: scientific basis and clinical applications. 
Coronary angiography underestimates the magnitude of the atherosclerotic burden and cannot detect the presence of disease in the early phases. Recognition of these inherent limitations of angiography has been an impetus for the development of other coronary imaging techniques. The novel near-infrared spectroscopy-intravascular ultrasound (NIRS-IVUS) catheters can detect and quantify the presence of lipid core in the atherosclerotic plaque and associate it with other features such as lumen size and plaque architecture. Lipid-rich plaques are known to pose a higher risk of distal embolization during interventions and plaque disruption. The aim of this manuscript is the review of the potential clinical and research applications of this technology as highlighted by recent studies. abstract_id: PUBMED:27485540 The Role of Intracoronary Plaque Imaging with Intravascular Ultrasound, Optical Coherence Tomography, and Near-Infrared Spectroscopy in Patients with Coronary Artery Disease. The development of multiple diagnostic intracoronary imaging modalities has increased our understanding of coronary atherosclerotic disease. These imaging modalities, intravascular ultrasound (IVUS), optical coherence tomography (OCT), and near-infrared spectroscopy (NIRS), have provided a method to study plaques and introduced the concept of plaque vulnerability. They are being increasingly used for percutaneous coronary intervention (PCI) optimization and are invaluable tools in research studying the pathophysiology of acute coronary syndrome (ACS), in-stent thrombosis and in-stent restenosis. IVUS has the ability to visualize the intracoronary lumen and the vessel wall and can be used to detect early atherosclerotic disease even in the setting of positive arterial remodeling. Studies supporting the use of IVUS to optimize stent deployment and apposition have shown a significant reduction in cardiovascular events. OCT provides even higher resolution imaging and near microscopic detail of plaques, restenoses, and thromboses; thus, it can identify the etiology of ACS. Ongoing trials are evaluating the role of OCT in PCI and using OCT to study stent endothelialization and neointimal proliferation. NIRS is a modality capable of localizing and quantifying lipid core burden. It is usually combined with IVUS and is used to characterize plaque composition. The benefits of NIRS in the setting of ACS have been limited to case reports and series. The utilization of all these intracoronary imaging modalities will continue to expand as their indications for clinical use and research grow. Studies to support their use for PCI optimization resulting in improved outcomes with potential to prevent downstream events are ongoing. abstract_id: PUBMED:36444847 Relationship Between Optical Coherence Tomography-Derived In-Stent Neoatherosclerosis and the Extent of Lipid-Rich Neointima by Near-Infrared Spectroscopy and Intravascular Ultrasound: A Multimodal Imaging Study. Background In-stent restenosis, especially for neoatherosclerosis, is a major concern following percutaneous coronary intervention. This study aimed to elucidate the association of features of in-stent restenosis lesions revealed by optical coherence tomography (OCT)/optical frequency domain imaging (OFDI) and the extent of lipid-rich neointima (LRN) assessed by near-infrared spectroscopy (NIRS) and intravascular ultrasound, especially for neoatherosclerosis. 
Methods and Results We analyzed patients undergoing percutaneous coronary intervention for in-stent restenosis lesions using both OCT/OFDI and NIRS-intravascular ultrasound. OCT/OFDI-derived neoatherosclerosis was defined as lipid neointima. The existence of large LRN (defined as a long segment with 4-mm maximum lipid core burden index ≥400) was evaluated by NIRS. In 59 patients with 64 lesions, neoatherosclerosis and large LRN were observed in 17 (26.6%) and 21 lesions (32.8%), respectively. Naturally, large LRN showed higher 4-mm maximum lipid core burden index (median [interquartile range], 623 [518-805] versus 176 [0-524]; P<0.001). In OCT/OFDI findings, large LRN displayed lower minimal lumen area (0.9±0.4 versus 1.3±0.6 mm2; P=0.02) and greater max lipid arc (median [interquartile range], 272° [220°-360°] versus 193° [132°-247°]; P=0.004). In the receiver operating characteristic curve analysis, 4-mm maximum lipid core burden index was the best predictor for neoatherosclerosis, with a cutoff value of 405 (area under curve, 0.92 [95% CI, 0.83-1.00]). In multivariable logistic analysis, only low-density lipoprotein cholesterol (odds ratio, 1.52 [95% CI, 1.11-2.08]) was an independent predictor for large LRNs. Conclusions NIRS-derived large LRN was significantly associated with neoatherosclerosis by OCT/OFDI. The neointimal characterization by NIRS-intravascular ultrasound has potential as an alternative method of OCT/OFDI for in-stent restenosis lesions. abstract_id: PUBMED:28982647 Multimodality Intracoronary Imaging With Near-Infrared Spectroscopy and Intravascular Ultrasound in Asymptomatic Individuals With High Calcium Scores. Background: This study sought to determine the frequency of large lipid-rich plaques (LRP) in the coronary arteries of individuals with high coronary artery calcium scores (CACS) and to determine whether the CACS correlates with coronary lipid burden. Methods And Results: Combined near-infrared spectroscopy and intravascular ultrasound was performed in 57 vessels in 20 asymptomatic individuals (90% on statins) with no prior history of coronary artery disease who had a screening CACS ≥300 Agatston units. Among 268 10-mm coronary segments, near-infrared spectroscopy images were analyzed for LRP, defined as a bright yellow block on the near-infrared spectroscopy block chemogram. Lipid burden was assessed as the lipid core burden index (LCBI), and large LRP were defined as a maximum LCBI in 4 mm ≥400. Vessel plaque volume was measured by quantitative intravascular ultrasound. Vessel-level CACS significantly correlated with plaque volume by intravascular ultrasound (r=0.69; P<0.0001) but not with LCBI by near-infrared spectroscopy (r=0.24; P=0.07). Despite a high CACS, no LRP was detected in 8 (40.0%) subjects. Large LRP having a maximum LCBI in 4 mm ≥400 were infrequent, found in only 5 (25.0%) of 20 subjects and in only 5 (1.9%) of 268 10-mm coronary segments analyzed. Conclusions: Among individuals with a CACS ≥300 Agatston units mostly on statins, CACS correlated with total plaque volume but not LCBI. This observation may have implications on coronary risk among individuals with a high CACS considering that it is coronary LRP, rather than calcification, that underlies the majority of acute coronary events. abstract_id: PUBMED:24974760 Near-infrared spectroscopy (NIRS), new intracoronary imaging technique of unstable coronary plaque Acute coronary syndrome may develop in the background of hemodynamically non-significant coronary artery disease. 
It may be caused by the presence of a "vulnerable plaque", which is characterized by a lipid-rich core and a thin fibrous cap. NIRS (near-infrared spectroscopy) is a morphological imaging method that allows determination of the atherosclerotic plaque cholesterol burden. Information about the chemical composition may contribute to the early identification of "high-risk" plaques and to an optimal subsequent interventional strategy. The first experience with the clinical implementation of this novel method is demonstrated in a case report. Key words: acute coronary syndrome - chemogram - intravascular imaging - NIRS - vulnerable plaque. abstract_id: PUBMED:34164943 Intravascular molecular-structural imaging with a miniaturized integrated near-infrared fluorescence and ultrasound catheter. Coronary artery disease (CAD) remains a leading cause of mortality and warrants new imaging approaches to better guide clinical care. We report on a miniaturized, hybrid intravascular catheter and imaging system for comprehensive coronary artery imaging in vivo. Our catheter exhibits a total diameter of 1.0 mm (3.0 French), equivalent to standalone clinical intravascular ultrasound (IVUS) catheters but enables simultaneous near-infrared fluorescence (NIRF) and IVUS molecular-structural imaging. We demonstrate NIRF-IVUS imaging in vitro in coronary stents using NIR fluorophores, and compare NIRF signal strengths for prism and ball lens sensor designs in both low and high scattering media. Next, in vivo intravascular imaging in pig coronary arteries demonstrates simultaneous, co-registered molecular-structural imaging of experimental CAD inflammation on IVUS and distance-corrected NIRF images. The obtained results suggest substantial potential for the NIRF-IVUS catheter to advance standalone IVUS, and enable comprehensive phenotyping of vascular disease to better assess and treat patients with CAD. abstract_id: PUBMED:26242984 Role of near-infrared spectroscopy in intravascular coronary imaging. Near-infrared spectroscopy is an intracoronary imaging modality that has been validated in preclinical and clinical studies to help quantify the lipid content of the coronary plaque and provide information regarding its vulnerability. It has the potential to develop into a valuable tool for the risk stratification of a vulnerable plaque and, furthermore, a vulnerable patient. In addition, in the future this technology may help in the development of novel therapies that impact vascular biology. abstract_id: PUBMED:30488332 Comparative Quantification of Arterial Lipid by Intravascular Photoacoustic-Ultrasound Imaging and Near-Infrared Spectroscopy-Intravascular Ultrasound. Intravascular photoacoustic-ultrasound (IVPA-US) imaging and near-infrared spectroscopy-intravascular ultrasound (NIRS-IVUS) are two hybrid modalities that detect arterial lipid, with comparison necessary to understand the relative advantages of each. We performed in vivo and ex vivo IVPA-US imaging of the iliac arteries of Ossabaw swine with metabolic syndrome (MetS) and lean swine to investigate sensitivity for early-stage atherosclerosis. We repeated imaging ex vivo with NIRS-IVUS for comparison to IVPA-US and histology. Both modalities showed significantly greater lipid in MetS vs. lean swine, but only IVPA-US localized the lipid as perivascular. To investigate late-stage atherosclerosis, we performed ex vivo IVPA-US imaging of a human coronary artery with comparison to NIRS-IVUS and histology.
Two advanced fibroatheromas were identified, with agreement between IVPA-measured lipid area and NIRS-derived lipid content. As confirmed histologically, IVPA-US has sensitivity to detect lipid content similar to NIRS-IVUS and provides additional depth resolution, enabling quantification and localization of lipid cores within plaques. Answer: The study described in abstract PUBMED:26729855 investigated whether near-infrared spectroscopy (NIRS) could reliably detect neoatherosclerosis within pre-existing coronary artery stents. Neoatherosclerosis refers to the development of lipid-rich plaques (LRPs) within stents that have been previously implanted. The study compared NIRS findings in pre-existing stents with a control group of freshly implanted stents, where any lipid signal would originate from fibroatheroma under the stent. The results showed that NIRS detected LRPs in 33% of pre-existing stents. However, intravascular ultrasound (IVUS), which was used to determine the presence of neointimal tissue at the site of LRP detected by NIRS, found no neointimal tissue in 35% of cases. This suggests that the lipid signal detected by NIRS in pre-existing stents could be due to fibroatheroma underlying the stent rather than neoatherosclerosis. The study concluded that NIRS alone is not a reliable indicator of neoatherosclerosis in pre-existing stents, and that IVUS may provide additional insight into the potential source of the lipid signal. The abstract also mentions that the lipid-core burden index and maximum lipid-core burden index in 4 mm were measured within stented segments, and the findings were indistinguishable between pre-existing stents and freshly implanted stents. This further supports the conclusion that NIRS findings alone cannot be used to reliably distinguish neoatherosclerosis from underlying fibroatheroma in pre-existing stents. In summary, while NIRS can detect lipid content within coronary plaques, it may not be able to reliably differentiate between neoatherosclerosis and lipid signals from pre-existing conditions such as fibroatheroma without the additional use of IVUS to assess the presence of neointimal tissue (PUBMED:26729855).
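The NIRS metrics used throughout the record above, the lipid-core burden index (LCBI, lipid-positive pixels per 1000 valid pixels) and the maximum LCBI in any 4-mm segment (maxLCBI4mm), reduce to simple pixel accounting over a chemogram. The following is a minimal sketch under stated assumptions: per-frame counts of lipid-positive and valid pixels, a hypothetical frame spacing of 0.1 mm, and synthetic toy data; it is not the vendor's implementation.

```python
import numpy as np

def lcbi(lipid_pixels, valid_pixels):
    """Lipid-core burden index over a region: lipid / valid pixels x 1000."""
    total_valid = valid_pixels.sum()
    return 1000.0 * lipid_pixels.sum() / total_valid if total_valid else 0.0

def max_lcbi_4mm(lipid_pixels, valid_pixels, frame_spacing_mm=0.1):
    """Worst (maximum) LCBI over any contiguous 4-mm window of the pullback."""
    window = max(1, int(round(4.0 / frame_spacing_mm)))
    lip = np.concatenate([[0.0], np.cumsum(lipid_pixels, dtype=float)])
    val = np.concatenate([[0.0], np.cumsum(valid_pixels, dtype=float)])
    best = 0.0
    for start in range(len(lipid_pixels) - window + 1):
        v = val[start + window] - val[start]
        if v:
            best = max(best, 1000.0 * (lip[start + window] - lip[start]) / v)
    return best

# Toy pullback: 600 frames (~60 mm) with a focal lipid core near frame 300.
rng = np.random.default_rng(0)
valid = rng.integers(200, 240, size=600)                 # valid spectra per frame
lipid = ((rng.random(600) < 0.05) * 20).astype(float)    # sparse background signal
lipid[290:330] = valid[290:330] * 0.6                    # focal lipid-rich segment
print(round(lcbi(lipid, valid)), round(max_lcbi_4mm(lipid, valid)))
```

The vessel-level LCBI stays modest while maxLCBI4mm spikes over the focal core, which is why the 4-mm maximum is the quantity thresholded (e.g., >=400) in the studies above.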
Instruction: Patient awareness of stent type, risk of cardiac events, and symptoms of myocardial infarction among PCI patients: a missed educational opportunity? Abstracts: abstract_id: PUBMED:21114531 Patient awareness of stent type, risk of cardiac events, and symptoms of myocardial infarction among PCI patients: a missed educational opportunity? Background: Timely and successful treatment of myocardial infarction (MI) requires accurate recognition by the patient of the signs and symptoms. As patients who have undergone percutaneous coronary intervention (PCI) remain at risk for cardiac events, it is important that they have a basic understanding of their cardiac status. Methods: We surveyed 80 consecutive patients following elective PCI using a simple multiple-choice questionnaire. Type of stent (bare metal or drug-eluting), how they perceive the procedure would affect their cardiovascular health, their perceived risk of a future MI, and whether they recalled specific education on how to recognize symptoms of an MI were queried. Results: 45% (n = 36) of patients were unaware of stent type. 10% stated PCI was performed to relieve symptoms of angina, 30% (n = 24) stated it would prevent MI, 56.3% (n = 45) stated that it would both prevent MI and reduce symptoms of angina, while 3.8% stated it would do neither. 86.3% (n = 69) stated they remained at risk for MI despite the procedure. However, 42.5% (n = 34) of patients did not perceive to have received specific education on the signs and symptoms of MI during their hospital stay. Conclusions: Patient understanding of stent type, expected cardiovascular outcomes, and recognition of MI post-PCI appears low in the real-world setting. A systematic approach to post-PCI education should be incorporated into routine care, in order to capitalize on the educational opportunity afforded by this high risk population. abstract_id: PUBMED:38360097 Impact of diabetes mellitus on clinical outcomes after first episode in-stent restenosis PCI: Results from a large registry. Background: Diabetes mellitus (DM) is associated with a high rate of major adverse cardiac events (MACE) after de novo coronary artery percutaneous coronary intervention (PCI). Whether patients with DM undergoing PCI for in-stent restenosis (ISR) experience a similar heightened risk of MACE is not known. Hence, we sought to compare the clinical outcomes of patients with and without DM undergoing PCI for ISR. Methods: Patients undergoing first episode ISR PCI between January 2015 and December 2021 were included. The primary outcome of interest was MACE (all-cause death, myocardial infarction [MI], and target lesion revascularization [TVR]) at 1-year. Results: A total of 3156 patients (56.7% with DM) underwent PCI for ISR during the study period. Patients with DM were younger, more likely to be female, and had a higher prevalence of comorbidities. At 1-year follow-up, DM was associated with a higher rate of MACE (22.4% vs. 18.7%, unadjusted HR 2.03, 95%CI(1.27-3.25), p = 0.003). All-cause mortality and MI were significantly more frequent among people with DM at 1-year follow-up. The rate of TVR was similar in both groups (17.9% vs. 16.0%, unadjusted HR 1.14, 95%CI (0.94-1.37), p = 0.180). On adjusted analysis, there was no significant difference in the rate of MACE (AHR 1.07, 95%CI(0.90 - -1.29), p = 0.444), all-cause death (AHR 1.54, 95%CI(0.93-2.54), p = 0.095) or MI (AHR 1.10, 95%CI(0.74-1.63), p = 0.652). 
Conclusion: ISR PCI in patients with DM was associated with a higher rate of MACE at 1-year follow-up. However, this increased risk was no longer significant after adjusting for baseline characteristics. abstract_id: PUBMED:34671586 Cardiovascular Risk Awareness and Calculated 10-Year Risk Among Female Employees at Taibah University 2019. Cardiovascular diseases (CVD) are the most common cause of death and disability worldwide. Saudi Arabia, one of the middle-income countries, has a proportional CVD mortality rate of 37%. Knowledge about CVD and its modifiable risk factors is a vital pre-requisite to change the health attitudes, behaviors, and lifestyle practices of individuals. Therefore, we intended to assess employee knowledge about the risk of CVD and the symptoms of heart attacks and stroke, and to calculate their future 10-year CVD risk. An epidemiological, cross-sectional, community-facility based study was conducted. Women aged ≥40 years who are employees of Taibah University, Al-Madinah Al-Munawarah were recruited. A screening self-administered questionnaire was distributed to the women to exclude those who are not eligible. In total, 222 women met the inclusion criteria and were invited for the next step, the determination of CVD risk factors using the WHO STEPS questionnaire, which is used for the surveillance of non-communicable disease risk factors such as CVD. In addition, anthropometric and biochemical measurements were done. Based on the identified atherosclerotic cardiovascular disease (ASCVD) risk factors and laboratory testing results, risk was calculated using the Framingham Study Cardiovascular Disease (10-year) Risk Assessment. Data were analyzed using GraphPad Prism 7 software (GraphPad Software, CA, USA). The results showed that the mean age of the study sample was 55.6 ± 9.0 years. There was an elevated prevalence of obesity and increased abdominal circumference among the women. Hypertension (HTN) was a considerable chronic disease among the participants; more than half of the sample, 53%, had it. According to the ASCVD risk estimator, the study participants were distributed into four groups: 63.1% at low risk, 20.2% at borderline risk, 13.5% at intermediate risk, and 3.2% at high risk. A comparison between these categories based on the CVD 10-year risk estimator indicated that there were significant variations between the low-risk group and the intermediate and high-risk groups (P = 0.02 and P = 0.001, respectively). The multivariate analysis detected factors related to CVD risk for women who have an intermediate or high risk of CVD, such as age, smoking, body mass index (BMI), unhealthy diet, blood pressure (BP) measurements, and family history of CVD (P < 0.05). The mean CVD knowledge and awareness score in the present study was 8.6, which is considered low. In conclusion, the present study among the university sample in Madinah reported limited knowledge and awareness of CVD risk. These findings support the need for an educational program to enhance the awareness of risk factors and prevention of CVD. abstract_id: PUBMED:25075161 A Randomized Comparison between Everolimus-Eluting Stent and Cobalt Chromium Stent in Patients with Acute ST-Elevation Myocardial Infarction Undergoing Primary Percutaneous Coronary Intervention Using Routine Intravenous Eptifibatide: The X-MAN (Xience vs. Multi-Link Stent in Acute Myocardial Infarction) Trial, A Pilot Study.
The objective of this study is to determine the efficacy and safety of an everolimus-eluting stent (EES/Xience; Abbott Vascular, Santa Clara, CA) compared with a cobalt chromium stent (CoCr/Multi-Link Vision; Abbott Vascular) in patients with acute ST-elevation myocardial infarction (STEMI) undergoing primary percutaneous coronary intervention (PCI) with routine administration of eptifibatide infusion. This is a prospective, single center, randomized trial comparing EES (n = 75) and CoCr stent (n = 75) implantation in patients with acute STEMI undergoing primary PCI. Intravenous eptifibatide administration was mandatory by protocol in this pilot study. The primary efficacy endpoint was major adverse cardiac events (MACE) at 30 days, defined as the composite of death, reinfarction, and target vessel revascularization. Secondary safety endpoints were stent thrombosis at 30 days and in-hospital bleeding event. Acute reperfusion parameters were also assessed. One-month MACE rate did not differ between EES and CoCr group (1.3 vs. 1.3%, p = 1.0). No stent thrombosis cases were observed in the EES group. The groups did not differ with respect to in-hospital bleeding events (5 vs. 9%, p = 0.37), achievement of final thrombolysis in myocardial infarction flow 2 or 3 (p = 0.21), achievement of myocardial blush grade 2 or 3 (p = 0.45), creatine kinase-MB level at 8 to 12 hours after stenting (p = 0.29), and left ventricular ejection fraction (p = 0.21). This pilot study demonstrates that after one-month follow-up, the use of EES is as safe and effective as the use of CoCr stents in patients with acute STEMI undergoing primary PCI with routine administration of intravenous eptifibatide. abstract_id: PUBMED:27924213 Drug-eluting stents appear superior to bare metal stents for vein-graft PCI in vessels up to a stent diameter of 4 mm. Background: Research trials have shown improved short-term outcome with drug-eluting stents (DES) over bare metal stents (BMS) in saphenous vein graft (SVG) percutaneous coronary intervention (PCI), primarily by reducing target vessel revascularization (TVR) for in-stent restenosis. We compared the outcomes in patients undergoing SVG stent implantation treated with DES or BMS. In exploratory analyses we investigated the influence of stent generation and diameter. Methods: Data were obtained from a prospective database of 657 patients who underwent PCI for SVG lesions between 2003 and 2011. A total of 344 patients had PCI with BMS and 313 with DES. Propensity scores were developed based on 15 observed baseline covariates in a logistic regression model with stent type as the dependent variable. The nearest-neighbour-matching algorithm with Greedy 5-1 Digit Matching was used to produce two patient cohorts of 313 patients each. We assessed major adverse cardiac events (MACE) out to a median of 3.3 years (interquartile range: 2.1-4.1). MACE was defined as all-cause mortality, myocardial infarction (MI), TVR and stroke. Results: There was a significant difference in MACE between the two groups in favour of DES (17.9% DES vs. 31.2% BMS group; p = 0.0017) over the 5-year follow-up period. MACE was driven by increased TVR in the BMS group. There was no difference in death, MI or stroke. Adjusted Cox analysis confirmed a decreased risk of MACE for DES compared with BMS 0.75 (95% confidence interval (CI) 0.52-0.94), with no difference in the hazard of all-cause mortality (hazard ratio: 1.08; 95% CI: 0.77-1.68). 
However, when looking at stent diameters greater than 4 mm, no difference was seen in MACE rates between BMS and DES. Conclusions: Overall in our cohort of patients who had PCI for SVG disease, DES use resulted in lower MACE rates compared with BMS over a 5-year follow-up period; however, for stent diameters over 4 mm no difference in MACE rates was seen. abstract_id: PUBMED:38439021 Age and healthy lifestyle behavior's disparities and similarities on knowledge of myocardial infarction symptoms and risk factors among public and outpatients in a resource-limited setting, cross-sectional study in greater Gaborone, Botswana. Objectives: In this cross-sectional study from Botswana, we investigated knowledge of myocardial infarction (MI) symptoms and risk factors among the general public and outpatients with MI risk factors based on age and lifestyle behaviors, in addition to assessing associations with sociodemographic and MI risk factors. Method: Open-ended questionnaires about 8 MI symptoms and 10 risk factors, were administered by research assistants to a representative selection of outpatients (n = 525) and the public (n = 2248). Weight and height were measured in all participants and BMI was calculated. Knowledge scores were compared between the two groups. We examined whether sociodemographic and MI risk factors had impact on the scores. Analyses were further adjusted for lifestyle behavior (smoking status, dietary status and physical activities). Results: The valid response rate was 97.9% comprising 97.8% for the public (n = 2199) and 98.1% for outpatients (n = 515). Public respondents (35.2 ± 12.3 years) were younger than outpatients (38.5 ± 12.6 years). The public comprised 56.9% females while outpatients 54.6%. In general, outpatients had higher knowledge of MI symptoms than the public, with mean scores ± SD of 3.49 ± 2.84 vs 2.80 ± 2.54. Outpatients also had higher knowledge score of MI risk factors than the public, with mean scores, 5.33 ± 3.22 vs 3.82 ± 3.07. For MI symptoms, outpatients were more aware than the public for chest pains among all ages, for arm pain/ numbness, neck/ jaw pain radiating to/ from chest, and feeling sick or looking pallor on the skin among those aged 35-49 years. Among both the public and outpatients, lower knowledge of both MI symptoms and risk factors was associated with primary education, not residing/working together, history of hypertension, no history of heart disease/stroke, and obesity. There were similarities and disparities on MI knowledge among respondents with various numbers of healthy lifestyle behaviours. Conclusion: Results call for urgent educational campaigns on awareness and knowledge of MI and using strategies based on age and lifestyle behavior. abstract_id: PUBMED:34386899 Perioperative risk and antiplatelet management in patients undergoing non-cardiac surgery within 1 year of PCI. Perioperative cardiovascular complications are important causes of morbidity and mortality associated with non-cardiac surgery, especially in patients with recent percutaneous coronary intervention (PCI). We aimed to illustrate the types and timing of different surgeries occurring after PCI, and to evaluate the risk of thrombotic and bleeding events according to the perioperative antiplatelet management. Patients undergoing urgent or elective non-cardiac surgery within 1 year of PCI at a tertiary-care center between 2011 and 2018 were included. 
The primary outcome was major adverse cardiac events (MACE; composite of death, myocardial infarction, or stent thrombosis) at 30 days. Perioperative bleeding was defined as ≥ 2 units of blood transfusion. A total of 1092 surgeries corresponding to 747 patients were classified by surgical risk (low: 50.9%, intermediate: 38.4%, high: 10.7%) and priority (elective: 88.5%, urgent/emergent: 11.5%). High-risk and urgent/emergent surgeries tended to occur earlier post-PCI compared to low-risk and elective ones, and were associated with an increased risk of both MACE and bleeding. Preoperative interruption of antiplatelet therapy (of any kind) occurred in 44.6% of all NCS and was more likely for procedures occurring later post-PCI and at intermediate risk. There was no significant association between interruption of antiplatelet therapy and adverse cardiac events. Among patients undergoing NCS within 1 year of PCI, perioperative ischemic and bleeding events primarily depend on the estimated surgical risk and urgency of the procedure, which are increased early after PCI. Preoperative antiplatelet interruption was not associated with an increased risk of cardiac events. abstract_id: PUBMED:37769620 Long-term clinical outcomes of excimer laser coronary atherectomy for the management of recurrent in-stent restenosis. Background: Recurrent in-stent restenosis (ISR) remains a serious problem. Optimal modification of the underlying mechanism during index percutaneous coronary intervention (PCI) is key to prevent ISR. Excimer laser coronary atherectomy (ELCA) has its own indications and is among others used in recurrent ISR in case of stent underexpansion and/or diffuse neointimal hyperplasia. We aimed to assess the long-term clinical outcomes of ELCA for the management of recurrent ISR. Methods: A multicenter, retrospective observational study was conducted. Patients with recurrent ISR who were additionally treated with ELCA were included. The primary outcome was major adverse cardiac events (MACE) defined as a composite of cardiovascular death, myocardial infarction, stroke, target lesion revascularization at 12 months, and longer term. Results: Between 2014 and 2022, 51 patients underwent PCI with the additional use ELCA for recurrent ISR. Primary outcome occurred in 6 patients (11.8%) at 12 months and in 12 patients (23.5%) at a median follow-up of 4 (1-6) years. Technical and procedural success were achieved in 92% and 90% of cases, respectively. Coronary perforation occurred in 2 patients as a result of distal wire perforation, but was not ELCA-related. There were no in-hospital MACE. Conclusions: ELCA appears to be a safe method with acceptable long-term results for the management of recurrent ISR. abstract_id: PUBMED:32611318 Contribution of ESC DAPT guideline-endorsed high thrombotic risk features to long-term clinical outcomes among patients with and without high bleeding risk after PCI. Background: Whether the underlying risk of high bleeding risk (HBR) influences the relationship of high thrombotic risk (HTR) features with adverse events after drug-eluting stent implantation remains unclear. The purpose of this study was to evaluate (1) the prognostic effect of ESC guideline-endorsed HTR features on long-term clinical outcomes and (2) whether the outcomes of HTR versus non-HTR features vary by HBR status. 
Methods: Ten thousand one hundred sixty-seven consecutive patients who underwent percutaneous coronary intervention between January 2013 and December 2013 were prospectively enrolled in the Fuwai PCI Registry. Patients at HTR were defined by: diffuse multivessel disease in diabetic patients, chronic kidney disease, at least three stents implanted, at least three stent lesions treated, bifurcation with two stents implanted, total stent length > 60 mm, or treatment of chronic total occlusion. The definition of HBR was based on the Academic Research Consortium for HBR criteria. The primary ischemic outcome was major adverse cardiac event (MACE), a composite of cardiac death, myocardial infarction, target vessel revascularization and stent thrombosis. The primary bleeding outcome was clinically relevant bleeding, defined according to Bleeding Academic Research Consortium (BARC) type 2, 3 or 5 bleeding. Results: With a 2.4-year median follow-up, 4430 patients (43.6%) having HTR experienced a significantly higher risk of MACE (adjusted hazard ratio [HR]: 1.56, 95% confidence interval [CI]: 1.34-1.82; P < 0.001) and of the device-oriented composite endpoint (composite of cardiac death, target-vessel MI, and target lesion revascularization) (adjusted HR: 1.52 [1.27-1.83]; P < 0.001), compared with those without HTR. The risk of clinically relevant bleeding did not differ between groups (adjusted HR: 0.85 [0.66-1.08]; P = 0.174). Associations between HTR and adverse events were similar in HBR and non-HBR groups, without evidence of interaction (all P for interaction > 0.05); however, adverse event rates were highest among subjects with both HTR and HBR. Conclusions: ESC guideline-endorsed HTR was associated with a significantly increased risk of MACE without any significant differences in clinically relevant bleeding. The presence of HBR does not emerge as a modifier of cardiovascular risk for patients at HTR, suggesting that more potent and longer antiplatelet therapy may be beneficial for this patient population. abstract_id: PUBMED:27821100 Exploratory study into the awareness of heart diseases among Emirati women (UAE) and their health seeking behaviour - a qualitative study. Background: Cardiovascular diseases were the leading cause of death in women in the United Arab Emirates (UAE) in 2010. The UAE is expected to experience a tripling of heart diseases in the next two decades as risk factors for heart diseases increase. Research shows that first-year survival rates of younger women suffering from a heart attack are lower than in men. Women present with a wider range of symptoms for heart diseases than men; non-recognition of atypical symptoms may explain the delay in seeking treatment and poor prognosis following heart diseases in women. No known study on awareness of heart diseases among women has been carried out in the Middle Eastern region. Methods: Social constructionist and interpretivist epistemological approaches have been considered in this qualitative study to explore the awareness of heart diseases and the health seeking behavior of Emirati women. Convenience sampling was used to recruit 41 Emirati women. Three focus groups and six in-depth semi-structured interviews were conducted to obtain data. Thematic content analysis was applied to the data following transcription and translation of recordings. Results: Emirati women had limited knowledge on heart diseases.
Women were generally unaware of the atypical symptoms commonly experienced by women; however, they identified most risk factors associated with heart diseases. Lack of awareness of disease severity and symptoms, sociocultural influences, and distrust in the healthcare system were considered the main barriers to seeking prompt treatment. Conclusions: This study clearly identified gaps and inaccuracies in knowledge of heart diseases, which could contribute to delayed health-seeking action and possibly poorer prognosis among Emirati women. The absence of initiatives to educate women on cardiovascular diseases in the UAE has erroneously deemed it a less serious concern among Emirati women. The findings from this study provide clear indications of the need to increase accountability of the healthcare system and to develop culturally relevant, gender-specific, age-focused, heart-disease-related public health awareness campaigns in the UAE. Answer: The survey of 80 consecutive patients following elective percutaneous coronary intervention (PCI) revealed that patient awareness of stent type, expected cardiovascular outcomes, and recognition of myocardial infarction (MI) symptoms post-PCI is low in the real-world setting. Specifically, 45% of patients were unaware of the type of stent they received (bare metal or drug-eluting), and while the majority understood they remained at risk for MI despite the procedure, 42.5% did not recall receiving specific education on how to recognize symptoms of an MI during their hospital stay. This suggests that there is a missed educational opportunity to improve patient understanding and potentially outcomes post-PCI (PUBMED:21114531). Moreover, the impact of diabetes mellitus (DM) on clinical outcomes after PCI for in-stent restenosis (ISR) was investigated, showing that patients with DM had a higher rate of major adverse cardiac events (MACE) at 1-year follow-up. However, after adjusting for baseline characteristics, this increased risk was no longer significant, indicating the importance of considering comorbidities in patient education and management strategies (PUBMED:38360097). Additionally, a study among female employees at Taibah University highlighted limited knowledge and awareness of cardiovascular disease (CVD) risk, with a mean knowledge score of 8.6, which is considered low. This underscores the need for educational programs to enhance awareness of risk factors and prevention of CVD (PUBMED:34671586). In Botswana, a cross-sectional study found disparities and similarities in knowledge of MI symptoms and risk factors among the general public and outpatients based on age and lifestyle behaviors. The study called for urgent educational campaigns on awareness and knowledge of MI, suggesting that strategies should be tailored based on age and lifestyle behavior (PUBMED:38439021). These findings collectively indicate that there is a significant gap in patient education regarding the type of stent received, the risk of cardiac events, and the symptoms of MI among PCI patients. Addressing this gap through systematic and tailored educational interventions could potentially improve patient outcomes and reduce the risk of future cardiac events.
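A methodological point in the record above (PUBMED:38360097) is the gap between unadjusted and adjusted hazard ratios: the MACE excess in diabetic patients disappeared after covariate adjustment. The sketch below reproduces that pattern on synthetic data with the lifelines Cox model, where diabetes has no direct effect on hazard but is correlated with age and kidney disease; all variable names and effect sizes here are invented for illustration, not taken from the study.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
diabetes = rng.integers(0, 2, n)
# Confounding: diabetic patients are older and more often have kidney disease.
age = 60 + 8 * diabetes + rng.normal(0, 9, n)
ckd = (rng.random(n) < 0.15 + 0.10 * diabetes).astype(int)
# Hazard driven by age and CKD only, with no direct diabetes effect.
hazard = 0.05 * np.exp(0.04 * (age - 65) + 0.7 * ckd)
time = rng.exponential(1 / hazard)
event = (time < 1.0).astype(int)   # administrative censoring at 1-year follow-up
time = np.minimum(time, 1.0)
df = pd.DataFrame({"time": time, "event": event,
                   "diabetes": diabetes, "age": age, "ckd": ckd})

unadj = CoxPHFitter().fit(df[["time", "event", "diabetes"]], "time", "event")
adj = CoxPHFitter().fit(df, "time", "event")
print("unadjusted HR:", round(unadj.hazard_ratios_["diabetes"], 2))
print("adjusted HR:  ", round(adj.hazard_ratios_["diabetes"], 2))
```

The unadjusted HR for diabetes comes out well above 1 while the adjusted HR sits near 1, mirroring the registry finding that baseline characteristics, not DM itself, carried the excess risk.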
Instruction: Are nuclear texture features a suitable tool for predicting non-organ-confined prostate cancer? Abstracts: abstract_id: PUBMED:10379797 Are nuclear texture features a suitable tool for predicting non-organ-confined prostate cancer? Purpose: We investigated the possibility of determining organ confinement of prostate cancer using multiple nuclear texture features determined by fully automated high resolution image analysis combined with preoperative serum PSA levels. Materials And Methods: The study population consisted of 145 patients (61 organ-confined and 84 non-organ-confined cases). Nuclear texture features were determined using single cell preparations of radical prostatectomy specimens. Nuclear texture features were extracted and analyzed by multivariate logistic regression analysis in order to build a classifier for distinguishing between organ-confined and non-organ-confined tumors. The classifier was designed in a cell-by-cell model and tested in a case-by-case analysis. Results: The predictive probability of the trained classifier in the cell-by-cell analysis had a sensitivity of 63%, a specificity of 53%, a positive predictive value of 75%, a negative predictive value of 38%, and an area under the ROC curve of 0.58. In the case-by-case analysis the sensitivity was 70%, the specificity was 54%, positive predictive value 78%, negative predictive value 74%, area under the ROC curve 0.62. When preoperative PSA was included in the algorithm, sensitivity rose to 80%, specificity to 60%, the positive predictive value to 79%, the negative predictive value to 52% and the area under the ROC curve to 0.70. Conclusions: In contrast to former studies using tissue sections, our results suggest that nuclear texture features extracted from single cell preparations cannot be used as a reliable parameter for the determination of organ confinement in prostatic adenocarcinomas. abstract_id: PUBMED:37483849 The association of the type and number of D'Amico high-risk criteria with rates of pathologically non-organ-confined prostate cancer. Introduction: The aim of this study was to assess the association of the type and number of D'Amico high-risk criteria (DHRCs) with rates of pathologically non-organ-confined (NOC) prostate cancer in patients treated with radical prostatectomy (RP) and pelvic lymphadenectomy (PLND). Material And Methods: In the Surveillance, Epidemiology, and End Results database (2004-2016), we identified 12,961 RP and PLND patients with at least one DHRC. We relied on descriptive statistics and multivariable logistic regression models. Results: Of 12,961 patients, 6135 (47%) exclusively harboured biopsy Gleason score (GS) 8-10, 3526 (27%) had clinical stage ≥T2c, and 1234 (9.5%) had prostate-specific antigen (PSA) >20 ng/mL. Only 1886 (15%) harboured any combination of 2 DHRCs. Finally, all 3 DHRCs were present in 180 (1.4%) patients. NOC rates increased from 32% for clinical T stage ≥T2c to 49% for either GS 8-10 only or PSA >20 ng/mL only, to 66-68% for any combination of 2 DHRCs, and to 84% for all 3 DHRCs, which resulted in multivariable logistic regression ORs of 1.00, 2.01 (95% CI 1.85-2.19; p <0.001), 4.16 (95% CI 3.69-4.68; p <0.001), and 10.83 (95% CI 7.35-16.52; p <0.001), respectively. Conclusions: Our study indicates a stimulus-response effect according to the type and number of DHRCs. Hence, a formal risk-stratification within high-risk prostate cancer patients should be considered in clinical decision-making.
abstract_id: PUBMED:35188982 Non-organ confined stage and upgrading rates in exclusive PSA high-risk prostate cancer patients. Background: The pathological stage of prostate cancer with high-risk prostate-specific antigen (PSA) levels, but otherwise favorable and/or intermediate risk characteristics (clinical T-stage, Gleason Grade group at biopsy [B-GGG]) is unknown. We hypothesized that a considerable proportion of such patients will exhibit clinically meaningful GGG upgrading or non-organ confined (NOC) stage at radical prostatectomy (RP). Materials And Methods: Within the Surveillance, Epidemiology, and End Results database (2010-2015) we identified RP patients with cT1c-stage and B-GGG1, B-GGG2, or B-GGG3 and PSA 20-50 ng/ml. Rates of GGG4 or GGG5 and/or rates of NOC stage (≥ pT3 and/or pN1) were analyzed. Subsequently, separate univariable and multivariable logistic regression models tested for predictors of NOC stage and upgrading at RP. Results: Of 486 assessable patients, 134 (28%) exhibited B-GGG1, 209 (43%) B-GGG2, and 143 (29%) B-GGG3, respectively. The overall upgrading and NOC rates were 11% and 51%, for a combined rate of upgrading and/or NOC stage of 53%. In multivariable logistic regression models predicting upgrading, only B-GGG3 was an independent predictor (odds ratio [OR]: 5.29; 95% confidence interval [CI]: 2.21-14.19; p < 0.001). Conversely, 33%-66% of positive biopsy cores (OR: 2.36; 95% CI: 1.42-3.95; p = 0.001) and >66% of positive biopsy cores (OR: 4.85; 95% CI: 2.84-8.42; p < 0.001), as well as B-GGG2 and B-GGG3, were independent predictors for NOC stage (all p ≤ 0.001). Conclusions: In cT1c-stage patients with a high-risk baseline PSA but low- to intermediate-risk B-GGG, the rate of upgrading to GGG4 or GGG5 is low (11%). However, NOC stage is found in the majority (51%) and can be independently predicted with the percentage of positive cores at biopsy and B-GGG. abstract_id: PUBMED:31817111 Multi-Features Classification of Prostate Carcinoma Observed in Histological Sections: Analysis of Wavelet-Based Texture and Colour Features. Microscopic biopsy images are coloured in nature because pathologists use the haematoxylin and eosin chemical colour dyes for biopsy examinations. In this study, biopsy images are used for histological grading and the analysis of benign and malignant prostate tissues. The following PCa grades are analysed in the present study: benign, grade 3, grade 4, and grade 5. Biopsy imaging has become increasingly important for the clinical assessment of PCa. In order to analyse and classify the histological grades of prostate carcinomas, pixel-based colour moment descriptor (PCMD) and gray-level co-occurrence matrix (GLCM) methods were used to extract the most significant features for multilayer perceptron (MLP) neural network classification. Haar wavelet transformation was carried out to extract GLCM texture features, and colour features were extracted from RGB (red/green/blue) colour images of prostate tissues (a minimal code sketch of this wavelet-GLCM pipeline is given after this question's answer). The MANOVA statistical test was performed to select significant features based on F-values and P-values using the R programming language. We obtained an average highest accuracy of 92.7% using level-1 wavelet texture and colour features. The MLP classifier performed well, and our study shows promising results based on multi-feature classification of histological sections of prostate carcinomas. abstract_id: PUBMED:32015608 The role of robot-assisted radical prostatectomy in high-risk organ-confined prostate cancer.
The traditional open retropubic radical prostatectomy has an established role in the treatment of prostate cancer. However, it is well known to be a morbid procedure with a high complication rate. This reputation prevented its use on a large scale for high-risk prostate cancer. Utilizing the da Vinci® to perform radical prostatectomy decreased the morbidity of the procedure. Since the introduction of robotic prostatectomy, there have been hot debates on its role in the treatment of high-risk disease. In this article we reviewed the current evidence on utilizing the surgical system in treating high-risk organ-confined prostate cancer. abstract_id: PUBMED:10751864 Human glandular kallikrein 2: a potential serum marker for predicting the organ confined versus non-organ confined growth of prostate cancer. Purpose: We measured serum levels of human glandular kallikrein 2 (hK2) in patients with prostate cancer treated with radical retropubic prostatectomy for clinically localized prostate cancer to determine whether preoperative hK2 levels discriminate stage pT2a/b from pathological stage T3a or greater cancer. This finding would help to predict preoperatively the organ-confined versus non-organ-confined growth of prostate cancer. Materials And Methods: A total of 68 consecutive men underwent radical retropubic prostatectomy for clinically localized prostate cancer. Serum was obtained 1 day preoperatively before prostatic manipulation. hK2, and total and free prostate specific antigen (PSA) were measured using immunofluorometric assays. Mean, median and range of hK2, total and free PSA, and the ratio of free-to-total PSA (percent free PSA) were calculated. Each analyte or combination of analytes was evaluated to determine whether it significantly contributed to enhancing the discrimination of organ-confined from non-organ-confined cancer. We calculated the statistical significance of observed differences using the Mann-Whitney U and Kruskal-Wallis tests. Sensitivity and specificity calculations were performed for hK2, total PSA and the algorithm (hK2) x (total PSA/free PSA), in addition to receiver operating characteristic curves and the respective areas under the curves. Multivariate logistic regression analysis was done for hK2, and total and free PSA. Results: Disease was organ and non-organ (extraprostatic extension) confined in 38 and 30 men, respectively. In organ-confined cancer mean hK2 was significantly lower than in non-organ-confined cancer (0.09 ng/ml, range <0.03 to 0.23 versus 0.30, range 0.04 to 0.94, p <0.0001). In addition, there was significantly higher free and total but not percent free PSA in non-organ-confined than in organ-confined cases. There were also statistically significant differences in hK2, free PSA and total PSA at each pathological disease stage (p <0.001, <0.01 and <0.05, respectively). Sensitivity for detecting organ-confined disease was 37% at 100% specificity (correct identification of all non-organ-confined cancer) using hK2 measurements, compared with a sensitivity of 14% for total PSA. At a specificity of 95%, sensitivity was 40% for hK2 versus 23% for total PSA, which was a statistically significant gain in sensitivity (p <0.05). Receiver operating characteristic curves demonstrated that hK2 had the largest area under the curve, followed by the algorithm (hK2) x (total PSA/free PSA), and total PSA (0.76, 0.75 and 0.72, respectively). However, none of the area under the curve differences was statistically significant.
Conclusions: Compared with total and free PSA, hK2 testing improved the preoperative evaluation of patients who underwent radical retropubic prostatectomy owing to its superior discrimination of organ-confined from non-organ-confined cancer. abstract_id: PUBMED:34973856 Oncologic outcomes of organ-confined Gleason grade group 4-5 prostate cancer after radical prostatectomy. Background: Organ-confined prostate cancer (CaP) at radical prostatectomy (RP) is associated with good long-term outcomes. However, information for aggressive Gleason organ-confined CaP is scant. The aim was to investigate the impact of Gleason grade group (GG) 4-5 on long-term oncologic outcomes after RP. Methods: Within a high-volume center database we identified patients who harbored organ-confined CaP (pT2) at RP between 1992 and 2017. Only patients with negative surgical margins, without lymph node invasion and without neo- and/or adjuvant androgen deprivation therapy and/or adjuvant radiotherapy were included. Patients with GG1 were excluded. Kaplan-Meier analyses and Cox regression models tested the effect of GG4 and GG5 on biochemical recurrence-free survival (BFS), metastasis-free survival (MFS), overall survival (OS) and cancer-specific mortality (CSM)-free survival. Results And Limitations: Of 10,855 identified pT2 patients, 0.7% (n=81) and 1.1% (n=114) harbored GG4 and GG5, respectively. At 10 years after RP, BFS, MFS, OS and CSM-free rates were 80.3% vs. 68.6% vs. 55.4% (P<0.001), 96.7% vs. 89.9% vs. 83.4% (P<0.001), 93.2% vs. 78.3% vs. 72.6% (P<0.001) and 99.3% vs. 98.0% vs. 82.7% (P<0.001) for GG2 and GG3 vs. GG4 vs. GG5, respectively. In multivariable Cox regression models, GG5 represented an independent predictor for biochemical recurrence (hazard ratio [HR] 3.00, P<0.001), metastasis (HR 5.01, P<0.001), death (HR 2.72, P<0.01) and cancer-specific death (HR 30.1, P<0.001). Conversely, GG4 represented an independent predictor for death (HR 2.10, P=0.04) and cancer-specific death (HR 6.09, P=0.01) but not for biochemical recurrence and metastasis. Conclusion: GG4/5 in organ-confined CaP is rare but is associated with worse oncologic outcomes after RP, namely biochemical recurrence, metastasis, death and cancer-specific death. Patients with organ-confined GG4/5 and negative margins should be closely followed and may be candidates for risk stratification by genomic markers. abstract_id: PUBMED:37783172 Differences in long-term continence rates between prostate cancer patients with extraprostatic vs. organ-confined disease undergoing robotic-assisted radical prostatectomy: An observational study. Objectives: Within the tertiary-care database, the authors tested for differences in long-term continence rates (≥ 12 months) between prostate cancer patients with extraprostatic vs. organ-confined disease who underwent Robotic-Assisted Radical Prostatectomy (RARP). Method: In the institutional tertiary-care database the authors identified prostate cancer patients who underwent RARP between 01/2014 and 01/2021. The cohort was divided into two groups based on tumor extension in the final RARP specimen: patients with extraprostatic (pT3/4) vs. organ-confined (pT2) disease. Additionally, the authors conducted subgroup analyses within both the extraprostatic and organ-confined disease groups to compare continence rates before and after the implementation of the new surgical technique, which included Full Functional-Length Urethra preservation (FFLU) and Neurovascular Structure-Adjacent Frozen-Section Examination (NeuroSAFE).
Multivariable logistic regression models addressing long-term continence were used. Results: Overall, the authors identified 201 study patients, of whom 75 (37%) exhibited extraprostatic and 126 (63%) organ-confined disease. There was no significant difference in long-term continence rates between patients with extraprostatic and organ-confined disease (77% vs. 83%; p = 0.3). Following the implementation of FFLU + NeuroSAFE, there was an overall improvement in continence from 67% to 89% (Δ = 22%; p < 0.001). No difference in the magnitude of improved continence rates between extraprostatic vs. organ-confined disease was observed (Δ = 22% vs. Δ = 20%). In multivariable logistic regression models, no difference between extraprostatic vs. organ-confined disease in long-term continence was observed (Odds Ratio: 0.91; p = 0.85). Conclusion: In this tertiary-based institutional study, patients with extraprostatic and organ-confined prostate cancer exhibited comparable long-term continence rates. abstract_id: PUBMED:20514082 Predicting non-organ-confined prostate cancer in men diagnosed after 2000. The objective of this study was to preoperatively predict non-organ-confined disease in patients considering radical prostatectomy. To account for the stage migration seen in prostate cancer, we included only those patients who underwent prostatectomy after the year 2000. Information on a cohort of 1895 patients who underwent radical prostatectomy from 2000 to 2008 was retrieved from the Duke Prostate Center database. Race (African American, non-African American), body mass index, age at surgery, PSA, biopsy Gleason sum (<7, 7 and >7) and clinical tumor stage (cT1, cT2/3) were analyzed by univariate analysis followed by logistic regression analysis. The Duke Interactive Clinical Equation for staging (DICE-S score) was calculated from the logistic regression model. The model was then internally validated using a bootstrapping technique. Patients with biopsy Gleason sums of 7 and >7 were more likely to have non-organ-confined disease than those with <7 (OR=2.97, Gleason sum=7; OR=3.25, Gleason sum>7). Clinical tumor stage cT2/3 predicted non-organ-confined disease (OR=1.58). Older age was associated with non-organ-confined disease (OR=1.02), as was greater PSA (OR=1.12). DICE-S equation: x = ln(p/(1-p)) = -3.627 + 0.019(age) + 0.109(PSA) + 1.087(bGleason=7) + 1.180(bGleason>7) + 0.459(clinical T stage >T1), where p = e^x/(1 + e^x); this equation is evaluated in the code sketch after the answer below. A concordance index (prediction accuracy) of 0.73 was reached on internal validation. Using the DICE-S score, age, PSA, biopsy Gleason sum and clinical tumor stage, we can predict non-organ-confined disease in radical prostatectomy with acceptable accuracy. Preoperative information on disease stage may aid in treatment decisions and surgical approach. abstract_id: PUBMED:30205334 Computer-aided classification of prostate cancer grade groups from MRI images using texture features and stacked sparse autoencoder. A novel method to determine the Grade Group (GG) in prostate cancer (PCa) using multi-parametric magnetic resonance imaging (mpMRI) biomarkers is investigated in this paper. In this method, high-level features are extracted from hand-crafted texture features using a deep network of stacked sparse autoencoders (SSAE), which are then classified using a softmax classifier (SMC). Transaxial T2 Weighted (T2W), Apparent Diffusion Coefficient (ADC) and high B-Value Diffusion-Weighted (BVAL) images obtained from the PROSTATEx-2 2017 challenge dataset are used in this technique.
The method was evaluated on the challenge dataset, composed of a training set of 112 lesions and a test set of 70 lesions. It achieved a quadratic-weighted kappa score of 0.2772 on evaluation using the test dataset of the challenge. It also reached a Positive Predictive Value (PPV) of 80% in predicting PCa with GG > 1. The method achieved first place in the challenge, ahead of 43 methods submitted by 21 groups. A 3-fold cross-validation using the training data of the challenge was further performed, and the method achieved a quadratic-weighted kappa score of 0.2326 and a Positive Predictive Value (PPV) of 80.26% in predicting PCa with GG > 1. Even though the training dataset was highly imbalanced, the method achieved a fair kappa score. As one of the first methods to attempt classifying prostate cancer into 5 grade groups from MRI images, it could serve as a base method for further investigations and improvements. Answer: Based on the provided abstracts, nuclear texture features do not appear to be a reliable tool for predicting non-organ-confined prostate cancer. Specifically, a study investigating the use of multiple nuclear texture features determined by fully automated high-resolution image analysis, combined with preoperative serum PSA levels, found that these features extracted from single cell preparations could not be used as a reliable parameter for the determination of organ confinement in prostatic adenocarcinomas (PUBMED:10379797). The predictive probability of the trained classifier using these features showed limited sensitivity, specificity, and area under the ROC curve, indicating that nuclear texture features alone may not provide sufficient accuracy for predicting non-organ-confined prostate cancer. Other studies have focused on different parameters and methods for predicting the organ confinement of prostate cancer. For example, the type and number of D'Amico high-risk criteria (DHRCs) have been associated with rates of pathologically non-organ-confined prostate cancer, suggesting that a formal risk stratification within high-risk prostate cancer patients should be considered in clinical decision-making (PUBMED:37483849). Additionally, the pathological stage of prostate cancer with high-risk PSA levels but otherwise favorable or intermediate risk characteristics can be predicted by the percentage of positive biopsy cores and biopsy Gleason Grade group (PUBMED:35188982). Other methods, such as the use of computer-aided classification from MRI images using texture features and stacked sparse autoencoder, have been explored for determining prostate cancer grade groups (PUBMED:30205334), and human glandular kallikrein 2 (hK2) has been studied as a potential serum marker for predicting the organ confined versus non-organ confined growth of prostate cancer (PUBMED:10751864). In summary, while nuclear texture features alone may not be a suitable tool for predicting non-organ-confined prostate cancer, combining them with other clinical parameters or using alternative predictive markers and methods may improve the accuracy of such predictions.
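For the wavelet-texture pipeline described in PUBMED:31817111 above, the following is a minimal sketch of a level-1 Haar decomposition followed by GLCM texture descriptors. The abstract does not specify co-occurrence distances, angles, grey-level quantisation, rescaling, or which wavelet band the features were taken from, so those choices and the toy input below are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def wavelet_glcm_features(gray_image):
    """Level-1 Haar wavelet decomposition, then GLCM descriptors on the
    approximation band (loosely following PUBMED:31817111)."""
    approx, _details = pywt.dwt2(gray_image.astype(float), "haar")
    # Rescale the approximation band to 8-bit grey levels for the GLCM.
    approx = np.uint8(255 * (approx - approx.min()) / (np.ptp(approx) + 1e-9))
    glcm = graycomatrix(approx, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Toy 64x64 patch standing in for a stained biopsy image.
rng = np.random.default_rng(0)
print(wavelet_glcm_features(rng.integers(0, 256, (64, 64), dtype=np.uint8)))
```

In the study itself these texture features were combined with colour moments and fed to an MLP classifier; the sketch covers only the texture half of that pipeline.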
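The DICE-S logistic model quoted in PUBMED:20514082 above can be evaluated directly from its published coefficients. In this sketch the function name, the boolean encoding of the Gleason and clinical-stage indicator terms, and the example patient are our own illustrative assumptions; only the coefficients come from the abstract.

```python
import math

def dice_s_probability(age, psa, biopsy_gleason_sum, clinical_stage_above_t1):
    """Predicted probability of non-organ-confined disease from the
    published DICE-S equation; Gleason sum <7 and stage cT1 are taken
    as the reference categories."""
    x = (-3.627
         + 0.019 * age
         + 0.109 * psa
         + 1.087 * (biopsy_gleason_sum == 7)
         + 1.180 * (biopsy_gleason_sum > 7)
         + 0.459 * clinical_stage_above_t1)
    return math.exp(x) / (1.0 + math.exp(x))

# Hypothetical patient: 65 years, PSA 8 ng/ml, biopsy Gleason sum 7,
# clinical stage cT2 -> predicted probability of about 0.51.
print(dice_s_probability(65, 8.0, 7, True))
```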
Instruction: Family planning services in developing countries: an opportunity to treat asymptomatic and unrecognised genital tract infections? Abstracts: abstract_id: PUBMED:9582484 Family planning services in developing countries: an opportunity to treat asymptomatic and unrecognised genital tract infections? A single-center experience. Objective: To determine the prevalence of asymptomatic and unrecognised genital tract infections among women attending a family planning clinic in rural South Africa. Methods: 189 consecutive women had genital samples taken to diagnose infection with Neisseria gonorrhoeae, Chlamydia trachomatis, Trichomonas vaginalis, Candida albicans, Treponema pallidum, and HIV, and to diagnose bacterial vaginosis. Results: Mean age was 25 years; 155 (82%) were unmarried, 156 (83%) were currently using contraception, and 41 (22%) reported having an STD treated in the preceding 12 months. Although none volunteered abnormal urogenital symptoms, 74 (39%) had at least one elicited by direct questioning. 119 women (63%) had at least one genital infection: N gonorrhoeae (8; 4%), C trachomatis (14; 8%), T vaginalis (26; 14%), C albicans (56; 30%), active syphilis (15; 8%), HIV (44; 24%), and bacterial vaginosis (29; 15%). 49 women (26%) had multiple infections. Most infections (71; 60%) were asymptomatic. Symptomatic women failed to recognise and report their symptoms, and routine services failed to detect the infections. Conclusion: Prevalence of genital tract infection is high among these women, most infections are asymptomatic, and symptomatic infections are frequently not recognised. Women attending family planning clinics in such settings should be screened for syphilis and offered testing for HIV infection. Strategies to detect and treat other genital infections need to be developed. abstract_id: PUBMED:9028406 Providing accessible health care for adolescents with sexually transmitted disease. Provision of sexually transmitted disease (STD) care for sexually active adolescents has been neglected in developing countries, although this is changing. Available evidence indicates that STDs are a serious problem among adolescents (10-19 years), especially in rural areas where services are limited for any age group. Curative care is hampered by the inadequacy of the syndromic approach for identifying adolescents with asymptomatic infections, especially Chlamydia trachomatis. There is an urgent need to assess STD interventions for adolescents in controlled studies, with numbers and follow-up sufficient to monitor changes in STD markers. Many programmes report increased uptake of condoms by youth but have been unable to demonstrate an effect on STD/HIV rates. It is unlikely that any one approach to adolescent STD services will be feasible, hence the importance of understanding the benefits and limitations of each approach. abstract_id: PUBMED:10150523 Relationship between AIDS and family planning programmes: a rationale for developing integrated reproductive health services. This paper examines the potential links between family planning and maternal and child health programmes and the emerging needs of the global AIDS epidemic, and assesses the feasibility of organizing comprehensive reproductive health programmes in developing countries to include the prevention and control of reproductive tract infections, including sexually transmitted infections as well as HIV/AIDS.
The paper discusses arguments opposing the integration of programmes against sexually transmitted infections within family planning and MCH programmes, as well as the programmatic and epidemiological reasons for considering such integration. It is important to determine what elements can be realistically integrated within programmes of varying organizational capacity without compromising their effectiveness. abstract_id: PUBMED:1489243 Prevention of PID: a challenge for the health service. The control of sexually transmitted diseases (STD) will reduce the prevalence of pelvic inflammatory disease (PID). In most developing countries, treatment services are limited, coverage of the infected female population is inadequate and women seeking treatment are likely to be mismanaged. Family planning clinics do not usually provide routine screening for non-pregnant women. Screening only at family planning clinics would result in failure to treat some high-risk individuals, especially unmarried adolescents. The use of simplified protocols in the community has been recommended, but these may underestimate the problems of contact tracing, provide no systematic screening and induce passivity in the patient. For pregnant women, STD control has targeted syphilis, which does not cause PID. An integrated service for the management of reproductive health is required, and the development of women's clinics is suggested, using well-trained nurses and affordable technologies. abstract_id: PUBMED:9795576 Issues in the management of sexually transmitted diseases in Papua New Guinea. This paper outlines three important issues in the clinical management of sexually transmitted diseases (STDs) in Papua New Guinea which have, until now, gone unrecognized or been neglected. Suggestions for possible solutions are made. The high prevalence of both chlamydial and trichomonal infections in women cannot be ignored. Both of these infections have been shown to increase the transmission of HIV. The current algorithm for the treatment of vaginal discharges does not include treatment for trichomonal infection in the first instance, yet trichomoniasis has been shown to be the most common STD in community studies both here and elsewhere. Trichomoniasis is usually asymptomatic in men, but still increases the risk of HIV transmission; furthermore, it causes illness in their female partners and thus needs to be treated. The current recommended regimens for the treatment of gonococcal and chlamydial infection are complex due to the number of drugs recommended for gonorrhoea to combat the problem of drug resistance, and the length of therapy for chlamydia. Compliance with such a regimen is likely to be poor, particularly in asymptomatic partners. We need to consider the relative advantages provided by a drug which could be given as a single oral dose for chlamydia, and perhaps for both infections. Azithromycin is one possibility, as it has been shown to be effective for chlamydial infection in numerous studies, and has been found satisfactory for gonorrhoea where local isolates were susceptible. Testing of a small number of isolates from Papua New Guinea has shown that azithromycin may be suitable for use here, but further susceptibility testing needs to be performed. Utilization of services for STDs, particularly by women, is extremely low.
This is due to a combination of factors involving limited knowledge of symptomatology and its significance, the asymptomatic nature of many infections, the structure of the services, health worker behaviour, and social attitudes. To address these issues we must make modifications to STD service provision, as well as provide widespread information about the potentially serious consequences of contracting STDs, including both infertility and AIDS. Possible modifications to the services are discussed, and include making routine screening available for women through currently existing services such as family planning and antenatal clinics and considering the possibility of establishing Women's Health Clinics which would provide all primary reproductive health services in an integrated manner. abstract_id: PUBMED:8874461 Prevention of HIV infection in developing countries. The HIV/AIDS epidemic continues to spread rapidly in developing countries. Heterosexual transmission accounts for almost three-quarters of infections. Current strategies have been effective in the prevention of HIV spread within certain groups but they have had limited impact on the general spread of the epidemic. There is a need to complement these strategies with approaches that will influence the social and environmental determinants of risk to enable those vulnerable to infection to protect themselves. abstract_id: PUBMED:10907280 Implementing the integration of component services for reproductive health. In the wake of the 1994 International Conference on Population and Development in Cairo, considerable activity has occurred both in national policymaking for reproductive health and in research on the implementation of the Cairo Program of Action. This report considers how effectively a key component of the Cairo agenda--integration of the management of sexually transmitted infections, including human immunodeficiency virus, with maternal and child health-family planning services--has been implemented. Quantitative and qualitative data are used to illuminate the difficulties faced by implementers of reproductive health programs in Ghana, Kenya, South Africa, and Zambia. In these countries, clear evidence is found of a critical need to reexamine the continuing focus on family planning services and the nature of the processes by which managers implement reproductive health policies. Implications of findings for policy and program direction are discussed. abstract_id: PUBMED:12319044 STD services in family planning programs. N/A abstract_id: PUBMED:12290467 Good reproductive health involves many services. N/A abstract_id: PUBMED:12288127 Reproductive health services. N/A Answer: Family planning services in developing countries present an opportunity to address asymptomatic and unrecognized genital tract infections. A study conducted in a rural South African family planning clinic found that a high prevalence of genital tract infections existed among the women attending the clinic. Despite none of the women volunteering information about abnormal urogenital symptoms, direct questioning revealed that 39% had at least one symptom. The study diagnosed infections with Neisseria gonorrhoeae, Chlamydia trachomatis, Trichomonas vaginalis, Candida albicans, Treponema pallidum, HIV, and bacterial vaginosis, with 63% of the women having at least one genital infection. Notably, 60% of these infections were asymptomatic, and symptomatic infections were frequently not recognized by the women themselves. 
The study concluded that women attending family planning clinics in such settings should be screened for syphilis and offered testing for HIV infection, and that strategies to detect and treat other genital infections need to be developed (PUBMED:9582484). The provision of sexually transmitted disease (STD) care for adolescents, a group that has been neglected in developing countries, is also a challenge. Adolescents, particularly in rural areas, face serious problems with STDs, and the syndromic approach to care is inadequate for identifying asymptomatic infections, such as those caused by Chlamydia trachomatis. There is an urgent need to assess STD interventions for adolescents in controlled studies (PUBMED:9028406). The relationship between AIDS and family planning programs suggests a rationale for developing integrated reproductive health services that include the prevention and control of reproductive tract infections, including sexually transmitted infections and HIV/AIDS. However, there are arguments both for and against the integration of programs against sexually transmitted infections within family planning and maternal and child health (MCH) programs. It is important to determine what elements can be realistically integrated without compromising the effectiveness of the programs (PUBMED:10150523). In summary, family planning services in developing countries offer a critical opportunity to treat asymptomatic and unrecognized genital tract infections, and there is a need for integrated approaches to reproductive health that can address the broader spectrum of STDs and HIV/AIDS alongside family planning services.
Instruction: Are you now a good surgeon? Abstracts: abstract_id: PUBMED:28740949 Good surgeon: A search for meaning. The art and philosophy of surgery are not as often discussed as scientific discoveries and technological advances in the modern era of surgery. Although these are difficult to teach and pass on to the next generations of surgeons, they are no less important for training good surgeons and maintaining their high standards. The authors of this review and opinion article tried to define what being a good surgeon really means and to look into the subject by analysing the essential conditions for being a good surgeon and the qualities that such a specialist should possess. In addition to strong theoretical knowledge and practical skills, and among the several professional and personal characteristics described, a good surgeon is expected to have common sense. It enables a surgeon to make a sound practical judgment independent of specialized medical knowledge and training. The possible ways of developing and/or enhancing common sense during surgical training and subsequent practice require separate analysis. abstract_id: PUBMED:26645917 Obstacles and opportunities for achieving good care on the surgical ward: nurse and surgeon perspective. Aim: The purpose of this qualitative study was to explore and understand, from the perspectives of nurses and surgeons, the situations and processes in the context of surgical care that support, or are obstacles to, achieving good care. Background: Medical advances and inpatients with multiple illnesses are on the increase. In addition, a high turnover of registered nurses has been identified. This contributes to an increasingly inexperienced nursing staff. Concurrently, studies have shown that patient safety and quality of care are linked to organisational structures and staffing education levels. Method: Eight nurses and six surgeons from three hospitals were interviewed and data were analysed by systematic text condensation. Results: This identified three themes: shifting focus away from the patients, emphasising good communication, and using the competence of the team. Conclusion: This study contributes to a deeper understanding that many interruptions, insufficient communication and unused competence can be a threat to patient safety. Sweden has a high standard but this study elucidates that challenges remain to be resolved. Implications For Nursing Management: The focus on patients can increase by a balance between direct/indirect patient work and administration and by the support of clinicians using their full professional competence. abstract_id: PUBMED:29911027 Is a Surgeon-performed Ultrasound Good Enough in Diagnosing Thyroid Malignancy? Background: The widespread availability of ultrasonography has facilitated the evaluation of thyroid nodules, to differentiate between malignant and benign nodules and between metastatic and reactive lymph nodes. From the radiologists' suite, ultrasound has moved into the surgeon's office. Aim: The aim of the present study was to evaluate the relevance of surgeon-performed ultrasound (SPUS) in the diagnosis of malignancy of the thyroid. Methods: SPUS was performed for 389 consecutive patients attending the outpatient department of endocrine surgery in a tertiary care institute in Chennai. The SPUS data of 350 patients who underwent total thyroidectomy were compared with the reports of radiologist-performed ultrasonography, fine-needle aspiration cytology, and histopathology examination.
Results: SPUS ranked highest, with a Spearman's correlation of 0.886 (P < 0.0001). The sensitivity, specificity, positive predictive value, and negative predictive value of SPUS were 98.53%, 95.72%, 96.81%, and 98%, respectively (these figures are checked arithmetically in the sketch after this section's answer). Conclusion: SPUS is a very useful clinical adjunct in diagnosing malignancy of the thyroid and saves the patient time and resources in visiting another consultant. A surgeon who is more familiar with the anatomy and pathophysiology of thyroid disorders triages the nodule better. abstract_id: PUBMED:15176525 To be a good expert surgeon Needless to say, the clinical and scientific level of Japanese surgeons leads the world in the field of early gastric cancer, supported by double-contrast roentgenography, endoscopy and surgical procedures including lymph node dissection with good outcomes. The surgical skills of Japanese surgeons have been improved by watching cine or videotaped programs at academic assemblies, breaking through the old apprenticeship model. Ambitious young surgeons could freely learn excellent technical procedures from these recordings. A young surgeon must be an ever-thinking surgeon, ready to pick up any hint from the film. Even though medical examinations and treatments, including surgical procedures, are on the way to standardization through EBM, manuals, guidelines and the DPC payment system, medical doctors including surgeons must continue to think critically and seek better treatment for their patients. Since every patient differs in terms of age, sex, stage of disease, complicated conditions and social background, the treatment should differ from patient to patient, in a tailor-made fashion. Unless such progress is made, the science and art of medicine will decline. As further advice for a young surgeon from my experience: you should write an operation protocol for every procedure yourself, even if that operation is your first experience of it or you joined only as an assistant. After you have written it, have the protocol read by your instructor. You will then gain much knowledge and surgical know-how. Certainly, the book of protocols will be a valuable treasure. Furthermore, you should not aim merely to be a skillful surgeon, but to be a good surgeon performing reliable and gentle operations for the patients. abstract_id: PUBMED:31686997 The Pregnant Surgeon. The proportion of women entering medicine and surgery is increasing steadily. As more women surgeons have children during residency and practice, it is essential that we adopt a team approach to the pregnant surgeon, as a pregnancy undoubtedly impacts the team and not just the individual. Coauthored by a practicing surgeon mother of 2 (J. D.) and a postgraduate year 8 colorectal surgery fellow mother of 2 who is currently pregnant (C. H.), this article reviews the data while providing some practical advice for pregnant surgeons and those considering pregnancy, as well as their valued colleagues, mentors, and administrators. Physical challenges are discussed by trimester, as are the risks facing the pregnant surgeon, the logistics of transitioning in and out of maternity leave, and postpartum considerations. abstract_id: PUBMED:31583364 The contribution of patients' presurgery perceptions of surgeon attributes to the experience of trust and pain during third molar surgery. Background: Patients' feeling of trust in their surgeon may modulate the experience of pain during surgery. However, factors that contribute to patients' experience of trust during surgery remain underexamined.
The current study examined the contribution of patients' impressions of surgeons' warmth and competence to their experience of trust and pain during wisdom tooth extractions. Methods: Patients (N = 135, 47% female) scheduled for a wisdom tooth extraction reported their current distress and impressions of their surgeon's warmth and competence after a brief introduction to their surgeon immediately before surgery. Immediately after their surgery, patients reported their experience of trust (feeling safe and in good hands) and pain during surgery. Path analyses modeled perceptions of surgeon warmth, competence, and their interaction as predictors of patients' experiences of trust and pain during surgery. Results: Higher perceived surgeon competence, but not warmth, predicted the experience of higher trust and lower pain during surgery. Perceived competence interacted with perceived warmth such that the competence-trust relationship was only significant at moderate to high levels of perceived surgeon warmth and failed to reach significance at lower levels of perceived surgeon warmth. Conclusion: These results indicate that patients feel greater trust in surgeons who are perceived as higher in competence and warmth, underscoring the importance of impression management in surgical care. abstract_id: PUBMED:37270775 Good syndrome: a rare, unusual immunodeficiency condition Good syndrome is an infrequent and unique clinical entity of associated thymoma and immunodeficiency, first described almost 70 years ago. It is characterized by increased susceptibility to recurrent invasive bacterial and opportunistic infections as well as autoimmune and malignant diseases, with an ominous prognosis. The affected patients are mainly middle-aged persons. The most consistent immunological abnormalities are hypogammaglobulinemia and reduced/absent B cells. More recently it was classified as an acquired combined (T, B) immunodeficiency and labelled as a phenocopy. This complex immunocompromised condition can lead to heterogeneous clinical phenotypes, making the diagnosis rather challenging. The thymoma is mainly benign, and an incidental finding. Since the thymus plays a critical role in the development of the immune system, the altered tissue structure and microenvironment in thymoma can predispose to the manifestation of both immunodeficiency and autoimmunity. The etiopathogenesis of the disease is still unclear, but it is assumed that epigenetic and acquired genetic factors are largely responsible for its development. Currently there is no specific therapy for Good syndrome. In addition to thymectomy, control of infections, possibly secondary prevention, and regular immunoglobulin replacement are recommended. Orv Hetil. 2023; 164(22): 859-863. abstract_id: PUBMED:32043183 Spine surgery is kyphosing to spine surgeon. Background: Spine surgery is a demanding surgical specialty which requires surgeons to operate for hours on end, often compromising good posture. Sustained poor posture in the operating room (OR) can be the source of many adverse health effects on spine surgeons. This study analyzes the posture of a spine surgeon in different types of spine surgery cases. Methods: The posture of a surgeon was measured using the UPRIGHT Posture Training Device. The device was worn by the surgeon in the OR through a wide variety of spine surgery cases. Results: The percentages of time spent slouched while performing cervical, adult deformity, and lumbar spine surgeries were 39.9%, 58.9%, and 38.6%, respectively.
For all surgeries recorded, the percentage of time spent slouched was 41.6%. The average procedure time was 145.3 min, with adult deformity cases on average being the longest (245.6 min), followed by cervical (152.9 min) and then lumbar (122.5 min). Conclusion: Poor posture while operating is very likely to occur for many spine surgeons regardless of case type. This poor posture is maintained for long periods of time, given that the average spine surgery procedure recorded in the study was roughly 2.5 h long. Spine surgeons should be aware of the tendency for poor posture while operating, and they should try using posture-improving techniques to maintain good spine health. abstract_id: PUBMED:27845707 Good Science, Good Sense and Good Sensibilities: The Three Ss of Carol Newton. The Three Rs principle of Replacement, Reduction and Refinement developed by William M. S. Russell and Rex L. Burch in the 1950s has achieved worldwide recognition as a means of reducing the impact of science on animals and improving their welfare. However, application of the Three Rs is still far from universal, and evidence-based methods to implement the Three Rs are still lacking in many areas of laboratory animal science. The purpose of this paper is to create interest in a less well-known but equally useful principle that complements the Three Rs, proposed by the American biomathematician Carol M. Newton in the 1970s, namely the Three Ss: Good Science, Good Sense and Good Sensibilities. abstract_id: PUBMED:36714393 Optimizing Surgeon Well-Being: A Review and Synthesis of Best Practices. Through a systematic review and mixed-methods meta-synthesis of the existing literature on surgeon well-being, we sought to identify the specific elements of surgeon well-being, examine factors associated with suboptimal well-being, and highlight opportunities to promote well-being. Background: Suboptimal surgeon well-being has lasting and substantial impacts on the individual surgeon, on patients, and on society as a whole. However, most of the existing literature focuses on only 1 aspect of well-being: burnout. While undoubtedly a crucial component of overall well-being, the mere absence of burnout does not fully capture the complexities of being a surgeon. Methods: We performed a literature search within Ovid Medline, Elsevier Excerpta Medica dataBASE, EBSCOhost Cumulative Index to Nursing and Allied Health Literature, and Clarivate Web of Science from inception to May 7, 2020, in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Studies with primary data examining surgeon well-being were included. Using a predetermined instrument, data were abstracted from each study and compared using thematic analysis. Results: A total of 5369 abstracts were identified and screened, with 184 full articles (172 quantitative, 3 qualitative, 9 mixed methods) selected for analysis. Among these, 91 articles measured burnout, 82 examined career satisfaction, 95 examined work-related stressors, 44 explored relationships and families, and 85 assessed emotional and physical health. Thematic analysis revealed 4 themes: professional components, personal components, work-life balance, and impacts to well-being. Conclusions: Surgeon well-being is complex and multifaceted. This nuanced examination of surgeon well-being highlights the critical need to develop and provide more long-term support to surgeons, with interventions being tailored based on individual, institutional, and systemic factors.
Answer: As an AI language model, I am not capable of performing surgery or possessing the qualities of a surgeon. However, based on the abstracts provided, a good surgeon is defined by a combination of professional and personal characteristics, including strong theoretical knowledge, practical skills, common sense, good communication, and the ability to use the competence of the team effectively (PUBMED:28740949; PUBMED:26645917). A good surgeon also demonstrates competence and warmth, which contribute to patients' trust and can potentially reduce their experience of pain during surgery (PUBMED:31583364). Additionally, surgeon-performed ultrasound (SPUS) has been shown to be effective in diagnosing thyroid malignancy, indicating that surgeons with specific training in certain diagnostic techniques can provide high-quality care (PUBMED:29911027). The well-being of surgeons is also an important aspect of being a good surgeon, as it can impact their performance and patient care. Factors such as work-life balance, professional and personal components, and emotional and physical health are crucial for optimizing surgeon well-being (PUBMED:36714393). Moreover, maintaining good posture during surgery is important for the physical health of spine surgeons (PUBMED:32043183). In summary, being a good surgeon involves a combination of medical expertise, interpersonal skills, and self-care. It is a multifaceted role that requires continuous learning, adaptation, and attention to both the technical and human aspects of medical care.
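As a worked check of the SPUS diagnostic figures quoted above (PUBMED:29911027), predictive values follow from sensitivity, specificity and disease prevalence via Bayes' rule. The abstract does not report the malignancy prevalence among the 350 operated patients, so the value used below (about 0.57, back-solved from the published PPV) is an assumption for illustration.

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Bayes' rule relations linking test characteristics to predictive
    values at a given disease prevalence."""
    tp = sensitivity * prevalence               # true-positive fraction
    fp = (1 - specificity) * (1 - prevalence)   # false-positive fraction
    tn = specificity * (1 - prevalence)         # true-negative fraction
    fn = (1 - sensitivity) * prevalence         # false-negative fraction
    return tp / (tp + fp), tn / (tn + fn)

# Reported SPUS figures: sensitivity 98.53%, specificity 95.72%.
# An assumed malignancy prevalence of 0.57 reproduces the published
# PPV of 96.81% and NPV of 98% to within rounding.
ppv, npv = ppv_npv(0.9853, 0.9572, 0.57)
print(f"PPV = {ppv:.4f}, NPV = {npv:.4f}")
```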
Instruction: Can dietary intake influence plasma levels of amino acids in liver cirrhosis? Abstracts: abstract_id: PUBMED:11142561 Can dietary intake influence plasma levels of amino acids in liver cirrhosis? Background: Modifications in plasma amino acid patterns in cirrhotics are attributed to impaired liver function, being more evident in alcoholic than in viral cirrhosis. Aim: To evaluate whether diet influences plasma amino acid concentrations in different aetiological groups of cirrhotics. Patients: The study population comprised 40 patients with cirrhosis (25 virus- and 15 alcohol-related), all Child A, and 30 healthy subjects (controls). Method: A food frequency and quality questionnaire was utilized to determine dietary history and alcohol intake. Nutritional status was evaluated by anthropometric methods. Amino acids were determined on venous blood samples using a specific analyzer, while cysteine was evaluated by fluorescent high-performance liquid chromatography. Results: The total daily intake of calories, proteins, lipids, and carbohydrates was similar in all individuals. Food quality distinguished the cirrhotics from the controls, but not the different aetiological groups of cirrhotics. Plasma cysteine levels were significantly lower, while aromatic amino acids and methionine were significantly higher, in all cirrhotics (p<0.001 and p<0.01, respectively, versus controls). The decrease in cysteine and the increase in other amino acids were more marked in alcoholics (p<0.01). Conclusions: Ethanol intake, but not diet, further enhances the changes in plasma aromatic amino acids, methionine and cysteine induced by impaired liver function in patients with cirrhosis, suggesting a direct interference of alcohol in their metabolism. abstract_id: PUBMED:34130023 Associations between dietary amino acid intakes and blood concentration levels. Background And Aims: Emerging evidence suggests a role of amino acids (AAs) in the development of various diseases including renal failure, liver cirrhosis, diabetes and cancer. However, mechanistic pathways and the effects of dietary AA intakes on circulating levels and disease outcomes are unclear. We aimed to compare protein and AA intakes with their respective blood concentrations in the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort. Methods: Dietary protein and AA intakes were assessed via the EPIC dietary questionnaires (DQ) and 24-h dietary recalls (24-HDR). A subsample of 3768 EPIC participants who were free of cancer had blood AA concentrations measured. To investigate how circulating levels relate to their respective intakes, dietary AA intake was examined in quintiles and ANOVA tests were run. Pearson correlations were examined for continuous associations between intakes and blood concentrations. Results: Dietary AA intakes (assessed with the DQ) and blood AA concentrations were not strongly correlated (-0.15 ≤ r ≤ 0.17) and the direction of the correlations depended on AA class: weak positive correlations were found for most essential AAs (isoleucine, leucine, lysine, methionine, threonine, tryptophan, and valine) and conditionally essential AAs (arginine and tyrosine), while negative associations were found for non-essential AAs. Similar results were found when using the 24-HDR. When conducting ANOVA tests for essential AAs, higher intake quintiles were linked to higher blood AA concentrations, except for histidine and phenylalanine. For non-essential AAs and glycine, an inverse relationship was observed.
Conditionally essential AAs showed mixed results. Conclusions: Weak positive correlations and dose responses were found between most essential and conditionally essential AA intakes and blood concentrations, but not for the non-essential AAs. These results suggest that intake of dietary AA might be related to physiological AA status, particularly for the essential AAs. However, these results should be further evaluated and confirmed in large-scale prospective studies. abstract_id: PUBMED:35500655 Amino acids, ammonia, and hepatic encephalopathy. Hepatic encephalopathy (HE) is a decline in brain function arising due to liver insufficiency. The liver's diminished capacity to clear ammonia, and the subsequent accumulation of it, is highly implicated in the pathogenesis of HE. Ammonia is endogenously generated from the catabolism of amino acids derived from dietary protein intake. Therefore, a conflict arises in cirrhosis where dietary protein intake may increase ammonia and precipitate HE, and at the same time, cirrhotic patients require high daily protein intake due to altered nutrient metabolism. A nutritional solution is needed to deliver sufficient doses of protein to patients without increasing the risk of HE. In order to address this issue, this review will discuss the catabolism of individual amino acids with a special focus on ammonia-generating steps and highlight a subset of amino acids that have the potential to generate multiple equivalents of ammonia. Following this, studies investigating the effects of individual amino acids in cirrhosis on blood ammonia levels as well as the development of HE will be reviewed. abstract_id: PUBMED:17688664 Role of meal carbohydrate content for the imbalance of plasma amino acids in patients with liver cirrhosis. Background And Aim: Imbalance of circulating branched chain amino acids (BCAA) versus aromatic amino acids (AAA) and hyperinsulinemia are common metabolic alterations in patients with liver cirrhosis. The aim of this study was to characterize the effect of the carbohydrate component of a protein-rich mixed meal on postprandial plasma concentrations of 21 amino acids, insulin and C-peptide in patients with compensated liver cirrhosis. Furthermore, the effect of a dietary intervention on the metabolic alterations in cirrhotic patients was examined. Methods: Eighteen patients with cirrhosis and 12 healthy volunteers received a protein-rich meal (pork filet 200 g) with or without carbohydrates (bread 50 g, glucose 20 g). A subgroup of four cirrhotic patients received an isoenergetic (117 kJ/kg bw) carbohydrate-enriched (60%) and -restricted (20%) diet for 7 days each. Results: In the cirrhotic patients, basal plasma insulin and C-peptide concentrations were significantly elevated. The ingestion of a protein-rich meal without additional carbohydrates led to a significantly greater increase of insulin and C-peptide in the cirrhotic patients compared to controls. Postprandial increases of leucine and isoleucine were reduced, whereas those of phenylalanine were higher in cirrhotic patients. The addition of carbohydrates led to higher insulin and C-peptide plasma concentrations in cirrhotic patients. Postprandial BCAA increases were more impaired in the cirrhotic group after additional carbohydrate ingestion (46% vs 82%). After the carbohydrate-restricted diet for 7 days, BCAA plasma levels increased, but the BCAA/AAA ratio remained unaltered.
Conclusions: The carbohydrate content of a meal enhances the reduction of BCAA plasma concentrations in clinically stable cirrhotic patients. An imbalanced BCAA/AAA ratio cannot be avoided by a carbohydrate-reduced diet alone, supporting mandatory BCAA supplementation. abstract_id: PUBMED:2718770 Lipid malnutrition of patients with liver cirrhosis: effect of low intake of dietary lipid on plasma fatty acid composition. The plasma fatty acid composition of cirrhotic patients and their dietary intake of fatty acids were determined. Significantly lower plasma arachidonic, docosahexaenoic, dihomo-gamma-linolenic and eicosapentaenoic acid levels were observed in cirrhotic patients than in healthy controls. A remarkably low dietary intake of polyunsaturated fatty acids supplied from fish, vegetable oil and pulses was shown in cirrhotic patients. Positive correlations were observed between plasma arachidonic acid concentrations and the clearance rate of indocyanine green (KICG) (r = 0.826, p < 0.05) and between dihomo-gamma-linolenic acid levels and cholinesterase activities (r = 0.841, p < 0.05). From these results, we conclude that a supply of polyunsaturated fatty acids is necessary for the nutritional treatment of patients with liver cirrhosis. abstract_id: PUBMED:30292583 Dietary management and supplementation with branched-chain amino acids in cirrhosis of the liver. One of the most important characteristics of malnutrition is the loss of muscle mass and the severe depletion of the protein reserve, secondarily affecting energy metabolism. That impacts nutritional status and the progression of disease-related complications. Nutritional treatment is one of the main factors in the comprehensive management of those patients. Achieving adequate energy intake that provides the macronutrients and micronutrients necessary to prevent or correct malnutrition is attempted through dietary measures. ESPEN, the European Society for Clinical Nutrition and Metabolism, recommends a caloric intake of 30-40 kcal/kg/day, in which carbohydrates provide 45-60% of the daily energy intake and proteins supply 1.0-1.5 g/kg/day. The remaining portion of the total energy expenditure should be covered by lipids. The administration of branched-chain amino acids has been shown to be beneficial not only in counteracting malnutrition, but also as a coadjuvant treatment in specific complications, thus playing a favorable role in outcome and quality of life. Therefore, branched-chain amino acids should be considered part of nutritional treatment in patients with advanced stages of cirrhosis of the liver, particularly in the presence of complications. abstract_id: PUBMED:290744 Plasma amino acids in hepatic encephalopathy. In 75 cases of histologically verified liver cirrhosis, the plasma amino acids were determined by ion exchange chromatography and the results were correlated with different liver function tests such as prothrombin time, pseudocholinesterase, serum albumin, GOT, bilirubin and venous ammonia. Of these parameters, prothrombin time, pseudocholinesterase and serum albumin correlated significantly with the sum of branched-chain amino acids and with Fischer's quotient (the molar ratio of branched-chain to aromatic amino acids; see the sketch after this section's answer). Methionine and aromatic amino acids inversely correlated with these parameters; additionally, methionine positively correlated with bilirubin and GOT.
A comparison of plasma amino acid levels in cirrhotics with and without hepatic encephalopathy (grade 3 or 4) revealed no significant differences. "Fischer's quotient" showed an overlap in patients with and without encephalopathy. The precipitation of hepatic encephalopathy is therefore not fully explained by the changes in plasma amino acids. Therapeutic administration of special amino acid mixtures with a high content of branched-chain and a low content of aromatic amino acids corrects the plasma amino acid imbalance for a short time and improves hepatic encephalopathy. abstract_id: PUBMED:2897785 Role of gluconeogenesis from amino acids in determining fasting and absorptive levels of plasma ammonia in cirrhosis. The aim of this study was to evaluate the contribution of gluconeogenesis from amino acids in the development of fasting and absorptive hyperammonemia in cirrhosis. Somatostatin (SRIF), which is known to inhibit the hepatic disposal of gluconeogenic amino acids, was administered in a continuous infusion (500 micrograms/h) for 90 min before and 5 h after a protein meal (240 g of meat) in 11 overnight fasting patients. Plasma glucagon, insulin, gluconeogenic amino acids (GAA: alanine, serine, glycine, and threonine) and ammonia (NH3) were evaluated before the infusion, immediately before, and at 1, 3, and 5 h after the meal. As a control, the same protocol was randomly repeated on a different day with saline infusion. During the latter, a direct correlation was found between fasting glucagon and ammonia (r = 0.68; p < 0.05). Fasting glucagon, insulin, and NH3 did not change, whereas alanine (p < 0.05) and the GAA sum decreased (p < 0.01). When SRIF was infused, fasting glucagon (p < 0.05), insulin (p < 0.05), and NH3 (p < 0.05) decreased. Alanine did not change, and the GAA sum increased (p < 0.02). No correlations were found by plotting changes in glucagon or the GAA sum against NH3. After the meal, SRIF infusion abolished the plasma response of glucagon and markedly reduced that of insulin, so that their areas under the curve (AUC0-5) were reduced (p < 0.005 for both) with respect to the control study. Moreover, the AUC0-5 of alanine (p < 0.005) and the GAA sum (p < 0.005) were increased, suggesting a reduced disposal of these compounds. In spite of this, the meal-induced early increase and the AUC0-5 of plasma NH3 observed during SRIF and saline infusion did not differ. Our results do not confirm the importance of gluconeogenesis from alpha-amino-nitrogens in determining the fasting ammonemia of cirrhosis, and suggest that this metabolic pathway does not significantly influence the protein meal-induced exacerbation of plasma ammonia.
Lactulose therapy could therefore be expected to increase BCAA levels. The present study was undertaken to examine the plasma amino acid patterns of ten patients with liver cirrhosis before and after lactulose therapy. No statistically significant changes of amino acids were observed. abstract_id: PUBMED:685924 Plasma amino acids imbalance in patients with liver disease. The venous plasma amino acid patterns have been determined in 12 normal individuals and in 71 shunted and nonshunted cirrhotics in various grades of hepatic encephalopathy. The free amino acids have been determined by an amino acid autoanalyzer; the total and free tryptophan have been measured by a spectrophotofluorimetric method. In 14 instances, arterial plasma amino acid patterns have been measured simultaneously. High levels of aromatic and sulfurated amino acids and low levels of branched-chain amino acids have been constantly found in all cirrhotics. Methionine, phenylalanine, valine, leucine, tyrosine, and free tryptophan showed a statistical difference between controls and all other groups. These altered patterns correlated neither with the grade or evolution of the coma nor with the presence of the surgical anastomosis. No statistical differences were found between arterial and venous values. The only amino acid which statistically correlated with the grade and evolution of the hepatic encephalopathy was free tryptophan. The molar ratios between the amino acids sharing the same transport system across the blood-brain barrier have been considered. A very good correlation with the grade of the mental disorder was found with the ratios free tryptophan/phenylalanine-tyrosine-methionine-valine-leucine-isoleucine and free tryptophan/branched-chain amino acids.
However, the relationship between dietary intake and plasma amino acid levels can be complex: the liver's capacity to metabolize amino acids is compromised in cirrhosis, and dietary protein intake may increase ammonia levels, potentially precipitating hepatic encephalopathy (PUBMED:35500655). Therefore, while dietary intake does influence plasma amino acid levels, this relationship is modulated by underlying liver function and by the presence of complications such as hepatic encephalopathy.
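As a reader's aid, Fischer's quotient (described in the abstracts above as the molar ratio of branched-chain to aromatic amino acids) is conventionally computed from plasma concentrations as follows; this is a standard definition, not a figure taken from these studies:

Fischer ratio = ([Val] + [Leu] + [Ile]) / ([Phe] + [Tyr])

In the typical cirrhotic pattern reported above, the numerator (BCAA) falls while the denominator (AAA) rises, so the ratio drops on both counts. BCAA-enriched, AAA-poor mixtures act on both terms simultaneously, which is why they can transiently correct the imbalance, as noted in PUBMED:290744.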
Instruction: Continuous ambulatory peritoneal dialysis: an option in the developing world? Abstracts: abstract_id: PUBMED:2683690 Continuous ambulatory peritoneal dialysis. During the past decade, continuous ambulatory peritoneal dialysis has emerged as a successful alternative to hemodialysis in the treatment of chronic renal failure. Performed continuously, peritoneal dialysis achieves total transfer comparable to hemodialysis and removes peptides faster. Studies have shown that survival rates for the two techniques are similar. Peritonitis, the major complication of continuous ambulatory peritoneal dialysis, is decreasing in frequency and can usually be managed at home. abstract_id: PUBMED:8312414 Continuous ambulatory peritoneal dialysis: an option in the developing world? Objective: To evaluate specified biomedical, socio-economic, and psychosocial criteria as predictors of therapeutic success to optimize patient selection for continuous ambulatory peritoneal dialysis (CAPD) in a developing country. Design: A retrospective cohort study investigating the relationship between episodes of peritonitis and exit-site infection, and predetermined biomedical, socioeconomic, and psychosocial data. Setting: A CAPD unit in a large tertiary care teaching hospital. Patients: All 132 patients entering the CAPD program between 1987 and 1991. Results: Overall mean survival time on CAPD was 17.3 months. Peritonitis rates were high, especially among blacks. Multivariate analysis demonstrated that increased peritonitis rates were associated with age, black race, diabetes, and strongly so with several psychosocial factors. Because being black was strongly linked to poor socioeconomic conditions, repeat analysis excluding blacks showed the same associations with the above variables, but, additionally, several socioeconomic factors were associated with high peritonitis rates. No significant explanatory variables were shown for exit-site infections. Conclusions: The association of biomedical, socioeconomic, and psychosocial variables with high peritonitis rates has important implications for the selection of patients for CAPD in this setting. abstract_id: PUBMED:2024324 Peritonitis with continuous ambulatory peritoneal dialysis. Status and future perspectives. Peritonitis remains the major complication of continuous ambulatory peritoneal dialysis. A review is given of the clinical, microbiological, immunological, and pathogenic aspects of this problem, and new fields of research for reducing the incidence of peritonitis are suggested. abstract_id: PUBMED:12197926 Peritoneal dialysis in the developing world: the Mexican scenario. In developing countries it is not possible to determine the total amount of money spent on the treatment of chronic diseases, and the practice of renal replacement therapies faces many obstacles. In Mexico, the introduction of continuous ambulatory peritoneal dialysis and continuous cycling peritoneal dialysis (CCPD) initially achieved very good results. Unfortunately, peritoneal dialysis was still relied on for as many as 95% of chronic renal failure patients, and the situation became a disaster, with an annual mortality rate higher than 60%. This became known as the Mexican Model, which failed to establish peritoneal dialysis as the sole procedure for treating these patients.
In order to avoid a similar scenario with the 2 replacement therapies, we created the Official Norm for hemodialysis, and we are now seeing an increase from 5% to 20% in the proportion of patients receiving hemodialysis, principally in private units that attend to Social Security patients. In addition, the government has established a Council for Transplantation that acts as a regulatory board. In other words, we are in the process of making chronic renal diseases a priority within the National Program. abstract_id: PUBMED:31448033 Continuous ambulatory peritoneal dialysis (CAPD) in children: a successful case for a bright future in a developing country. The authors report the first case of successful peritoneal dialysis (PD) in a developing country, performed on a 13-year-old adolescent followed up for stage V chronic kidney disease (CKD) with anuria. After 3 months of hemodialysis, the parents opted for continuous ambulatory peritoneal dialysis (CAPD) as they wished to return home, located 121 km from Dakar. After PD catheter insertion, the plan proposed to the patient consisted of 3-4-hour daytime dwells of isotonic dialysate and an 8-hour overnight dwell of icodextrin, with a fill volume of 1 L per exchange. The patient and his mother were trained and assessed on the PD technique. After dialysis adequacy was confirmed while hospitalised, they were able to return home and continued the sessions according to the prescribed plan while keeping in touch with the medical team by telephone. Technique assessment at the day hospital every 2 weeks revealed dialysis adequacy and satisfactory tolerance of PD at home after 4 months of observation. It was the first case of successful CAPD in the pediatrics unit in this context. Scaling up this technique is a challenge for the pediatric nephrologist in developing countries like Senegal. abstract_id: PUBMED:7926414 The present and future of continuous ambulatory peritoneal dialysis (CAPD). To form an idea of the future of continuous ambulatory peritoneal dialysis, it is necessary to know the present application of the method, which, although it is being used for ever greater numbers of patients, is sometimes performed with deficient resources and on improperly selected patients. A possible solution is a well-established peritoneal dialysis program that can "support" an active renal transplant program. It would be worthwhile to conduct a prospective, randomized study comparing peritoneal dialysis and hemodialysis. Improvements are needed in the disconnect system for the peritoneal catheter and in the subcutaneous implantation method; the concept of dialysis adequacy should be applied; methods should be designed to preserve the function and durability of the human peritoneum; the timely and correct treatment of peritonitis deserves deeper study; and automated peritoneal dialysis systems should be expanded. Future work on peritoneal dialysis must also look deeply into the cellular biology of the peritoneum in order to use the membrane with the biology of its cells in mind. CAPD has a bright future and should play a more important role in the treatment of uremia. abstract_id: PUBMED:8062798 Continuous ambulatory peritoneal dialysis. Many patients with end-stage renal disease are maintained on continuous ambulatory peritoneal dialysis (CAPD). This technique offers a number of advantages over hemodialysis, but it is associated with certain potential complications.
Peritonitis and other infectious complications are those most commonly seen in the emergency department. The evaluation and management of other emergency problems may also need to be modified in patients on CAPD. abstract_id: PUBMED:6837587 Peritoneal access and related complications in continuous ambulatory peritoneal dialysis. Over a four-year period, 50 patients underwent continuous ambulatory peritoneal dialysis. After 24 months, 48 percent of the patients continued to receive continuous ambulatory peritoneal dialysis treatment (including those who underwent continuous ambulatory peritoneal dialysis with bottled solutions from 1977 to 1978). Using solutions in plastic bags, 63 percent of patients continued to undergo continuous ambulatory peritoneal dialysis after 24 months. The mortality rate at 24 months was 23 percent overall and 15 percent for those using plastic bags. There was an overall average of 2.4 episodes of peritonitis a year per patient, and 1.3 episodes a year per patient from 1979 through 1980, when only the technique with plastic bags was used. The number of days a patient was hospitalized averaged 48 per year overall, and 37 days per year in 1979 and 1980. Fifty-five percent of catheters remained functional at 24 months. Other complications included 15 hernias, 15 skin and tunnel infections, 12 leaks, and five cuff extrusions. Improved catheters and further reductions in the incidence of peritonitis will most likely result in a decreased number of patients who withdraw from continuous ambulatory peritoneal dialysis therapy. abstract_id: PUBMED:6835766 Peritonitis in children undergoing continuous ambulatory peritoneal dialysis. The incidence of peritonitis in 29 children undergoing continuous ambulatory peritoneal dialysis during a 20-month period was one episode every 12.5 patient-months. Of the 20 episodes of peritonitis, seven (35%) were treated on an ambulatory basis, and 13 required hospitalization. The annualized hospitalization rate for the 29 patients during 252 patient-months was 3.8 days per patient. All patients recovered from peritonitis; the only complication was the need to replace two peritoneal catheters. These data would seem to indicate that the potential threat of peritonitis should not curtail continued utilization of continuous ambulatory peritoneal dialysis in children. abstract_id: PUBMED:6828946 Continuous peritoneal dialysis in a developing population--problems seen at Baragwanath Hospital. Continuous ambulatory peritoneal dialysis (CAPD) is being used with increasing frequency throughout the world as an alternative to long-term haemodialysis. The problems of 8 Black patients on CAPD are discussed, with special reference to their socioeconomic and home circumstances. CAPD has been successful in this group, suggesting that it may be used more extensively in developing populations. Answer: Continuous ambulatory peritoneal dialysis (CAPD) has been considered a treatment option for chronic renal failure in the developing world, with varying degrees of success and challenges. CAPD has emerged as a successful alternative to hemodialysis, achieving comparable total transfer and faster removal of peptides, with survival rates similar to those of hemodialysis patients (PUBMED:2683690). However, peritonitis remains a major complication, although its frequency is decreasing and it can usually be managed at home (PUBMED:2683690).
In the context of a developing country, a study found that peritonitis rates were high, particularly among black patients, and were associated with age, race, diabetes, and several psychosocial factors. Socioeconomic conditions were also linked to higher peritonitis rates, suggesting that biomedical, socioeconomic, and psychosocial variables should be considered when selecting patients for CAPD (PUBMED:8312414). In Mexico, the introduction of CAPD and continuous cycling peritoneal dialysis (CCPD) initially achieved good results, but the high mortality rate among chronic renal failure patients led to the failure of the "Mexican Model" to establish peritoneal dialysis as the sole treatment. This prompted the creation of an Official Norm for hemodialysis and a Council for Transplantation to regulate and prioritize chronic renal diseases within the National Program (PUBMED:12197926). Despite these challenges, there have been successful cases of CAPD in children in developing countries, such as the reported case in Senegal, where a 13-year-old patient with stage V chronic kidney disease was successfully treated with CAPD, allowing the patient to return home and continue treatment with adequate support and follow-up (PUBMED:31448033). Overall, while CAPD presents an option for renal replacement therapy in the developing world, its success is contingent upon addressing the associated complications, particularly peritonitis, and considering the socioeconomic and psychosocial factors that may impact patient outcomes. Additionally, the establishment of supportive healthcare infrastructure and regulatory frameworks is crucial for the effective implementation and scaling of CAPD in these settings (PUBMED:7926414, PUBMED:8062798, PUBMED:6837587, PUBMED:6835766, PUBMED:6828946).
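Comparing the peritonitis rates quoted above requires a small unit conversion, since the pediatric series reports episodes per patient-month while the adult series reports episodes per patient-year; the arithmetic below uses only the figures given in the abstracts:

1 episode / 12.5 patient-months × 12 months/year ≈ 0.96 episodes per patient-year

so the pediatric rate (PUBMED:6835766) is of the same order as the 1.3 episodes per patient-year achieved in adults with plastic-bag systems in 1979-1980 (PUBMED:6837587).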
Instruction: Can triggered electromyography monitoring throughout retraction predict postoperative symptomatic neuropraxia after XLIF? Abstracts: abstract_id: PUBMED:25874744 Can triggered electromyography monitoring throughout retraction predict postoperative symptomatic neuropraxia after XLIF? Results from a prospective multicenter trial. Purpose: This multicenter study aims to evaluate the utility of triggered electromyography (t-EMG) recorded throughout psoas retraction during lateral transpsoas interbody fusion to predict postoperative changes in motor function. Methods: Three hundred and twenty-three patients undergoing L4-5 minimally invasive lateral interbody fusion from 21 sites were enrolled. Intraoperative data collection included initial t-EMG thresholds in response to posterior retractor blade stimulation and subsequent t-EMG threshold values collected every 5 min throughout retraction. Additional data collection included dimensions/duration of retraction as well as pre- and postoperative lower extremity neurologic exams. Results: Prior to expanding the retractor, the lowest t-EMG threshold was identified posterior to the retractor in 94% of cases. Postoperatively, 13 (4.5%) patients had a new motor weakness that was consistent with symptomatic neuropraxia (SN) of lumbar plexus nerves on the approach side. There were no significant differences between patients with or without a corresponding postoperative SN with respect to initial posterior blade reading (p = 0.600) or retraction dimensions (p > 0.05). Retraction time was significantly longer in those patients with SN vs. those without (p = 0.031). Stepwise logistic regression showed a significant positive relationship between the presence of new postoperative SN and total retraction time (p < 0.001), as well as change in t-EMG thresholds over time (p < 0.001), although false positive rates (increased threshold in patients with no new SN) remained high regardless of the absolute increase in threshold used to define an alarm criterion. Conclusions: Prolonged retraction time and coincident increases in t-EMG thresholds are predictors of declining nerve integrity. Increasing t-EMG thresholds, while predictive of injury, were also observed in a large number of patients without iatrogenic injury, with a greater predictive value in cases with extended duration. In addition to a careful approach with minimal muscle retraction and consistent lumbar plexus directional retraction, the incidence of postoperative motor neuropraxia may be reduced by limiting retraction time and utilizing t-EMG throughout retraction, while understanding that the specificity of this monitoring technique is low during initial retraction and increases with longer retraction duration. abstract_id: PUBMED:31893134 Many Intraoperative Monitoring Modalities Have Been Developed To Limit Injury During Extreme Lateral Interbody Fusion (XLIF/MIS XLIF): Does That Mean XLIF/MIS XLIF Are Unsafe? Background: Extreme lateral interbody fusions (XLIF) and minimally invasive (MIS) XLIF pose significant risks of neural injury to the lumbar plexus and the ilioinguinal, iliohypogastric, genitofemoral, lateral femoral cutaneous, and subcostal nerves. To limit these injuries, many intraoperative neural monitoring (IONM) modalities have been proposed.
Methods: Multiple studies document various frequencies of neural injuries occurring during MIS XLIF/XLIF: plexus injuries (13.28%); sensory deficits (0-75%; permanent in 62.5%); motor deficits (0.7-33.6%; most typically iliopsoas weakness, 14.3-31%); and anterior thigh/groin pain (12.5-34%). To avoid or limit these injuries, multiple IONM techniques have been proposed. These include using finger electrodes during operative dissection, employing motor evoked potentials (MEP), eliminating (no) muscle relaxants (NMR), and using "triggered" EMGs. Results: In one study, finger electrodes for XLIF at the L4-L5 level for degenerative spondylolisthesis reduced transient postoperative neurological symptoms from 7 [38%] of 18 cases (without IONM) to 5 [14%] of 36 cases (with IONM). Two series showed that motor evoked potential (MEP) monitoring for XLIF reduced postoperative motor deficits; both, therefore, recommended its routine use for XLIF. Another study demonstrated that eliminating muscle relaxants during XLIF (the NMR approach) markedly reduced postoperative neurological deficits/thigh pain by allowing for better continuous EMG monitoring. Finally, a "triggered" EMG study reduced postoperative motor neuropraxia, largely by limiting retraction time. Conclusion: Multiple studies have offered different IONM techniques to avert neurological injuries following MIS XLIF/XLIF. Does this mean that these procedures (XLIF/MIS XLIF) are unsafe? abstract_id: PUBMED:34479196 Femoral head to lower lumbar neural foramen distance as a novel radiographic parameter to predict postoperative stretch neuropraxia. Objective: Lumbosacral fractional curves in adult spinal deformity (ASD) patients often have sharp coronal curves resulting in significant pain and imbalance. Postoperative stretch neuropraxia after fractional curve correction can lead to discomfort and unsatisfactory outcomes. The goal of this study was to use radiographic measures to increase understanding of the relationship between postoperative stretch neuropraxia and fractional curve correction. Methods: In 62 ASD patients treated from 2015 to 2018, radiographic review was performed, including measurement of the distance between the lower lumbar neural foramen (L4 and L5) in the concavity and convexity of the lumbosacral fractional curve and the ipsilateral femoral heads (FHs; L4-FH and L5-FH) in pre- and postoperative anteroposterior spine radiographs. The largest absolute preoperative to postoperative change in distance between the lower lumbar neural foramen and the ipsilateral FH (ΔL4/L5-FH) was used for analysis. Chi-square analyses, independent and paired t-tests, and logistic regression were performed to study the relationship between L4/L5-FH and stretch neuropraxia for categorical and continuous variables, respectively. Results: Of the 62 patients, 13 (21.0%) had postoperative stretch neuropraxia. Patients without postoperative stretch neuropraxia had an average ΔL4-FH distance of 16.2 mm compared to patients with stretch neuropraxia, who had an average ΔL4-FH distance of 31.5 mm (p < 0.01). Patients without postoperative neuropraxia had an average ΔL5-FH distance of 11.1 mm compared to those with stretch neuropraxia, who had an average ΔL5-FH distance of 23.0 mm (p < 0.01). Chi-square analysis showed that patients had a 4.78-fold risk of developing stretch neuropraxia with ΔL4-FH > 20 mm (95% CI 1.3-17.3) and a 5.17-fold risk of developing stretch neuropraxia with ΔL5-FH > 15 mm (95% CI 1.4-18.7).
Logistic regression analysis indicated that the odds of developing stretch neuropraxia were 15:1 with a ΔL4-FH > 20 mm (95% CI 3-78) and 21:1 with a ΔL5-FH > 15 mm (95% CI 4-113). Conclusions: The novel ΔL4/L5-FH distances are strongly associated with postoperative stretch neuropraxia in ASD patients. A ΔL4-FH > 20 mm and ΔL5-FH > 15 mm significantly increase the odds for patients to develop postoperative stretch neuropraxia. abstract_id: PUBMED:36460197 Early Postoperative Iatrogenic Neuropraxia After Lumbar Disc Herniation Surgery: Analysis of 87 Cases. Background: Postoperative early neuropraxia after lumbar disc herniation surgery is common. The emergence of new paresthesia findings with increased sensory and motor deficits in the postoperative period suggests iatrogenic neuropraxia. This study aimed to discuss the causes and prognosis of iatrogenic neuropraxia detected in the early postoperative period in patients who have been operated on for lumbar disc herniation. Methods: Cases with postoperative iatrogenic neuropraxia were identified retrospectively. Deficits were evaluated at intervals of 0-2 hours, 2-12 hours, 12-24 hours, and 24-48 hours. The cases were evaluated in 2 groups: those who underwent aggressive discectomy and those who underwent simple discectomy. In addition, treatment results were compared between cases treated with methylprednisolone and those not treated with it. Results: The iatrogenic neuropraxia rate was significantly higher in patients who underwent aggressive discectomy. Although paresthesia findings improved more rapidly in cases treated with methylprednisolone, no difference was found between the 2 groups in terms of effects on motor deficit. Conclusions: Iatrogenic neuropraxia is a finding whose cause cannot be determined by quantitative criteria. It is common in patients who underwent aggressive discectomy. Methylprednisolone treatment speeds recovery of paresthesia and may indicate that, in the early period, the radicular injury is still in the neuropraxia stage. abstract_id: PUBMED:36307036 A Modified Method of Triggered Electromyography Monitoring in Minimally Invasive Spine Surgery: Comparison to Conventional Techniques and Correlation with Body Mass Index. Purpose: Conventional triggered electromyography (EMG) in percutaneous pedicle screw (PPS) systems may be unreliable due to the interaction between the insertion apparatus and the patient's soft tissue. Our aim was 1) to describe a modified technique of triggered EMG monitoring using insulated Kirschner wires (K-wires), 2) to compare EMG potentials with conventional techniques, and 3) to demonstrate the relationship between patient body mass index (BMI) and triggered EMG potentials. Methods: This was a prospective cross-sectional study of 50 patients undergoing minimally invasive PPS placement. Triggered EMG measurements using K-wires before and after insulation were compared. The difference between EMG measurements before and after insulation was correlated with patient BMI. Results: A total of 50 patients, 22 females and 28 males, underwent triggered EMG testing using K-wires prior to final PPS placement in the thoracic and lumbosacral spine, for a total of 472 triggered EMG measurements. When compared to standard triggered EMG monitoring, insulated triggered EMG monitoring demonstrated an average 55.4% decrease in EMG values (P < 0.001). Increasing BMI correlated with an increasing percentage decrease in EMG values (r-coefficient, 0.376; P < 0.01).
Conclusions: We describe a cost-effective, efficient, and reliable technique for triggered EMG during PPS placement, which may help ensure accurate screw placement and minimize potentially devastating complications. abstract_id: PUBMED:37234475 Comparing clinical and radiological outcomes between single-level OLIF and XLIF: A systematic review and meta-analysis. Background Context: Oblique lumbar interbody fusion (OLIF) and extreme lateral interbody fusion (XLIF) are 2 popular minimally invasive spinal fusion techniques with unique approach-related complication profiles. Accordingly, patient-specific anatomical factors, such as vascular anatomy or iliac crest height, greatly influence which technique to use. Previous studies comparing these approaches do not account for the inability of XLIF to access the L5-S1 disc space and therefore do not exclude this level in their analysis. The purpose of this study was to compare radiological and clinical outcomes of these techniques in the L1-L5 region. Methods: A query of 3 electronic databases (PubMed, CINAHL plus, and SCOPUS) was performed, without time restriction, to identify studies that evaluated outcomes of single-level OLIF and/or XLIF between L1 and L5. Based on heterogeneity, a random effects meta-analysis was performed to evaluate the pooled estimation of each variable between the groups. An overlap of 95% confidence intervals suggests no statistically significant difference at the p<.05 level. Results: A total of 1,010 patients (408 OLIF, 602 XLIF) were included from 24 published studies. Improvements in disc height (OLIF: 4.2 mm; XLIF: 5.3 mm), lumbar segmental angle (OLIF: 2.3°; XLIF: 3.1°), and lumbar lordotic angle (OLIF: 5.3°; XLIF: 3.3°) showed no significant difference. The rate of neuropraxia was significantly greater in the XLIF group at 21.2% versus 10.9% in the OLIF group (p<.05). However, the rate of vascular injury was higher in the OLIF cohort at 3.2% (95% CI: 1.7-6.0) as compared to 0.0% (95% CI: 0.0-1.4) in the XLIF cohort. Improvements in VAS-b (OLIF: 5.6; XLIF: 4.5) and ODI (OLIF: 37.9; XLIF: 25.6) scores were not significantly different between the 2 groups. Conclusions: This meta-analysis demonstrates similar clinical and radiological outcomes between single-level OLIF and XLIF from L1 to L5. XLIF had significantly higher rates of neuropraxia, whereas OLIF had greater rates of vascular injury.
A two-tailed t-test was used to determine significance. Results: 50% of patients had leg weakness after surgery (13/26). All postoperative motor deficits were at least antigravity strength (>3/5), and all deficits resolved. The mean RT for the motor dysfunction cohort was 29 min (14-51) compared to 27.5 min for the cohort without postoperative weakness (19-37) (p = 0.685). Conclusion: The results of this study suggest that longer retraction time (within a range of 14-51 min) did not correlate with a greater risk of postoperative neuropathic quadriceps muscle dysfunction for lateral lumbar interbody fusions performed at L4-5. Other factors may contribute to postoperative neurologic dysfunction regardless of RT. Motor deficits resolved quickly after surgery. abstract_id: PUBMED:32363065 Incidence of Major Vascular Injuries with Extreme Lateral Interbody Fusion (XLIF). Background: Extreme lateral interbody fusions (XLIF) and minimally invasive (MI) XLIF theoretically offer wide access to the lumbar disc space. The theoretical advantages of XLIF include minimally disturbing surrounding structures (e.g., neural, vascular, soft tissue) while offering stability. In addition to the well-known increased frequency of neurological deficits attributed to XLIF, here we explored how often major vascular injuries occur with XLIF/MI XLIF procedures. Methods: In 13 XLIF/MI XLIF studies, we evaluated the frequency of major vascular injuries. Results: The studies citing the different frequencies of vascular injuries associated with XLIF/MI XLIF were broken down into three categories. Of the 5 small and larger case series, involving a total of 6,732 patients (range, 12 to 4,607 patients/study), the incidence of vascular injuries ranged from 0% (3 studies) up to 0.4%. Three case reports presented major vascular injuries attributed to XLIF/MI XLIF; two involved the L4-L5 level. The three complications included one fatal injury, one retroperitoneal hematoma with hemorrhagic shock, and one major vascular injury. Of the 5 review articles, major vascular complications were discussed in just 2; one study cited 3 specific major vascular injuries (1 fatal, 1 life-threatening, and 1 lumbar artery pseudoaneurysm requiring embolization), while 2 other studies stated that the frequency of these injuries was 0.4% for XLIF and 1.7% for OLIF (oblique lumbar interbody fusion). Conclusions: According to 5 small and larger case series, 3 case reports, and 5 review articles, the incidence of major vascular injuries occurring during XLIF/MI XLIF ranges from 0% to 0.4%.
Object: Insertion of instruments or implants into the spine carries a risk of injury to neural tissue. Triggered electromyography (tEMG) is an intraoperative neuromonitoring technique that involves electrical stimulation of a tool or screw and subsequent measurement of muscle action potentials from myotomes innervated by nerve roots near the stimulated instrument. The authors of this study sought to determine the ability of tEMG to detect misplaced pedicle screws (PSs). Methods: The authors searched the U.S. National Library of Medicine, the Web of Science Core Collection database, and the Cochrane Central Register of Controlled Trials for PS studies. A meta-analysis of these studies was performed on a per-screw basis to determine the ability of tEMG to detect misplaced PSs. Sensitivity, specificity, and receiver operating characteristic (ROC) area under the curve (AUC) were calculated overall and in subgroups. Results: Twenty-six studies were included in the systematic review. The authors analyzed 18 studies in which tEMG was used during PS placement in the meta-analysis, representing data from 2932 patients and 15,065 screws. The overall sensitivity of tEMG for detecting misplaced PSs was 0.78, and the specificity was 0.94. The overall ROC AUC was 0.96. A tEMG current threshold of 10-12 mA (ROC AUC 0.99) and a pulse duration of 300 µsec (ROC AUC 0.97) provided the most accurate testing parameters for detecting misplaced screws. Screws most accurately conducted EMG signals (ROC AUC 0.98). Conclusions: Triggered electromyography has very high specificity but only fair sensitivity for detecting malpositioned PSs. Answer: Triggered electromyography (t-EMG) monitoring throughout retraction can predict postoperative symptomatic neuropraxia (SN) after extreme lateral interbody fusion (XLIF), but with some limitations. A multicenter study found that prolonged retraction time and coincident increases in t-EMG thresholds predict declining nerve integrity. However, increasing t-EMG thresholds were also observed in a large number of patients without iatrogenic injury, indicating that while t-EMG can be a useful tool, it has a high false positive rate, especially during initial retraction; the specificity of this monitoring technique is low during initial retraction and increases with longer retraction duration (PUBMED:25874744). Other studies have also supported the use of various intraoperative neural monitoring (IONM) techniques, including t-EMG, to limit neurological injuries during XLIF. For instance, one study demonstrated that eliminating muscle relaxants during XLIF markedly reduced postoperative neurological deficits/thigh pain by allowing for better continuous EMG monitoring. Additionally, a "triggered" EMG study reduced postoperative motor neuropraxia, largely by limiting retraction time (PUBMED:31893134).
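As a brief reminder of how the accuracy figures above relate to the false-positive problem discussed in this answer (these are standard definitions, not data from the studies):

sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)
false positive rate = 1 - specificity

Applied to the pooled pedicle screw figures (sensitivity 0.78, specificity 0.94), the implied false positive rate is about 6% per stimulated screw. The much higher false positive rates described for XLIF retraction monitoring arise in a different setting (continuous threshold tracking during retraction rather than one-off screw testing), and the two should not be conflated.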
Instruction: Is ileostomy always necessary following rectal resection for deep infiltrating endometriosis? Abstracts: abstract_id: PUBMED:35478159 Ghost Ileostomy: Safe and Cost-effective Alternative to Ileostomy After Rectal Resection for Deep Infiltrating Endometriosis. Background/aim: Endometriosis infiltrating the rectum often requires resection with a protecting stoma. A ghost ileostomy (GI) is an alternative to prevent the psychological burden for the young women affected. The present study evaluated the safety and cost-effectiveness of the ghost ileostomy (GI) procedure in a group of patients after rectal resection for deep infiltrating endometriosis. Patients And Methods: The prospective controlled interventional trial was conducted in 54 consecutive patients with deep infiltrating endometriosis of the rectum. GI was considered after ultra-low resection with primary anastomosis, previous colorectal anastomosis, or pelvic redo surgery. Loop ileostomy (LI) was performed after simultaneous colpotomy with suture, only. Operating time, morbidity according to the Clavien-Dindo classification (CDC), duration of hospital stay, and patient satisfaction were obtained. Individual costs were estimated for the endometriosis procedure with or without a GI or LI, including stoma supply and closure expenses. Results: Of the 54 patients, 27 received GI (50%), whereas 4 underwent LI (7%). The remaining 23 patients received no outlet (NO). The complication rate did not differ among the GI, LI, and NO groups. Two cases were re-operated and required a diverting stoma, one in the GI and the NO group each. The additional healthcare expenses for each patient receiving a LI averaged 6,000 €. The patients were very satisfied with the option of a GI. Conclusion: GI is a cost-effective and safe alternative to LI after rectal resection for deep infiltrating endometriosis in cases where it is required. The individual costs per patient were reduced substantially, with a cumulative savings of 160,000 € in healthcare expenditure. Additionally, the method clearly lowers the psychological burden on the young women concerned. abstract_id: PUBMED:25109779 Is ileostomy always necessary following rectal resection for deep infiltrating endometriosis? Objective: To verify the hypothesis that in most patients bowel segmental resection to treat endometriosis can be safely performed without creation of a stoma and to discuss the limitations of this statement. Design: Retrospective study (Canadian Task Force classification III). Setting: Tertiary referral center. Patients: Forty-one women with sigmoid and rectal endometriotic lesions who underwent segmental resection. Intervention: Segmental resection procedures performed between 2004 and 2011. Patient demographic, operative, and postoperative data were compared. Measurements And Main Results: Sigmoid resection was performed in 6 patients (15%), and rectal anterior resection in 35 patients (high in 21 patients [51%], and low, i.e., <10 cm from the anal verge, in 14 [34%]). In 4 patients a temporary ileostomy was created. There was 1 anastomotic leak (2.4%), in a patient with an unprotected anastomosis, which was treated via laparoscopic surgery and creation of a temporary ileostomy. Other postoperative complications included hemoperitoneum, pelvic abscess, pelvic collection, and a ureteral vaginal fistula, in 1 patient each (all 2.4%). 
Conclusion: A protective stoma may be avoided in low anastomoses that are >5 cm from the anal verge, provided there are no adverse intraoperative events. abstract_id: PUBMED:26851598 Surgical treatment of deep infiltrating rectal endometriosis: in favor of less aggressive surgery. Deep infiltrating endometriosis of the rectum is a severe disease affecting young women of reproductive age. Because it is a benign condition, aggressive surgical treatment and subsequent complications are not always accepted by young patients. Two surgical approaches exist: the radical approach, employing colorectal resection; and the conservative approach, based on rectal shaving or full-thickness disc excision. At present, the majority of patients with rectal endometriosis worldwide are managed by the radical approach. Conversely, as many as 66% of patients with colorectal endometriosis can be managed by either rectal shaving or full-thickness disc excision. Most arguments that used to support the widespread use of the radical approach may now be disputed. The presumed higher risk of recurrence related to conservative surgery can be balanced against a supposed higher risk of postoperative bowel dysfunction related to the radical approach. Occult microscopic bowel endometriosis renders debatable the hypothesis that more aggressive surgery can definitively cure endometriosis. Although most surgeons consider radical surgery unavoidable in patients with rectal nodules responsible for digestive stenosis, conservative surgery can be successfully performed in a majority of cases. In multifocal bowel endometriosis, multiple conservative procedures may be proposed, provided that the nodules are separated by segments of healthy bowel longer than 5 cm. Attempting conservation of a maximum length of rectum may reduce the risk of postoperative anterior rectal resection syndrome and subsequent debilitating bowel dysfunction and impaired quality of life. Promotion of less aggressive surgery with an aim to better spare organ function has become a general tendency in both oncologic and benign pathologies; the management of deep colorectal endometriosis should logically follow the same course. abstract_id: PUBMED:35661344 Surgery for deep infiltrating rectal endometriosis-Selecting the right approach. Endometriosis involving the bowel is a form of deep infiltrating endometriosis (DIE). The endometriotic lesions can infiltrate the bowel layers, mimicking a malignancy. The majority of bowel involvement occurs in the colon and rectum. We report our experience in surgically managing rectal endometriosis in two patients, one via a conservative approach and the other with a more radical approach; their associated short-term and long-term outcomes are reported. In principle, surgery remains the mainstay of treatment in managing rectal DIE, with adjuvant hormonal therapy. The selection of surgical approach should be based on disease factors such as the size of the lesions and extent of the disease, patient factors including fitness for surgery and expectations, as well as logistics and resource limitations.
In this article, we describe our experience with laparoscopic management of deep infiltrating endometriosis with involvement of the rectum. Methods: A retrospective analysis was performed of data from patients with deep infiltrating endometriosis with rectal involvement who underwent laparoscopic surgery between 2002 and 2009 at the Department of Obstetrics and Gynecology at our institution. Results: Between 2002 and 2009, a laparoscopic partial rectal resection was performed in 52 patients, and laparoscopic disc resection was performed in 4 cases with deep infiltrating endometriosis. The mean age of patients was 34.4 years (range, 22-62 years). Preoperative symptoms included dysmenorrhea, dyspareunia, chronic pelvic pain, and infertility. The laparoscopic procedure was converted to formal laparotomy in 3 patients (5.4%). The mean duration of surgery was 145 minutes. Postoperative complications included anastomotic leakage in 3 cases (with rectovaginal fistula in 2) and intraabdominal bleeding in 1 case. The mean hospital stay was 7 days. Postoperatively, nine patients had a normal delivery, two of them after in vitro fertilization treatment. Conclusion: Laparoscopic rectal resection for deep infiltrating endometriosis is a relatively safe procedure when performed by a surgeon and a gynecologist with sufficient experience in laparoscopic colorectal surgery. abstract_id: PUBMED:29194531 Conservative surgery versus colorectal resection in deep endometriosis infiltrating the rectum: a randomized trial. Study Question: Is there a difference in functional outcome between conservative versus radical rectal surgery in patients with large deep endometriosis infiltrating the rectum 2 years postoperatively? Summary Answer: No evidence was found that functional outcomes differed when conservative surgery was compared to radical rectal surgery for deeply invasive endometriosis involving the bowel. What Is Known Already: Adopting a conservative approach to the surgical management of deep endometriosis infiltrating the rectum, by employing shaving or disc excision, appears to yield improved digestive functional outcomes. However, previous comparative studies were not randomized, introducing a possible bias regarding the presumed superiority of conservative techniques due to the inclusion of patients with more severe deep endometriosis who underwent colorectal resection. Study Design Size, Duration: From March 2011 to August 2013, we performed a 2-arm randomized trial, enrolling 60 patients with deep endometriosis infiltrating the rectum up to 15 cm from the anus, measuring more than 20 mm in length, involving at least the muscular layer in depth and up to 50% of rectal circumference. No women were lost to follow-up. Participants/materials, Setting, Methods: Patients were enrolled in three French university hospitals and had either conservative surgery, by shaving or disc excision, or radical rectal surgery, by segmental resection. Randomization was performed preoperatively using sequentially numbered, opaque, sealed envelopes, and patients were informed of the results of randomization. The primary endpoint was the proportion of patients experiencing one of the following symptoms: constipation (1 stool/>5 consecutive days), frequent bowel movements (≥3 stools/day), defecation pain, anal incontinence, dysuria or bladder atony requiring self-catheterization 24 months postoperatively.
Secondary endpoints were the values of the Visual Analog Scale (VAS), the Knowles-Eccersley-Scott-Symptom Questionnaire (KESS), the Gastrointestinal Quality of Life Index (GIQLI), the Wexner scale, the Urinary Symptom Profile (USP) and the Short Form 36 Health Survey (SF36). Main Results And The Role Of Chance: A total of 60 patients were enrolled. Among the 27 patients in the conservative surgery arm, two were converted to segmental resection (7.4%). In each group, 13 presented with at least one functional problem at 24 months after surgery (48.1% versus 39.4%, OR = 0.70, 95% CI 0.22-2.21). The intention-to-treat comparison of the overall scores on KESS, GIQLI, Wexner, USP and SF36 did not reveal significant differences between the two arms. Segmental resection was associated with a significant risk of bowel stenosis. Limitations, Reasons For Caution: The inclusion of only large infiltrations of the rectum does not allow the extrapolation of conclusions to small nodules of <20 mm in length. The presumption of a 40% difference favourable to conservative surgery in terms of postoperative functional outcomes resulted in a lack of power to demonstrate a difference for the primary endpoint. Wider Implications Of The Findings: Conservative surgery is feasible in patients managed for large deep rectal endometriosis. The trial does not show a statistically significant superiority of conservative surgery for mid-term functional digestive and urinary outcomes in this specific population of women with large involvement of the rectum. There is a higher risk of rectal stenosis after segmental resection, requiring additional endoscopic or surgical procedures. Study Funding/competing Interest(s): This work was supported by a grant from the clinical research programme for hospitals (PHRC) in France. The authors declare no competing interests related to this study. Trial Registration Number: This study is registered with ClinicalTrials.gov, number NCT01291576. Trial Registration Date: 31 January 2011. Date Of First Patient's Enrolment: 7 March 2011. abstract_id: PUBMED:31672590 Impact of Temporary Protective Ileostomy on Intestinal Function and Quality of Life after a 2-Year Follow-up in Patients Who Underwent Colorectal Segmental Resection for Endometriosis. Study Objective: To compare 2-year follow-up intestinal function and quality of life (QoL) between women with temporary protective ileostomy (PI) and recanalization and women without PI after colorectal segmental resection for deep infiltrating endometriosis (DIE). Design: Prospective observational exploratory study. Setting: Tertiary-level referral center for minimally invasive gynecologic surgery. Patients: Consecutive patients who underwent laparoscopic colorectal resection and PI because of DIE between January 2015 and January 2018; an equal number of women without PI were matched according to age and anamnestic findings to serve as controls. Interventions: Creation of a PI or immediate recanalization in patients who underwent laparoscopic colorectal resection. Measurements And Main Results: Thirty-six patients were considered for the analyses: 18 in the PI group and 18 in the non-PI group. Baseline intestinal function and QoL were evaluated using 2 validated questionnaires. The main reasons for ileostomy were colpotomy (66.7%), ultralow bowel anastomosis (27.8%), concomitant ureteroneocystostomy, and positive Michelin test result (5.6%). The mean interval between first and second surgery in the PI group was 3.7 ± 1.7 months.
Perioperative severe complications included 1 colorectal anastomotic stenosis in the PI group and 1 perianastomotic abscess in the non-PI group; overall, the complications were comparable between the 2 groups. At the 2-year follow-up from recanalization, bowel function and QoL improved from baseline, with no statistical differences between the groups (Knowles-Eccersley-Scott-Symptom delta: 5.9 ± 9.3 in the PI group vs 7.7 ± 10.2 in the non-PI group, p = .6; Gastrointestinal Quality of Life Index delta: 16.0 ± 27.5 vs 19.2 ± 24.7, p = .7). Conclusion: Temporary PI after colorectal resection for DIE does not seem to influence patients' bowel function and QoL at a median follow-up of 2 years from recanalization. abstract_id: PUBMED:23754206 Can rectal endoscopic sonography be used to predict infiltration depth in patients with deep infiltrating endometriosis of the rectum? Objectives: To evaluate the diagnostic accuracy of rectal endoscopic sonography (RES) in the prediction of the infiltration depth of rectal endometriosis and to ascertain whether RES could be used to choose between segmental bowel resection and a more conservative approach, such as shaving or discoid resection. Methods: In this retrospective study, 38 consecutive patients with symptomatic deep infiltrating endometriosis of the rectum who underwent laparoscopic colorectal resection were included. RES results for infiltration depth of rectal endometriosis were compared with results of pathological examination. The sensitivity, specificity, positive and negative predictive values (PPV and NPV, respectively), positive and negative likelihood ratios (LRs) and test accuracy were calculated for the presence of infiltration of the muscularis layers and submucosal/mucosal layers, as demonstrated by RES and confirmed by histopathological analysis. Results: For the detection of muscularis layer infiltration by endometriosis, the PPV of RES was 100%, whereas for the detection of submucosal/mucosal layer involvement, the sensitivity was 89%, specificity was 26%, PPV was 55%, NPV was 71%, test accuracy was 58%, and positive and negative LRs were 1.21 and 0.40, respectively. Conclusions: RES is a valuable tool for detecting rectal endometriosis, as endometriotic infiltration of the muscularis layer can be predicted accurately. However, RES is less accurate in detecting submucosal/mucosal layer involvement and cannot, therefore, be used to choose between bowel resection and a more conservative approach. abstract_id: PUBMED:32162322 Rectal disc resection improves stool frequency in patients with deep infiltrating endometriosis: A prospective study. Background: Gastrointestinal symptoms occur with deeply infiltrating endometriosis (DIE) of the rectum. Aims: To explore the medium-term gastrointestinal functional outcomes after rectal disc resection for endometriosis. Methods: All women who underwent laparoscopy for stage IV endometriosis at a tertiary referral hospital between November 2016 and January 2018 and who had evidence of DIE of the rectum were included. The low anterior resection syndrome (LARS) score was measured using a validated questionnaire. Results: Thirty-six women formed the cohort of the study. The mean age was 37 years (range 20-72 years). All women underwent a laparoscopic anterior rectal disc resection for DIE. The response rate was 100%. There was an increase in the percentage of patients having no LARS postoperatively compared to preoperatively (from 78% to 83%).
A reduction in LARS scores was observed postoperatively in 18 patients (50%), and the prevalence of major LARS decreased postoperatively from 10% to 1%. Comparison of individual symptoms revealed a significant improvement in postoperative stool frequency scores (P = 0.02). Multivariate logistic regression analysis demonstrated that reduction in postoperative stool frequency scores remained an independent factor (P = 0.008). Conclusion: Rectal disc resection is feasible and safe, achieving observable improvements in stool frequency in patients with rectal DIE. abstract_id: PUBMED:23935389 Is a bowel resection necessary for deep endometriosis with rectovaginal or colorectal involvement? Background: The purpose of this paper is to report the long-term results of surgery without bowel resection in patients suffering from deep infiltrating endometriosis with rectovaginal or colorectal involvement. Methods: This retrospective observational study identified 42 patients suffering from deep infiltrating endometriosis who underwent surgery. Conservative surgery was performed in 23 women (only one of them with bowel resection), and 19 women underwent a hysterectomy and bilateral salpingo-oophorectomy (HBSO). In the conservative surgery group, a later HBSO was performed in eight patients as a second operation. Pregnancies, recurrences, reoperations, use of hormone replacement therapy, and outcomes during long-term follow-up were analyzed. Results: The average follow-up duration was 7 ± 5.7 years in conservative surgery cases. Only one patient was treated with sigmoid bowel resection in 1997 and had complications. In this conservative surgery group, 13 patients (56%) received medical treatment after surgery, 10 patients wanted to get pregnant (of whom seven [70%] were successful), and eight patients underwent a subsequent HBSO because of recurrent symptoms and/or endometrioma. Therefore, HBSO was performed in 27 patients, of whom 14 (51.8%) used hormone replacement therapy for 5.6 ± 3.6 years. No recurrences or complications were observed in patients after HBSO with or without hormone replacement therapy. Conclusion: Good clinical results can be obtained by performing only conservative surgery and/or HBSO without bowel resection, an alternative that could reduce the number of colorectal resections that are performed very frequently nowadays. After HBSO, patients may use hormone replacement therapy for several years with total satisfaction and well-being. Answer: No, ileostomy is not always necessary following rectal resection for deep infiltrating endometriosis. The necessity of an ileostomy depends on various factors, including the location of the anastomosis and the presence of adverse intraoperative events. A study suggested that a protective stoma may be avoided in low anastomosis if it is greater than 5 cm from the anal verge and there are no adverse intraoperative events (PUBMED:25109779). Additionally, the use of a ghost ileostomy (GI) has been evaluated as a cost-effective and safe alternative to loop ileostomy (LI) after rectal resection for deep infiltrating endometriosis, reducing healthcare expenses and lowering the psychological burden on patients (PUBMED:35478159). Furthermore, conservative surgery, such as shaving or disc excision, has been compared to radical rectal surgery, with no evidence found that functional outcomes differed between the two approaches 2 years postoperatively (PUBMED:29194531).
This suggests that more conservative approaches that do not require ileostomy can be considered. Additionally, temporary protective ileostomy does not seem to influence patients' bowel function and quality of life at a median follow-up from recanalization at 2 years (PUBMED:31672590). Rectal endoscopic sonography (RES) has been used to predict the infiltration depth of rectal endometriosis, but it is less accurate in detecting submucosal/mucosal layer involvement and cannot be used to choose between bowel resection and a more conservative approach (PUBMED:23754206). Rectal disc resection has been shown to improve stool frequency in patients with deep infiltrating endometriosis of the rectum, indicating that bowel resection is not always necessary (PUBMED:32162322). Lastly, conservative surgery without bowel resection has been reported to yield good clinical results, suggesting that not all cases of deep infiltrating endometriosis with rectovaginal or colorectal involvement require bowel resection (PUBMED:23935389).
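As a quick arithmetic aside on the RES accuracy figures cited above (PUBMED:23754206), the reported likelihood ratios follow directly from the sensitivity and specificity. A minimal sketch, using the rounded values given in the abstract (so the second decimals differ slightly from the published ones):

```python
# Likelihood ratios from the reported sensitivity/specificity of RES
# for submucosal/mucosal involvement (PUBMED:23754206)
sens, spec = 0.89, 0.26
lr_pos = sens / (1 - spec)   # 0.89 / 0.74 ≈ 1.20 (reported: 1.21)
lr_neg = (1 - sens) / spec   # 0.11 / 0.26 ≈ 0.42 (reported: 0.40)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")
```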
Instruction: Does a true knowledge of dental crowding affect orthodontic treatment decisions? Abstracts: abstract_id: PUBMED:35049603 Knowledge and Attitude towards Orthodontic Treatment among Non-Orthodontic Specialists: An Online Survey in Croatia. Objectives: The aim of this study was to examine the knowledge and attitudes towards orthodontic treatment among non-orthodontic specialists. Methods: A web-based survey was formulated for non-orthodontic dental specialists to respond to statements regarding orthodontic treatment. It contained 20 multiple-choice questions with three or more possible answers. Two hundred and fifty questionnaires were sent via email, with explanatory letters, to randomly selected non-orthodontic Croatian dental specialists. Data were assessed using IBM SPSS 23.0, and p < 0.01 was considered significant. Results: The results indicate that the majority of respondents were well informed about principles and practices in orthodontics. All the respondents (100%) were aware that malocclusions can affect a patient's facial aesthetic and masticatory function. The results also showed statistically significant differences in answers about contraindications for orthodontic therapy among different non-orthodontic specialists (p < 0.01). Private health practitioners were better informed about the ideal time for the first orthodontic appointment (74.2%) and that implants and periodontal problems are not contraindications for orthodontic treatment (over 70%), in comparison with public health practitioners. Conclusion: Non-orthodontic specialists in this sample exhibit encouraging awareness and knowledge of the principles and practices of orthodontic treatment. Additional improvement of practitioners' knowledge and awareness can help patients with malocclusion to decide upon orthodontic treatment at earlier stages and avoid later complications in the future. abstract_id: PUBMED:31397409 Awareness of orthodontic treatment among school teachers in Karad Taluka. Introduction: Health is defined as a state of complete physical, mental, and social well-being. Awareness of orthodontic treatment varies in different regions. Undergoing orthodontic treatment to correct malocclusion would be very beneficial to children as it could help eliminate bullying by peers regarding facial appearance. This kind of bullying by peers could affect the child psychologically. Materials And Methods: A questionnaire survey was conducted in Karad Taluka in Maharashtra. A total of 378 subjects were selected. The schools were randomly selected. A questionnaire including general information, knowledge, and awareness of orthodontic treatment was prepared, and the teachers were given 15 min to fill it in. Since this was too short a period to gather information from other sources, the participants answered the questionnaire using their own knowledge. The purpose of this questionnaire, which consisted of 12 questions in both English and Marathi, was to evaluate the level of knowledge the teachers had about orthodontic treatment. The purpose of the study and questionnaire forms were explained by the examiner. The responses of the teachers to the questions were recorded on a 2-point Likert scale (YES or NO). Results: Simple descriptive statistics were applied to describe the study variables. A Chi-square test of independence was performed to check independence between answers and gender for each question.
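For readers unfamiliar with the chi-square test of independence used in the Karad Taluka survey above, a minimal sketch of how one gender-by-response table would be tested is shown below; the counts are hypothetical, not taken from the study:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table for one YES/NO question:
# rows = gender (male, female), columns = answer (YES, NO)
table = np.array([[120, 60],
                  [110, 88]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# p < 0.05 would indicate that answers are not independent of gender
```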
Conclusion: Within the limits of this study, it may be concluded that knowledge of available treatments was greater in males than in females in rural areas. abstract_id: PUBMED:25788331 Does a true knowledge of dental crowding affect orthodontic treatment decisions? Objectives: To assess whether a true knowledge of crowding alters treatment decisions compared with estimates of crowding. Materials And Methods: Thirty-six orthodontists were asked to estimate crowding using visualization on eight mandibular arch study models and to indicate possible extraction choices. For each model, the intermolar widths, intercanine widths, and clinical scenarios were identical, but the true crowding varied from 0.2 to 8.4 mm, as, to a lesser extent, did the curve of Spee. Eleven orthodontists repeated the visualization exercise after 2 weeks to assess reliability. All 36 of the orthodontists were asked to repeat the treatment planning exercise on the same models, but this time were provided with the true amount of crowding in each case. Results: When the 36 orthodontists used direct visualization of the models to assess crowding, the range of their estimates of crowding increased as the crowding increased. As might be expected, they also tended to move towards extraction treatments as the crowding increased (P = 0.013, odds ratio = 3). Although the reliability of the repeat estimates of crowding was moderate, the mean estimates were greater than the true crowding for each model. When orthodontists were presented with the true amount of crowding, rather than their estimate of crowding, it had a significant effect on the decision to extract, with fewer orthodontists recommending extractions. Limitations: The principal limitation of this study is that it was a laboratory-based study and utilized just the mandibular arch model for estimation and treatment planning. Conclusions: Direct visualization may overestimate the amount of crowding present. When the true amount of crowding is known, it can lead to more consistent treatment planning, with the decision to extract fewer teeth in the borderline cases. A formal space analysis is likely to assist with treatment planning. abstract_id: PUBMED:36798545 Knowledge, attitude, and awareness toward orthodontic treatment among patients: A questionnaire survey. The current study's objective is to learn more about how patients, depending on their gender, perceive, feel, and know about orthodontic therapy. A cross-sectional survey was conducted among 100 patients to evaluate their knowledge and attitude toward orthodontic treatment. The questionnaire was prepared and circulated among patients using an online Google Form. The online responses were collected and the data were tabulated in Excel sheets. Data were analysed using statistical software. A chi-square test was used to compare the association between gender and patients' knowledge regarding orthodontic treatment. In the current study 53% of the study population had visited an orthodontist. 79.6% of the patients are aware that their teeth can be aligned properly by an orthodontist. 73.5% think that teeth should be properly positioned for a better facial appearance. According to the findings in the present study, it is evident that there is awareness and understanding regarding orthodontic treatment among patients. abstract_id: PUBMED:37951899 Knowledge, attitudes, and interest in orthodontic treatment: a cross-sectional study in adults with stage III-IV periodontitis and secondary malocclusions.
Background: An increasing number of patients with advanced stages of periodontitis are seeking treatment options. The study aimed to determine interest in orthodontic treatment (OT) and its association with oral health status and knowledge about the disease in adult subjects with stage III-IV periodontitis. Methods: 96 subjects ≥ 30 years with stage III-IV periodontitis agreed to fill in a questionnaire and undergo a comprehensive periodontal-orthodontic examination. The questionnaire included 44 questions: demographic, dental, health-related habits, self-perceived overall and oral health, knowledge of periodontitis, and attitude toward OT. The statistical analysis was performed using a paired-sample T-test, Receiver Operating Characteristic (ROC) and binary logistic regression analysis (LRA). Results: Stage IV periodontitis was observed in 32.3% of subjects and Class II malocclusion was most prevalent (53.1%). More than half of subjects were interested in OT. Oral health, knowledge about periodontitis and age were significant predictors for interest in OT. Conclusions: Spreading knowledge about OT possibilities in advanced stages of periodontitis is very important for both the dental community and older subjects, in order to preserve the natural dentition. abstract_id: PUBMED:36890517 Knowledge, attitude and practice of patients towards orthodontic treatment. Background: Orthodontic treatment is a time-consuming and highly technique-sensitive clinical procedure. A patient's comprehension and compliance with oral hygiene instructions and appliance maintenance are critical to the success of orthodontic treatment. This study was performed to assess the knowledge, attitude and practice of patients seen at government orthodontic clinics in the Federal Territories of Kuala Lumpur and Putrajaya towards orthodontic treatment. Methods: A validated, bilingual, self-administered questionnaire comprising fifteen questions across the domains of Knowledge, Attitude and Practice was used; each question offered 3 responses: one correct, one incorrect and one reflecting uncertainty. 507 patients from five orthodontic centres participated in this study. Data was analysed using SPSS. Continuous data was summarised as mean and standard deviation or median and inter-quartile range, as appropriate. Categorical data was summarised as frequency and percentage, then univariable analysis was carried out with Pearson's chi-square test or Fisher's exact test, as appropriate. Results: The mean age of respondents was 22.5 years (SD ± 2.8). A majority of respondents were female (64.1%) and from the lowest income bracket or B40 group (71%). Overall, for the knowledge domain, a majority of the respondents got all questions correct. 69.4% of patients were aware that incomplete treatment could lead to worsening of their malocclusion. 80.9% of respondents were aware of the need for a retainer upon completion of their orthodontic treatment. For the attitude section, 64.7% felt that they had to wait a very long time to see the orthodontist. In the Practice domain, the majority only got two of the five questions correct. Only 39.8% of respondents made an effort to alter dietary habits all of the time. In general, females and those with tertiary education fared better for all three domains. Conclusions: The orthodontic patients in the Federal Territories of Kuala Lumpur and Putrajaya possess good knowledge about their treatment; however, their attitude and orthodontic-related practices need to be improved.
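The binary logistic regression analysis reported in PUBMED:37951899 (oral health, knowledge, and age as predictors of interest in OT) can be sketched as follows; the data are simulated and every value and effect size is illustrative only, not taken from the study:

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in for the survey data (n = 96, as in the study;
# all values and coefficients below are hypothetical)
rng = np.random.default_rng(0)
n = 96
age = rng.integers(30, 75, n).astype(float)
oral_health = rng.normal(0.0, 1.0, n)   # higher = better oral health score
knowledge = rng.normal(0.0, 1.0, n)     # knowledge-about-periodontitis score
logit = -0.04 * (age - 50) + 0.8 * oral_health + 0.6 * knowledge
interest = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([age, oral_health, knowledge]))
fit = sm.Logit(interest, X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios: intercept, age, oral health, knowledge
```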
abstract_id: PUBMED:21252189 Inter-examiner variability in orthodontic treatment decisions for Danish children with 'borderline' treatment need. This study explored the variation between examiners in the orthodontic treatment need assessments of fifth-grade children with a borderline orthodontic treatment need. Each of three groups of children with borderline treatment need (n = 18, 19, and 19, respectively) was examined by one of three groups of orthodontists (33 in each group), whereby each of 56 children had 33 orthodontic treatment need assessments based on a clinical examination. This treatment need determination exercise was subsequently repeated with treatment need determined based on study casts and extraoral photographs. The proportion of positive treatment decisions based on the clinical examination was 49.3, 49.6, and 52.5 per cent, respectively, and 45.7, 46.3, and 50.5 per cent, based on the model assessments. There was considerable disagreement between examiners in the treatment need assessments, whether assessments were based on a clinical examination or on a model-based case presentation. The average percentage agreement between two orthodontists for the treatment need based on clinical examination was 69, 66, and 61, respectively, corresponding to mean kappa values of 0.38, 0.32, and 0.22. When the model-based assessments were considered, the average percentage agreement between two orthodontists was 62, 58, and 69, respectively, corresponding to mean kappa values of 0.25, 0.16, and 0.37. Linear regression analysis of the orthodontists' treatment propensity as a function of their gender, place of education, years of orthodontic treatment experience, type of workplace, and place of work showed that only the orthodontic experience was influential for the model-based treatment propensity [β = 0.34 per cent/year (95 per cent confidence interval = 0.01-0.66)]. abstract_id: PUBMED:34505907 Prevalence of bullying in orthodontic patients and its impact on the desire for orthodontic therapy, treatment motivation, and expectations of treatment. Purpose: This study aims to investigate the prevalence of bullying due to malocclusions in a sample of German orthodontic patients. Thereby, we analyzed the influence of self-reported bullying on initiation, desire and motivation, and expectations of orthodontic therapy. Patients And Methods: Patients completed questionnaires by themselves in three orthodontic practices (n = 1020, mean age 11.8 years). The sample was divided into three groups: (1) subjects bullied due to dentofacial features, (2) subjects bullied due to physical features, and (3) non-bullied control group. The effect of bullying on initiation, desire and motivation, and expectation from orthodontic treatment was assessed and compared between these groups. Results: Overall prevalence of bullying was 23.7%, with male subjects revealing significantly higher values than females (p < 0.001). Subjects from the dentofacial features group (6.3%) initiated orthodontic therapy themselves significantly more often than subjects from the physical features (17.4%; p = 0.030) and control group (76.3%; p < 0.001). The dentofacial features group reported significantly more frequently that 'ugly teeth' were the major motivating factor to seek orthodontic treatment (40.4%). Within this group, significantly higher mean scores for the expectation 'keep me from being bullied' were obtained compared with subjects in the physical features (p < 0.001) or control group (p < 0.001).
Conclusion: This analysis demonstrated that bullying due to malocclusion impacts attitude towards orthodontic treatment. Victims who experienced bullying due to malocclusion initiate orthodontic treatment more often themselves and expect therapy to prevent them from experiencing further bullying. abstract_id: PUBMED:35694759 Knowledge and practice towards early orthodontic problems among general dentists and paediatric dentists. Objectives: To assess the knowledge, practice and challenges faced by general dental practitioners (GDPs), paediatric dentists (PDs) and postgraduate paediatric dentistry students (PGPDSs) in diagnosing and managing early orthodontic problems. Design: Cross-sectional study. Methods: A total of 159 dentists were enrolled in this study (121 GDPs, 21 PDs and 17 PGPDSs). Data were collected using a self-administered questionnaire involving two sections. Section A included questions regarding demographic features and orthodontic practice. Section B included photos of 12 orthodontic cases that needed early intervention. Knowledge and practice were assessed using six questions for each case, which were then rated using predefined scores of poor (score <50%), average (score 50%-75%) and good (score >75%). Results: The knowledge scores regarding the identification of early orthodontic problems were average among the three groups, with GDPs having the lowest score (P = 0.0001). The knowledge scores regarding the selection of optimal treatment time were average among the PDs and PGPDSs, and poor among GDPs. GDPs tended to refer most of their cases to an orthodontist except for cases of thumb sucking. PDs and PGPDSs were more confident in treating anterior crossbite, thumb sucking habits and delayed eruption of incisors. All participants scored 'poor' regarding the selection of an appropriate orthodontic appliance. There was no significant correlation between knowledge and practice (P > 0.05). Conclusion: PDs and PGPDSs showed higher knowledge and practice scores when compared to GDPs. For all three groups included in this study, lack of clinical skills was the main reason for not treating early orthodontic problems. abstract_id: PUBMED:37920628 Evaluation of the Impact of Orthodontic Treatment on Patients' Self-Esteem: A Systematic Review. Malocclusion may affect interpersonal relationships, self-esteem (SE), and psychological well-being, weakening patients' psychological and social activities. Several studies investigated the effect of orthodontic treatment on these social and psychological aspects, such as SE. However, the direct relationship between SE and orthodontic treatment has not yet been confirmed. This systematic review aimed to evaluate the existing evidence in the literature concerning the influence of orthodontic treatment on patients' SE systematically and critically. An electronic search in the following databases was done in September 2022: PubMed®, Web of Science™, Scopus®, Embase®, Google™ Scholar, Cochrane Library databases, Trip, and OpenGrey. Then, the reference list of each candidate study was checked for any potentially linked papers that the electronic search might not have turned up. Inclusion criteria were set according to the population/intervention/comparison/outcome/study design (PICOS) framework. For the data collection and analysis, two reviewers extracted data separately.
The Risk of Bias 2 (RoB-2) and Risk Of Bias In Non-randomized Studies (ROBINS-I) tools were used to assess the risk of bias for randomized controlled trials (RCTs) and non-RCTs, respectively. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach was employed to evaluate the quality of the evidence for each finding. Sixteen studies (five RCTs, seven cohorts, and four cross-sectional) were included in this review. Unfortunately, the results could not be pooled into a meta-analysis. Only six studies have reported an increase in SE after orthodontic treatment (P<0.05 in these studies). No agreement between the included studies was observed regarding the influence of fixed orthodontic treatment, gender, or age on SE. The quality of evidence supporting these findings ranged from very low to low. There is low-quality evidence indicating that fixed orthodontic treatment can improve patients' SE. In addition, unclear data are available about the influence of patients' gender and age on SE after orthodontic treatment. Therefore, high-quality RCTs are required to develop stronger evidence about this issue. Answer: Yes, a true knowledge of dental crowding does affect orthodontic treatment decisions. A study assessing whether a true knowledge of crowding alters treatment decisions compared with estimates of crowding found that when orthodontists were presented with the true amount of crowding, rather than their estimate of crowding, it had a significant effect on the decision to extract, with fewer orthodontists recommending extractions. This suggests that direct visualization may overestimate the amount of crowding present, and when the true amount of crowding is known, it can lead to more consistent treatment planning, with the decision to extract fewer teeth in the borderline cases (PUBMED:25788331).
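PUBMED:25788331 concludes that a formal space analysis is likely to assist with treatment planning; the sketch below shows the basic arithmetic of such an analysis (space required minus space available). All measurements are invented for illustration:

```python
# Minimal mandibular space analysis (hypothetical measurements, in mm)
mesiodistal_widths = [5.3, 5.9, 6.8, 7.1, 7.0, 6.9, 5.8, 5.2]  # widths of the measured teeth
space_required = sum(mesiodistal_widths)   # 50.0 mm
space_available = 46.5                     # measured arch perimeter over the same segment
crowding = space_required - space_available
print(f"crowding = {crowding:.1f} mm")     # positive = crowding, negative = spacing
```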
Instruction: Ischemic burden and clinical outcome: is one 'culprit' ischemic segment by dobutamine stress magnetic resonance predictive? Abstracts: abstract_id: PUBMED:25517506 Ischemic burden and clinical outcome: is one 'culprit' ischemic segment by dobutamine stress magnetic resonance predictive? Aims: We sought to evaluate the impact of ischemic burden for the prediction of hard cardiac events (cardiac death or nonfatal myocardial infarction) in patients with known or suspected CAD who undergo dobutamine stress cardiac magnetic resonance imaging (DCMR). Methods: We included 3166 patients (pts.), mean age 63 ± 12 years, 27% female, who underwent DCMR in 3 tertiary cardiac centres (University Hospital Heidelberg, German Heart Institute and King's College London). Pts. were separated into groups based on the number of ischemic segments by wall motion abnormalities (WMA) as follows: 1. no ischemic segment, 2. one ischemic segment, 3. two ischemic segments and 4. ≥ three ischemic segments. Cardiac death and nonfatal myocardial infarction were registered as hard cardiac events. Pts. with an "early" revascularization procedure (in the first three months after DCMR) were not included in the final survival analysis. Results: Pts. were followed for a median of 3.1 years (IQR 2-4.5 years). 187 (5.9%) pts. experienced hard cardiac events. 2349 (74.2%) had no inducible ischemia, 189 (6%) had ischemia in 1 segment, 292 (9.2%) in 2 segments and 336 (10.6%) ≥ 3 segments. Patients with only 1 ischemic segment showed a high rate of hard cardiac events of ∼ 6% annually, which was 10-fold higher compared to those without ischemia (0.6% annually, p < 0.001) but similar to those with 2 and ≥ 3 ischemic segments (∼ 5.5% and ∼ 7%, p = NS). Conclusions: The presence of inducible ischemia even in a single 'culprit' myocardial segment during DCMR is enough to predict hard cardiac events in patients with known or suspected CAD. abstract_id: PUBMED:17353441 Prognostic value of cardiac magnetic resonance stress tests: adenosine stress perfusion and dobutamine stress wall motion imaging. Background: Adenosine stress magnetic resonance perfusion (MRP) and dobutamine stress magnetic resonance (DSMR) wall motion analyses are highly accurate for the detection of myocardial ischemia. However, knowledge about the prognostic value of stress MR examinations is limited. We sought to determine the value of MRP and DSMR, as assessed during a single-session examination, in predicting the outcome of patients with known or suspected coronary artery disease. Methods And Results: In 513 patients (with known or suspected coronary disease, prior coronary artery bypass graft, or percutaneous coronary intervention), a combined single-session magnetic resonance stress examination (MRP and DSMR) was performed at 1.5 T. For first-pass perfusion imaging, the standard adenosine stress imaging protocol (140 μg/kg/min for 6 minutes, 3-slice turbo field echo-echo-planar imaging or steady-state free precession sequence, 0.05 mmol/kg Gd-DTPA) was applied, and for DSMR, the standard high-dose dobutamine/atropine protocol (steady-state free-precession cine sequence) was applied. Stress testing was classified as pathological if at MRP ≥1 segment showed an inducible perfusion deficit >25% transmurality or if at DSMR ≥1 segment showed an inducible wall motion abnormality.
During a median follow-up of 2.3 years (range, 0.06 to 4.55 years), 19 cardiac events occurred (4.1%; 9 cardiac deaths, 10 nonfatal myocardial infarctions). The 3-year event-free survival was 99.2% for patients with normal MRP and DSMR and 83.5% for those with abnormal MRP and DSMR. Univariate analysis showed ischemia identified by MRP and DSMR to be predictive of cardiac events (hazard ratio, 12.51; 95% confidence interval, 3.64 to 43.03; and hazard ratio, 5.42; 95% confidence interval, 2.18 to 13.50; P<0.001, respectively); other predictors were diabetes mellitus, known coronary artery disease, and the presence of resting wall motion abnormality. By multivariate analysis, ischemia on magnetic resonance stress testing (MRP or DSMR) was an independent predictor of cardiac events. In a stepwise multivariate model (Cox regression), an abnormal magnetic resonance stress test result had significant incremental value over clinical risk factors and resting wall motion abnormality (P<0.001). Conclusions: In patients with known or suspected coronary artery disease, myocardial ischemia detected by MRP and DSMR can be used to identify patients at high risk for subsequent cardiac death or nonfatal myocardial infarction. For patients with normal MRP and DSMR, the 3-year event-free survival was 99.2%. MR stress testing provides important incremental information over clinical risk factors and resting wall motion abnormalities. abstract_id: PUBMED:12539099 Dobutamine stress magnetic resonance imaging: a valuable method in the noninvasive diagnosis of ischemic heart disease. We assessed the clinical applicability of dobutamine stress magnetic resonance imaging (DS-MRI) for the detection of myocardial ischemia and myocardial viability. One hundred patients with suspected coronary artery disease and inconclusive exercise electrocardiography or significant repolarization abnormalities on the resting ECG underwent breath-hold DS-MRI (1 Tesla), 4 days after cessation of anti-ischemic medication. Three left ventricular short axis planes were imaged at increasing doses of dobutamine. Recovery of wall thickening in a previously diminished or non-contracting segment at low-dose dobutamine was considered proof of viability. Development of hypo-, a- or dyskinesia at higher doses of dobutamine was taken to indicate ischemia. If the DS-MRI test was positive for ischemia, coronary angiography was performed. If indicated, this was followed by revascularization. If DS-MRI did not demonstrate ischemia, neither angiography nor revascularization were carried out. Ninety-five DS-MRI investigations were available for diagnosis. Forty-two patients had DS-MRI scans positive for ischemia and subsequently underwent coronary angiography. One patient was false-positive. All 53 patients with non-ischemic DS-MRI scans had follow-up for 11-23 months (mean 17 months). One patient died suddenly 2 weeks after the MRI-test. The other 52 patients did not experience any coronary event or sudden cardiac death. The predictive value of a positive (for ischemia) DS-MRI test is 98% and the predictive value of a negative DS-MRI test is also 98%. abstract_id: PUBMED:20977757 The 20 year evolution of dobutamine stress cardiovascular magnetic resonance.
Over the past 20 years, investigators worldwide have developed and utilized dobutamine magnetic resonance stress testing procedures for the purpose of identifying ischemia, viability, and cardiac prognosis. This article traces these developments and reviews the data utilized to substantiate this relatively new noninvasive imaging procedure. abstract_id: PUBMED:15289384 Comparison of dobutamine stress magnetic resonance, adenosine stress magnetic resonance, and adenosine stress magnetic resonance perfusion. Background: Dobutamine stress MR (DSMR) is highly accurate for the detection of inducible wall motion abnormalities (IWMAs). Adenosine has a more favorable safety profile and is well established for the assessment of myocardial perfusion. We evaluated the diagnostic value of IWMAs during dobutamine and adenosine stress MR and adenosine MR perfusion compared with invasive coronary angiography. Methods And Results: Seventy-nine consecutive patients (suspected or known coronary disease, no history of prior myocardial infarction) scheduled for cardiac catheterization underwent cardiac MR (1.5 T). After 4 minutes of adenosine infusion (140 μg/kg/min for 6 minutes), wall motion was assessed (steady-state free precession), and subsequently perfusion scans (3-slice turbo field echo-echo planar imaging; 0.05 mmol/kg Gd-BOPTA) were performed. After a 15-minute break, rest perfusion was imaged, followed by standard DSMR/atropine stress MR. Wall motion was classified as pathological if ≥1 segment showed IWMAs. The transmural extent of inducible perfusion deficits (<25%, 25% to 50%, 51% to 75%, and >75%) was used to grade segmental perfusion. Quantitative coronary angiography was performed with significant stenosis defined as >50% diameter stenosis. Fifty-three patients (67%) had coronary artery stenoses >50%; sensitivity and specificity for detection by dobutamine and adenosine stress and adenosine perfusion were 89% and 80%, 40% and 96%, and 91% and 62%, respectively. Adenosine IWMAs were seen only in segments with >75% transmural perfusion deficit. Conclusions: DSMR is superior to adenosine stress for the induction of IWMAs in patients with significant coronary artery disease. Visual assessment of adenosine stress perfusion is sensitive with a low specificity, whereas adenosine stress MR wall motion is highly specific because it identifies only patients with high-grade perfusion deficits. Thus, DSMR is the method of choice for current state-of-the-art treatment regimens to detect ischemia in patients with suspected or known coronary artery disease but no history of prior myocardial infarction. abstract_id: PUBMED:11797508 Comparison of dobutamine-stress magnetic resonance imaging and dipyridamole-Tl-201-SPECT as alternative strategies for the detection of coronary artery disease in patients not suitable for stress echocardiography. Aim: During the past decade stress-echocardiography has gained increasing popularity for detection of myocardial ischemia in patients with coronary artery disease. However, about 10% to 15% of the patients submitted for stress-echocardiography do not have an adequate acoustic window. The purpose of this study was to compare high-dose dobutamine-stress magnetic resonance imaging (dobutamine-MRI) with dipyridamole-Tl-201-SPECT (dipyridamole-SPECT) as alternative strategies for detection of myocardial ischemia in patients with inadequate image quality by stress-echocardiography.
Patients And Methods: Of 296 patients who were consecutively submitted to stress-echocardiography, 45 patients (15%) had two or more segments that could not be evaluated according to the 16-segment-model of the American Society of Echocardiography. They underwent dobutamine-MRI and dipyridamole-SPECT studies, which were evaluated using a 28-segment model. Myocardial segments were attributed to perfusion territories of the coronary arteries. The results of ischemia detection were compared to the results of coronary angiography (stenosis ≥50%). Results: In comparison to coronary angiography, dobutamine-MRI yielded a sensitivity of 87%, a specificity of 86%, a positive predictive value of 93%, a negative predictive value of 75% and a diagnostic accuracy of 86%. For dipyridamole-SPECT, the results were 90%, 86%, 93%, 80% and 89%, respectively. These values were not significantly different. Conclusions: In patients not suitable for stress-echocardiography, both dobutamine-MRI and dipyridamole-SPECT are reliable strategies for detection of myocardial ischemia. Selection is dependent on the patient criteria, technical considerations, local logistics and experience of the observer. abstract_id: PUBMED:22014488 Evaluation of myocardial viability with cardiac magnetic resonance imaging. Assessment of myocardial viability is of clinical and scientific significance. Traditionally, the detection of myocardial viability (either stunning or hibernation) has been used in aiding diagnosis before revascularization, especially in high-risk patients. There is a considerable body of observational evidence showing substantial improvement after revascularization in patients with significant left ventricular dysfunction and myocardial viability. Recent randomized evidence has questioned the benefit of viability testing but must be interpreted with caution. Dobutamine stress echocardiography, nuclear imaging, and cardiovascular magnetic resonance are the mainstays of viability testing and provide information on contractile function, cellular metabolism, and myocardial fibrosis, respectively. Larger, multicenter trials with outcome data are needed to define the nature of viability testing and, particularly, cardiovascular magnetic resonance in moderate-to-severe ischemic cardiomyopathy. abstract_id: PUBMED:24925328 Comparative definitions for moderate-severe ischemia in stress nuclear, echocardiography, and magnetic resonance imaging. The lack of standardized reporting of the magnitude of ischemia on noninvasive imaging contributes to variability in translating the severity of ischemia across stress imaging modalities. We identified the risk of coronary artery disease (CAD) death or myocardial infarction (MI) associated with ≥10% ischemic myocardium on stress nuclear imaging as the risk threshold for stress echocardiography and cardiac magnetic resonance. A narrative review revealed that ≥10% ischemic myocardium on stress nuclear imaging was associated with a median rate of CAD death or MI of 4.9%/year (interquartile range: 3.75% to 5.3%). For stress echocardiography, ≥3 newly dysfunctional segments portend a median rate of CAD death or MI of 4.5%/year (interquartile range: 3.8% to 5.9%). Although imprecisely delineated, moderate-severe ischemia on cardiac magnetic resonance may be indicated by ≥4 of 32 stress perfusion defects or ≥3 dobutamine-induced dysfunctional segments.
Risk-based thresholds can define equivalent amounts of ischemia across the stress imaging modalities, which will help to translate a common understanding of patient risk on which to guide subsequent management decisions. abstract_id: PUBMED:25696185 Dobutamine stress magnetic resonance imaging suffices for the demonstration of myocardial ischaemia and viability. We report three patients in whom dobutamine stress magnetic resonance imaging (DS-MRI) was essential in assessing myocardial ischaemia. Two patients were referred to the cardiologist because of chest pain. Patient A had typical exertional angina and a normal resting electrocardiogram (ECG). Patient B had typical exercise-induced angina and had recently experienced an attack of severe chest pain at rest for 15 minutes. The ECG showed a complete left bundle branch block (LBBB). Patient C was referred for heart failure of unknown origin. There were no symptoms of chest pain during rest or exercise. Echocardiography in this patient demonstrated global left ventricular (LV) dilatation, systolic dysfunction and a small dyskinetic segment in the inferior wall. In all these patients, exercise stress testing had failed to demonstrate myocardial ischaemia. Patients A and C produced normal findings whereas in patient B the abnormal repolarisation due to pre-existent LBBB precluded a diagnosis of ischaemia. Breath-hold DS-MRI was performed to study LV wall motion and wall thickening at rest through increasing doses of dobutamine. A test was considered positive for myocardial ischaemia if wall motion abnormalities developed at high-dose levels of the drug (20 μg/kg/min or more with a maximum of 40 μg/kg/min) in previously normal vascular territories or worsened in a segment that was normal at baseline. Recovery of wall thickening in a previously hypokinetic or akinetic segment at a low dose of dobutamine (5-10 μg/kg/min) was taken as proof of viability. Patients A and B developed hypokinesia progressing into akinesia at high-dose dobutamine in the anteroseptal area of the LV, indicative of ischaemia. These findings were corroborated by coronary angiography demonstrating severe coronary artery disease, which led to coronary artery bypass grafting (CABG) in patient A and balloon angioplasty in patient B. In patient C, global recovery of LV contractions during low-dose dobutamine was followed by hypokinesia in the inferoseptal area during high-dose dobutamine. This biphasic response indicates myocardial viability as well as ischaemia. CABG was carried out because of multiple stenoses in the left coronary artery. Post-operatively, LV function normalised. DS-MRI is a valuable method for detecting myocardial ischaemia and viability in patients with suspected coronary artery disease, and can be applied in every hospital with MRI equipment at its disposal. abstract_id: PUBMED:19774515 Diagnosis of coronary artery disease - part 3: Stress echocardiography and cardiac magnetic resonance imaging. The diagnostic principle of stress echocardiography and cardiac magnetic resonance imaging (CMR) for the diagnosis of coronary artery disease is based on the visualisation of ischaemia-induced wall motion abnormalities. From a logistic point of view, stress echocardiography is the easiest test given that it can be performed at bedside.
Both stress echocardiography and stress CMR also permit direct visualisation of myocardial perfusion at rest and during pharmacological stress (typically adenosine) using contrast administration (microbubbles for stress echocardiography, gadolinium for stress CMR). These novel methods for the visualisation of myocardial perfusion seem to provide information similar to that obtained using myocardial perfusion imaging, but these techniques (particularly myocardial perfusion echocardiography) are not broadly established in daily practice yet. Similar to other non-invasive tests, stress echocardiography and stress CMR have the highest diagnostic yield in patients with intermediate probability of significant coronary artery disease. Answer: Yes, the presence of inducible ischemia even in a single 'culprit' myocardial segment during dobutamine stress magnetic resonance (DSMR) is predictive of hard cardiac events in patients with known or suspected coronary artery disease (CAD). A study that included 3166 patients who underwent DSMR found that patients with only one ischemic segment showed a high rate of hard cardiac events of approximately 6% annually.
Instruction: Access to expensive cancer drugs in Dutch daily practice: should we be concerned? Abstracts: abstract_id: PUBMED:24829184 Access to expensive cancer drugs in Dutch daily practice: should we be concerned? Background: To investigate whether equal access to bortezomib has been achieved under the Dutch policy regulations that guarantee equal access to expensive inpatient drugs. Methods: We investigated accessibility to bortezomib treatment at national and regional levels by (i) conducting interviews with stakeholders in the Dutch healthcare system to explore prescription barriers and (ii) tabulating sales data from 2004-2009 and trial participation rates. Results: Interviews revealed awareness of the high treatment costs, although prescription barriers were not encountered. National use of bortezomib increased slowly (treating 2% of patients in 2004 to 17% in 2009), indicating a long adjustment period. Furthermore, use remains below the rate estimated by the professional association of haematologists (27%). Regional differences were found for both daily practice use (e.g. ranging from 13-27% in 2009) and clinical trial participation (e.g. ranging from 1-12% in 2006). Conclusion: Our results were somewhat conflicting: interviews did not reveal any prescription barriers, but quantitative methods showed regional differences, signs of underutilisation, and access inequality. Investigating use and accessibility, based on data triangulation, provides valuable feedback which can enhance evidence-based decision making for both physicians and policymakers. This could improve appropriate and efficient use and ensure equal access to expensive drugs. abstract_id: PUBMED:38352693 Ethics of access to newly approved expensive medical treatments: multi-stakeholder dialogues in a publicly funded healthcare system. Background: Due to rising healthcare expenditures, countries with publicly funded healthcare systems face challenges when providing newly approved expensive anti-cancer treatments to all eligible patients. In the Netherlands in 2015, the so-called Coverage Lock (CL), was introduced to help safeguard the sustainability of the healthcare system. Since then, newly approved treatments are no longer automatically reimbursed. Previous work has shown that as policies for access to CL treatments are lacking, patient access to non-reimbursed treatments is limited and variable, which raises ethical issues. The ethics of access were discussed in a series of multi-stakeholder dialogues in the Netherlands. Methods: Three dialogues were held in early 2023 and included physicians, health insurers, hospital executives, policymakers, patients, citizens, and representatives of pharmaceutical companies, patient and professional organizations. In advance, participants had received an 'argument scheme' featuring three models: 1) access based on third-party payment (e.g., by pharmaceutical companies, health insurers or hospitals) 2) access based on out-of-pocket payments by patients 3) no access to CL treatments. During the dialogues, participants were asked to discuss the merits of the ethical arguments for and against these models together, and ultimately to weigh them. The discussions were audio-taped, transcribed, coded, and thematically analyzed. Results: Generally, most stakeholders were in favour of allowing access-at least when treatments are clearly beneficial-to treatments in the CL. 
When discussing third-party payment, stakeholders favoured payment by pharmaceutical companies over payment by health insurers or hospitals, not wanting to usurp collective funds while cost-effectiveness assessments are still pending. Largely, stakeholders were not in favour of out-of-pocket payments, emphasizing solidarity and equal access as important pillars of the Dutch healthcare system. Recurrent themes included the conflict between individual and collective interests, shifting attitudes, withholding access as a means to put pressure on the system, and the importance of transparency about access to CL-treatments. Conclusion: Policies for access to non-reimbursed treatments should address stakeholders' concerns regarding transparency, equal access and solidarity, and loss of potential health benefits for patients. Multi-stakeholder dialogues are an important tool to help inform policy-making on access to newly approved (too) expensive treatments in countries facing challenges to the sustainability of healthcare systems. abstract_id: PUBMED:24314625 Access to innovation: is there a difference in the use of expensive anticancer drugs between French hospitals? In DRG-based hospital payment systems, expensive drugs are often funded separately. In France, specific expensive drugs (including a large proportion of anticancer drugs) are fully reimbursed up to national reimbursement tariffs to ensure equity of access. Our objective was to analyse the use of expensive anticancer drugs in public and private hospitals, and between regions. We had access to sales per anticancer drug and per hospital in the year 2008. We used a multilevel model to study the variation in the mean expenditure of expensive anticancer drugs per course of chemotherapy and per hospital. The mean expenditure per course of chemotherapy was €922 [95% CI: 890-954]. At the hospital level, specialisation in chemotherapies for breast cancers was associated with a higher expenditure of anticancer drugs per course for those hospitals with the highest proportion of cancers at this site. There were no differences in the use of expensive drugs between the private and the public hospital sector after controlling for case mix. There were no differences between the mean expenditures per region. The absence of disparities in the use of expensive anticancer drugs between hospitals and regions may indicate that exempting chemotherapies from DRG-based payments and providing additional reimbursement for these drugs has been successful at ensuring equal access to care. abstract_id: PUBMED:24810193 Expensive cancer drugs and just health care. Extremely expensive drugs, which cost more than $100,000 per year for the treatment of one patient, are increasingly common. The benefits of these drugs may be inaccessible to many patients, or the overall costs could place a heavy burden on the health care system and public resources. This article describes the overall background of this trend and addresses the problem of expensive drugs from a perspective of just health care and just health outcomes. Therefore, basic aspects of just health care are outlined, such as goods and principles relevant from a perspective of justice. This framework is applied to the allocation of expensive drugs on three levels. These reflections will demonstrate that there is no simple solution to this problem, and that the decisions cannot be taken by experts, but should be the result of an open, transparent, and fair public dialogue on health priorities.
abstract_id: PUBMED:22319998 Access to expensive anti-cancer drugs. Expensive anti-cancer drugs expose controversy underlying the process for resource allocation decisions, and intermittently result in marked publicity, emotive discussions about access to novel and expensive treatments, and political involvement that may override existing processes. This column outlines the methods of determining whether or not a treatment is considered appropriate to fund, and focuses upon the evidence of patient and doctor wishes. The existing research illustrates the complexity of patient and oncologist decision-making when these drugs are to be considered. Past litigation to obtain access to expensive treatments is discussed, along with the interactions between patients, pharmaceutical companies, health services and oncologists. This evolving field is being transformed by developments in molecular biology enabling targeted drugs, and amply demonstrates the complexity of funding decisions and how expensive treatments are considered by a range of stakeholders. abstract_id: PUBMED:34745459 Manufacturers' views on outcome-based agreements. Introduction: Outcome-based agreements (OBAs) are occasionally deployed to relieve the burden of high drug prices on healthcare budgets. However, it is not clear when manufacturers are willing to collaborate in establishing such agreements. Therefore, we explored the feasibility of OBAs from the manufacturer's point of view. Methods: Dutch market-access experts from eight major pharmaceutical companies, globally active in the field of oncology, were interviewed. Opinions were compiled, and interviewees and their colleagues were then given the chance to review the manuscript for additional comments. Results: Most interviewees believe that OBAs can be useful in providing access to off-label use of authorised medicines, especially when no alternative treatment is available for seriously ill patients. For the licenced indications, manufacturers seem to be more inclined to collaborate when there is a potential incentive to improve market-access (e.g., if the product is not used because of concerns regarding its effectiveness). However, manufacturers are less likely to collaborate when there are greater financial risks for the company. Further concerns were definition of outcome or performance, the impact of compliance on the effectiveness of a drug, administrative burden, uncertainty regarding revenue recognition and the challenges of reimbursing combination therapies. Discussion: Market-access interviewees were generally positive about OBAs; however, they were more reluctant towards OBAs for registered indications with low response rates. The definition of performance or outcome and its clinical relevance and validity, the feasibility of OBAs and their administrative burden are relevant aspects that need to be addressed in advance. Ideally, countries should collaborate to share the outline of OBAs and create shared databases to accumulate evidence. abstract_id: PUBMED:37733175 Principles for Just Prioritization of Expensive Biological Therapies in the Danish Healthcare System. The Danish healthcare system must meet the need for easy and equal access to healthcare for every citizen. However, investigations have shown unfair prioritization of cancer patients and unfair prioritization of resources for expensive medicines over care. What is needed are principles for proper prioritization.
This article investigates whether American ethicists Tom Beauchamp and James Childress's principle of justice may be helpful as a conceptual framework for reflections on prioritization of expensive biological therapies in the Danish healthcare system. We present an empirical study exploring the principles for prioritizing new expensive biological therapies. This study includes qualitative interviews with key Danish stakeholders experienced in antibody therapy and prioritizing resources for expensive medicines. Beauchamp and Childress's model only covers government-funded primary and acute healthcare. Based on the interviews, this study indicates that, to be helpful in a Danish context, this model should include equal access for citizens to government-funded primary and acute healthcare, costly medicine, and other scarce treatments. We conclude that, slightly modified, Beauchamp and Childress's principle of justice might be useful as a conceptual framework for reflections on the prioritization of expensive biological therapies in the Danish healthcare system. abstract_id: PUBMED:29698164 Equity in access to high cost drugs in Uruguay. Objective: The aim of this study was to determine the equity in access to high cost oncology medicines reimbursed by the Uruguayan National Health System. The causes of access inequities were also determined. Methods: Different levels of access were determined by crossing epidemiological and reimbursement data with geographical distribution and number of Health System users. Possible causes of inequities were determined and weighted by the Delphi technique. Results: Access of patients assisted in the public sector to bevacizumab for metastatic colorectal cancer, rituximab for the treatment of non-Hodgkin lymphoma and trastuzumab for advanced HER2+ breast cancer appears to be lower compared to patients assisted in the private sector. Regarding rituximab for the treatment of non-Hodgkin lymphoma and trastuzumab for advanced HER2+ breast cancer, the results seem to show less access for patients residing outside the south region compared to those living in the south region. The main barriers to reimbursement for patients living outside the southern region are access to pathological anatomy studies, imaging and other clinical analyses. Late diagnosis appears to be the main hurdle in access to these anti-cancer drugs for patients assisted in the public sector. Conclusions: Equitable access to high cost drugs reimbursed by the National Health System requires policy decisions to address this issue. abstract_id: PUBMED:34814670 Financing Strategies to Facilitate Access to High-Cost Anticancer Drugs: A Systematic Review of the Literature. Background: Each country manages access to anticancer drugs differently due to variations in the structure and financing of the health system, but a summary of the various strategies used is absent. This study aimed to review and summarize financing strategies implemented across countries to facilitate access to high-cost anticancer drugs. Methods: We conducted a systematic review of articles referenced in PubMed, Embase, and Web of Science through May 12, 2021. Articles published in the English language from 2000 that describe strategies implemented in different countries to facilitate access to high-cost anticancer drugs were included. Letters, news articles, and proposed strategies were excluded. Quality assessment was not performed as we aimed to summarize the strategies. Data were analyzed by thematic analysis.
A review protocol was registered at PROSPERO (CRD42018068616). Results: The review included 204 studies from 176 countries. Three themes of financing strategies were identified: (1) Basic pharmaceutical reimbursement and pricing policies, (2) Alternative funding strategies specific to high-cost drugs, and (3) Financial assistance for individual patients. Access in most countries depends mainly on basic pharmaceutical reimbursement policies (165 of 176 countries). Apart from that, high-income countries (HICs) tended to use funding strategies targeting high-cost drugs (72% of HICs vs 0%-24% of the rest), such as managed entry agreements (MEAs) or dedicated funds for high-cost drugs. In contrast, lower-income countries tended to implement financial assistance programs for cancer patients as a tool to increase access (32% of HICs vs 62%-79% of the rest). Conclusion: Many countries have implemented a combination of strategies to increase access to high-cost anticancer drugs. Most low- and middle-income countries utilized placement of anticancer drugs on a national list of essential medicines and patient assistance programs (PAPs) to facilitate access, while many HICs implemented a broader range of strategies. abstract_id: PUBMED:32824444 Unequal Access to Newly Registered Cancer Drugs Leads to Potential Loss of Life-Years in Europe. Background: Many new cancer medicines have been developed that can improve patients' outcomes. However, access to these agents comes later in Europe than in the United States (US). The aim of this study is to assess the access in Europe to newly registered cancer drugs and to get more insight into the implications of these variations for patients. Methods: A retrospective database study was conducted. Analyses involved 12 cancer drugs and 28 European countries in the period 2011-2018. Time to patient access, speed of drug uptake, and the potential loss of life years due to a delay in access have been studied. Results: Marketing approval for the cancer drugs came on average 242 days later in Europe than in the US, and actual patient access varied extensively across Europe. The average time to market in Europe was 403 days (range 17-1187 days). The delay in patient access to ipilimumab and abiraterone may have led to a potential loss of more than 30,000 life years. Conclusion: It takes a long time for patients to get access to newly registered cancer drugs, and there is great variation in access. The health outcomes can be substantially improved by faster processes. Answer: Based on the provided abstracts, there is evidence to suggest that concerns about access to expensive cancer drugs in Dutch daily practice are warranted. Interviews with stakeholders in the Dutch healthcare system did not reveal any prescription barriers for bortezomib, yet quantitative methods showed regional differences, signs of underutilization, and access inequality (PUBMED:24829184). Additionally, the introduction of the Coverage Lock (CL) in the Netherlands in 2015, which means newly approved treatments are no longer automatically reimbursed, has led to limited and variable patient access to non-reimbursed treatments, raising ethical issues (PUBMED:38352693).
Stakeholders in the Dutch healthcare system generally favor allowing access to treatments in the CL, especially when treatments are clearly beneficial, but there is a preference for third-party payment by pharmaceutical companies over health insurers or hospitals to avoid using collective funds while cost-effectiveness assessments are pending (PUBMED:38352693). There is also a general disfavor towards out-of-pocket payments by patients, emphasizing the importance of solidarity and equal access as pillars of the Dutch healthcare system (PUBMED:38352693). The ethical considerations surrounding the allocation of expensive drugs are complex and require an open, transparent, and fair public dialogue on health priorities (PUBMED:24810193). Moreover, the controversy surrounding access to expensive anti-cancer drugs highlights the complexity of funding decisions and the range of stakeholders involved (PUBMED:22319998). In summary, there are indeed concerns regarding access to expensive cancer drugs in Dutch daily practice, as evidenced by regional disparities, ethical dilemmas, and the challenges of balancing individual needs with the sustainability of the healthcare system. These concerns necessitate careful consideration and the development of policies that address transparency, equal access, solidarity, and the potential health benefits for patients (PUBMED:38352693).
Instruction: Should programmes for community-level meningococcal vaccination be considered in Australia? Abstracts: abstract_id: PUBMED:11416085 Should programmes for community-level meningococcal vaccination be considered in Australia? An economic evaluation. Background: Disease due to serogroup C Neisseria meningitidis is life-threatening and potentially preventable by vaccination. In 1999, the UK instigated mass vaccination after a sustained increase in serogroup C meningococcal disease. In the same year, Victoria, Australia, experienced a similar change in disease epidemiology. It is timely to undertake an economic evaluation of options for community vaccination in Australia based on local data. Methods: Cost-effectiveness and cost-benefit analyses of three options for use of polysaccharide vaccine were undertaken for a hypothetical population aged 15-19 years. Baseline analyses assumed 5 years' duration of vaccine protection following a single year of programme implementation. Sensitivity analyses of key variables were performed, including vaccine coverage and effectiveness, case fatality rate and the discount rate. Outcomes included the number of people vaccinated, cases averted, life-years saved and disability-adjusted life-years (DALY) averted. Cost-benefit analysis used lost earnings avoided as a measure of vaccination benefit. Results: Vaccination of people aged 15-19 years in a defined population with a high rate of disease was the most cost-effective option. Compared with no vaccination and assuming 5 years' duration of protection and exclusion of direct cost savings, this resulted in a discounted cost per life-year saved of $23,623, a cost per DALY avoided of $21,097 and benefits exceeding costs in discounted terms. The 'break-even' incidence rate for this option with exclusion of direct cost savings was 14.0/100,000. Conclusions: Community use of polysaccharide vaccination may be cost effective in Australia under certain conditions. Economic evidence favours use of vaccination in well-defined populations with a high rate of disease. Policy decision-making also requires consideration of non-economic factors, including feasibility of implementation and risk perception by the community. abstract_id: PUBMED:38270172 Impact of Meningococcal ACWY Vaccination Program during 2017-18 Epidemic, Western Australia, Australia. The rising incidence of invasive meningococcal disease (IMD) caused by Neisseria meningitidis serogroup W in Western Australia, Australia, presents challenges for prevention. We assessed the effects of a quadrivalent meningococcal vaccination program using 2012-2020 IMD notification data. Notification rates peaked at 1.8/100,000 population in 2017; rates among Aboriginal and Torres Strait Islander populations were 7 times higher than for other populations. Serogroup W disease exhibited atypical manifestations and increased severity. Of 216 cases, 20 IMD-related deaths occurred; most (19/20) were in unvaccinated persons. After the 2017-2018 targeted vaccination program, notification rates decreased from 1.6/100,000 population in 2018 to 0.9/100,000 population in 2019 and continued to decline in 2020. Vaccine effectiveness (in the 1-4 years age group) using the screening method was 93.6% (95% CI 50.1%-99.2%) in 2018 and 92.5% (95% CI 28.2%-99.2%) in 2019. Strategic planning and prompt implementation of targeted vaccination programs effectively reduce IMD.
abstract_id: PUBMED:34094620 Lessons from a community vaccination programme to control a meningococcal disease serogroup W outbreak in remote South Australia, 2017. Problem: From December 2016 to February 2017, two cases of invasive meningococcal disease and one case of meningococcal conjunctivitis, all serogroup W, occurred in Aboriginal children in the Ceduna region of South Australia. The clustering of cases in time and place met the threshold for a community outbreak. Context: The Ceduna region is a remote part of South Australia, with more than 25% of the population identifying as Aboriginal or Torres Strait Islander. Action: As part of the outbreak response, a community-wide meningococcal vaccination programme against serogroups A, C, W and Y was implemented in a collaboration among different agencies of the South Australia Department for Health and Well-being, Aboriginal health and community services providers, and other local service providers and government agencies. The programme comprised an outbreak vaccination schedule, targeting all people aged ≥2 months residing in the cases' places of residence or in towns with close links. Outcome: Between March and June 2017, 3383 persons were vaccinated, achieving an estimated coverage of 71-85% of the target population, with 31% (n = 1034) of those vaccinated identifying as Aboriginal or Torres Strait Islander. No local cases of serogroup W occurred during the vaccination programme, but two further cases were notified by the end of 2018. Discussion: The participation of a large number of local and non-health-sector stakeholders in programme planning and implementation, a clear response management structure and high community acceptability were identified as key factors that contributed to the programme achieving high vaccination coverage. The need to develop standard operating procedures for community-based outbreak response interventions to ease logistical challenges was considered an important lesson learnt.
Additionally, the recent licensure of two protein-based, broad-spectrum meningococcal B vaccines finally provides protection against the most common group responsible for childhood IMD across Europe and Australia. This review describes global IMD epidemiology across each continent and trends over time, the serogroups responsible for IMD, the impact of meningococcal immunisation programmes and future needs to eliminate this devastating disease. abstract_id: PUBMED:30630323 Cerebrospinal Fluid Pleocytosis following Meningococcal B vaccination in an Infant. We describe a case of cerebrospinal fluid pleocytosis in a previously well infant after his first immunisation with the multicomponent meningococcal serogroup B vaccine, and advise clinicians to be cautious with the interpretation of CSF findings in children post Meningococcal B vaccination until clearer guidelines are available. Keywords: meningococcal B vaccine - cerebrospinal fluid pleocytosis - inflammatory response - infant. abstract_id: PUBMED:12498654 Mass vaccination campaign following community outbreak of meningococcal disease. During December 12-29, 1998, seven patients aged 2-18 years were diagnosed with serogroup C meningococcal disease in two neighboring Florida towns with 33,000 residents. We evaluated a mass vaccination campaign implemented to control the outbreak. We maintained vaccination logs and recorded the resources used in the campaign that targeted 2- to 22-year-old residents of the two towns. A total of 13,148 persons received the vaccinations in 3 days. Vaccination coverage in the target population was estimated to be 86% to 99%. Five additional cases of serogroup C meningococcal disease occurred in the community during the year after the campaign began, four in patients who had not received the vaccine. The cost of control efforts was approximately $370,000. Although cases continued to occur, the vaccination campaign appeared to control the outbreak. Rapid implementation, a targeted approach, and high coverage were important to the campaign's success. abstract_id: PUBMED:34569453 Evolving strategies for meningococcal vaccination in Europe: Overview and key determinants for current and future considerations. Invasive meningococcal disease (IMD) is a life-threatening, unpredictable condition. Vaccines are available against 5 of the 6 meningococcal serogroups (Men) accounting for nearly all IMD cases worldwide; conjugate monovalent MenC, quadrivalent MenACWY, and protein-based MenB vaccines are commonly used. We provide a comprehensive overview of the evolution of meningococcal vaccination strategies employed in national immunization programmes (NIPs) and their impact on IMD incidence in Europe. A more in-depth description is given for several countries: the United Kingdom (UK), the Netherlands, Greece, Italy, and Ireland. We searched European health authorities' websites and PubMed. Various vaccines and immunization schedules are used in 21 NIPs. Most countries implement MenC vaccination in infants, MenACWY in adolescents, and a growing number, MenB in infants. Only Malta has introduced MenACWY vaccination in infants, and several countries reimburse immunization of toddlers. The UK, Italy, Ireland, Malta, Andorra, and San Marino recommend MenB vaccination in infants and MenACWY vaccination in adolescents, targeting the most prevalent serogroups in the most impacted age groups.
Main factors determining new vaccination strategies are fluctuating IMD epidemiology, ease of vaccine implementation, ability to induce herd protection, favorable benefit-risk balance, and acceptable cost-effectiveness. Since 1999, when the UK introduced MenC vaccination, the reduction in IMD incidence has been gradually enhanced as other countries adopted routine meningococcal vaccinations. Meningococcal vaccination strategies in each country are continually adapted to regional epidemiology and national healthcare priorities. Future strategies may include broader coverage vaccines when available (e.g., MenABCWY, MenACWY), depending on prevailing epidemiology. abstract_id: PUBMED:26512927 Impact of vaccination on meningococcal epidemiology. Neisseria meningitidis may cause invasive disease (meningitis and sepsis), leading to considerable disease burden and mortality. However, effective vaccines are available against most pathogenic serogroups. Large-scale vaccination campaigns with the MCC vaccine conducted in the UK and with MenAfriVac in the Sahel have clearly demonstrated the direct and indirect effects of immunization programmes on disease and carriage. Moreover, the introduction of novel subcapsular vaccines against serogroup B, which may cross-protect against other serogroups, is likely to have a further effect on trends. Accurate data collection is key to elaborating vaccination strategies able to reduce the meningococcal disease burden through direct protection and herd immunity. abstract_id: PUBMED:32636356 Evaluation of a meningococcal ACWY catch-up vaccination program for adolescents in the Community of Madrid. Objective: In 2019, the Interterritorial Board of the Spanish National Health System approved the modification of the vaccination strategy against meningococcus. The objective of this study was to evaluate the meningococcal ACWY catch-up vaccination program in adolescents born in 2001 and 2002 in the Community of Madrid. Methods: Cross-sectional population-based study. Vaccine coverage was estimated and factors associated with vaccination were assessed using logistic regression models. Results: Vaccination coverage was 57.1% for those born in 2001 and 51.9% for those born in 2002. The probability of vaccination was higher in women (ORa=1.18), adolescents with chronic conditions (ORa=1.38), residents in rural areas (ORa=1.76) and in areas with lower socioeconomic level. Being born in 2002 (ORa=0.78) and abroad (ORa=0.35) were associated with a lower probability of vaccination. Conclusions: Coverage is high compared to similar campaigns in other settings, although there is significant room for improvement, so our study can contribute to the design of strategies to optimize results. abstract_id: PUBMED:9466635 Efficacy of meningococcal vaccine and barriers to vaccination. Context: Use of the quadrivalent meningococcal vaccine for control of outbreaks has increased in recent years, but the efficacy of meningococcal vaccine during mass vaccination campaigns in US civilian populations has not been assessed. Objectives: To evaluate the efficacy of the quadrivalent meningococcal vaccine against serogroup C meningococcal disease in a community outbreak setting and to evaluate potentially modifiable barriers to vaccination in an area with persistent meningococcal disease following immunization. Design: Matched case-control study of vaccine efficacy using cases of serogroup C meningococcal disease in persons eligible for vaccination during mass vaccination campaigns.
Control patients were matched by neighborhood and age. The control group was used to identify possible barriers to vaccination. Setting: Gregg County, Texas, population 106,076, from 1993 to 1995. Participants: A total of 17 case patients with serogroup C meningococcal disease eligible for vaccine and 84 control patients. Main Outcome Measures: Vaccine efficacy and risk factors associated with nonvaccination. Results: Vaccine efficacy among 2- to 29-year-olds was 85% (95% confidence interval, 27%-97%) and did not change in bivariate analyses with other risk factors that were significant in univariate analysis. Among control patients, older age was strongly associated with nonvaccination; vaccination rates for 2- to 4-year-olds, 5- to 18-year-olds, and 19- to 29-year-olds were 67%, 48%, and 20%, respectively (χ2 for linear trend, P=.01). Conclusions: The meningococcal polysaccharide vaccine was effective against serogroup C meningococcal disease in this community outbreak. Although specific barriers to vaccination were not identified, older age was a risk factor for nonvaccination in the target population of 2- to 29-year-olds. In future outbreaks, emphasis should be placed on achieving high vaccination coverage, with special efforts to vaccinate young adults. Answer: Programmes for community-level meningococcal vaccination should be considered in Australia, as they may be cost-effective under certain conditions, particularly in well-defined populations with a high rate of disease. An economic evaluation of options for community vaccination in Australia based on local data suggested that vaccination of people aged 15-19 years in a defined population with a high rate of disease was the most cost-effective option, with benefits exceeding costs in discounted terms (PUBMED:11416085). The impact of meningococcal ACWY vaccination programs during the 2017-18 epidemic in Western Australia demonstrated that strategic planning and prompt implementation of targeted vaccination programs effectively reduced invasive meningococcal disease (IMD) rates (PUBMED:38270172). Similarly, a community vaccination programme to control a meningococcal disease serogroup W outbreak in remote South Australia in 2017 achieved high vaccination coverage and was identified as a key factor contributing to the control of the outbreak (PUBMED:34094620). The epidemiology of meningococcal disease is everchanging, and the potential for prevention through vaccination is significant. Effective vaccines against the most common serogroups responsible for IMD are available, and the introduction of these vaccines has been associated with declines in disease incidence in various parts of the world (PUBMED:32504737). However, policy decision-making also requires consideration of non-economic factors, including the feasibility of implementation and risk perception by the community (PUBMED:11416085). Additionally, the evolving strategies for meningococcal vaccination in Europe provide insights into how vaccination strategies can be adapted to regional epidemiology and healthcare priorities, which could be relevant for Australian policymakers (PUBMED:34569453). In conclusion, the evidence supports the consideration of community-level meningococcal vaccination programmes in Australia, especially in response to outbreaks or in regions with high disease incidence, while also taking into account the broader context of implementation and community acceptance.
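As a minimal illustration of the screening method behind the vaccine-effectiveness estimates in PUBMED:38270172, the sketch below computes VE from the proportion of cases vaccinated (PCV) and the population vaccination coverage (PPV). The input figures are hypothetical, chosen only to land near the reported ~93% range; they are not taken from the study.

```python
def screening_method_ve(pcv: float, ppv: float) -> float:
    """Vaccine effectiveness by the screening method.

    pcv: proportion of cases that were vaccinated
    ppv: proportion of the population that was vaccinated
    VE = 1 - [pcv / (1 - pcv)] * [(1 - ppv) / ppv]
    """
    return 1 - (pcv / (1 - pcv)) * ((1 - ppv) / ppv)

# Hypothetical inputs: 10% of cases vaccinated, 60% population coverage.
# Yields a VE of about 92.6%, the same order as the 2018-2019 estimates.
print(f"VE = {screening_method_ve(0.10, 0.60):.1%}")
```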
Instruction: Is BRIEF a useful instrument in day to day care of patients with phenylketonuria? Abstracts: abstract_id: PUBMED:25541101 Is BRIEF a useful instrument in day to day care of patients with phenylketonuria? Objectives: Despite early and continuous treatment, many patients with phenylketonuria (PKU) still experience neurocognitive problems. Most problems have been observed in the domain of executive functioning (EF). For regular monitoring of EF, the use of the Behavior Rating Inventory of Executive Function (BRIEF) has been proposed. The aim of this study was to investigate whether the BRIEF is indeed a useful screening instrument in the monitoring of adults with PKU. Study Design: Adult PKU patients (n = 55; mean age 28.3 ± 6.2 years) filled out the BRIEF-A (higher scores = poorer EF) and performed computerized tasks measuring executive functions (inhibition, cognitive flexibility, and working memory). The outcome of the BRIEF-A questionnaire was compared with the neurocognitive outcome as measured by three tasks from the Amsterdam Neuropsychological Tasks (ANT). Results: Forty-two percent of the PKU patients scored in the borderline/clinical range of the BRIEF-A. Six of the 55 patients (11%) scored >1 SD above the normative mean, mostly on the Metacognition Index. With respect to ANT measurements, patients mainly showed deficits in inhibitory control (34-36%) and cognitive flexibility (31-40%) as compared to the general Dutch population. No significant correlations between the two methods were found, which was confirmed with the Bland-Altman approach, where no agreement between the two methods was observed. Only with respect to inhibitory control did patients score significantly worse on both BRIEF-A and ANT classifications. No other associations between classification according to the BRIEF-A and classifications according to the ANT tasks were found. Conclusions: Patients reporting EF problems in daily life are not necessarily those that present with core EF deficits. The results of this study suggest that regular self-administration of the BRIEF-A is not a sufficient way to monitor EF in adult PKU patients. abstract_id: PUBMED:23729344 Enhancing the attendance rate of psychiatric day care patients. Background & Problems: Attending rehabilitation programs at psychiatric daycare wards has been shown to stabilize psychiatric patients' daily routines, reduce patient symptoms, and help them regain social functions. Non-attendance increases risks of patient decompensation and return to the inpatient unit, which reduces quality of care and increases medical costs. The attendance rate for psychiatric daycare patients at our hospital was 73%. To maximize rehabilitation and treatment success, we developed a special project to raise patient program-attendance motivation. Purpose: This study aimed to enhance the attendance rate of our psychiatric day care patients to improve patient independence and their capacity to return to the community. Resolution: Methods used included rehabilitation game cards, medication education workshops, reward systems, attendance passports, and activity booklets. Results: The attendance rate of psychiatric day care patients rose from 73% to 89%, a 16-percentage-point increase. Conclusion: This program not only increased the attendance rate of psychiatric day care patients but also improved communications between professional care staff and patient family members.
In addition, this program strengthened daycare ward staff teamwork, which further enhanced treatment quality. We suggest considering rehabilitation program attendance as an important ward quality control criterion to assess and improve treatment and nursing care quality. abstract_id: PUBMED:24392389 Comparative study of feasibility of day-care surgery in rural and urban patients. Background: Day-care surgery is a common procedure in most hospitals. It has the advantages of early discharge and thus contributes to cost effectiveness. Objectives: The aim of this study was to compare the feasibility of day-care surgery in rural and urban patients. Material And Methods: Around 200 cases undergoing day-care surgeries of various types in urban and rural patients were enrolled randomly in the study. Results: In both groups, most patients were in the 21-40-year age range. The most frequent surgery in the rural group was dilatation and curettage (25%) and in the urban group was herniotomy (20%). On follow-up, complications occurred in only 1% of the rural group, while 6% of patients were lost to follow-up and 4% changed the treatment center. In urban patients, follow-up was uneventful. Patients in the urban group were more educated than those in the rural group. Conclusion: Day-care surgery is overall a beneficial procedure, but the patient's understanding of the surgery and of the importance of follow-up, which varies with educational and intellectual status, is a very important criterion. abstract_id: PUBMED:22624677 An instrument assessing patient satisfaction with day care in hospitals. Background: Patient satisfaction is an important indicator of quality of care in hospitals. Reliable and valid instruments to measure clinical and outpatient satisfaction already exist. Recently hospitals have increasingly provided day care, i.e., admitting patients for one day without an overnight stay. This article describes the adaptation of the 'Core questionnaire for the assessment of Patient Satisfaction' (COPS) for general Day care (COPS-D), and the subsequent validation of the COPS-D. Methods: The clinical COPS was supplemented with items to cover two new dimensions: Pre-admission visit and Operation Room. It was sent to a sample of day care patients of five general Dutch hospitals to investigate dimensionality, acceptability, reliability, construct and external validity. Construct validity was established by correlating the dimensions of the COPS-D with patients' overall satisfaction. Results: The COPS-D was returned by 3802 patients (response 46%). Factor analysis confirmed its structure: Pre-intake visit, Admission, Operation room, Nursing care, Medical care, Information, Autonomy and Discharge and aftercare (extraction communality 0.63-0.90). The internal consistency of the eight dimensions was good (α = 0.82-0.90); the item internal consistency corrected for overlap was satisfactory (>0.40); all inter-item correlations were higher than 0.45 but not too high (<0.90). The construct validity of all dimensions was good (r from 0.52-0.62, p < 0.01). The Information dimension had the strongest correlation with overall day care satisfaction. Conclusions: The COPS-D is a reliable and valid instrument for measuring satisfaction with day care. It complements the model of measuring patient satisfaction with clinical and outpatient care given in hospitals. It also fulfils the conditions made while developing the clinical and outpatient COPS: a short, core instrument to screen patient satisfaction.
abstract_id: PUBMED:38076270 Perioperative management for elderly patients undergoing day surgery: evidence-based practice for nursing care and day surgery. Introduction: It is necessary to explore the evidence-based practice of perioperative management of elderly patients undergoing ambulatory surgery, to optimize the clinical workflow and improve the quality of nursing care. Methods: Based on the best evidence obtained from the previous study, the perioperative management model and examination index of elderly patients undergoing day surgery were established, and the evidence of best practice was obtained by using evidence-based methodology. Then, we integrated the evidence into clinical practice and improved the process. We made a comparative analysis of the effect before and after the implementation of the evidence. Results: This study summarized 26 pieces of evidence on the perioperative management of elderly patients undergoing day surgery and transformed the evidence into 7 items and 11 examination indicators. After the application of the best evidence, nurses' knowledge, beliefs, and practice of perioperative management for elderly patients in the day-surgery ward reached 100%, and the rate of reaching the standard increased for most indicators. The times elderly patients spent waiting for admission, waiting for operation, and from return to the ward until discharge decreased significantly (all p < 0.05). Conclusion: Evidence-based perioperative management of elderly patients undergoing day surgery improves the nursing practice of clinical nurses and shortens the waiting and hospitalization times of elderly patients undergoing day surgery, and it should be promoted in clinical nursing care. abstract_id: PUBMED:23160111 Motivation in brief psychotherapy: a 30-patient study, evaluated in a psychiatric day hospital. Introduction: The most important predictive factor for a successful outcome in Brief Psychotherapy is the patient's motivation for the treatment itself. Given the scarcity of literature on the subject, so far limited to standardized tools for assessing motivation applied mainly to education and work, despite their possible application in psychotherapy, it was considered necessary to develop an ad hoc questionnaire. Materials And Methods: A pattern of motivational diagnosis of the problem was therefore created, based on a sequence of questions and answers, which would make it possible to classify each case and then lead to the use of a specific method. It is the result of an attempt to create an orderly pattern that would be easy to use and then to analyze. The questionnaire consists of twenty-one questions and requires the patient to answer "yes" or "no" to each question. It was administered to thirty patients who came from the psychiatric day hospital. The questionnaire was used as a "non-standardized thread" in which groups of questions address some central cores: spontaneous activity, motivation controlled from the outside, motivation directed to an activity that leads to results, motivation to and activity for a purpose, expectations of self-efficacy, the presence/absence of resistance and finally the area of the will.
The purpose of this work is to show, in agreement with the few studies on the subject, that motivation is a key parameter for predicting the effectiveness of a psychotherapy, in this case Brief Psychotherapy. Conclusions: The study clearly shows that motivation is the most important positive predictor in the preliminary assessment of a Brief Psychotherapy: results in treatment are most likely to be rapid and favorable if patients are motivated to change and if they are engaged with the therapist. abstract_id: PUBMED:32290669 Minimally invasive surgery and one-day care. Due to population aging in developed countries, it has become necessary to prepare a functional health care plan for the elderly. One of the possibilities to improve postoperative outcomes and recovery in elderly patients is to perform common surgical procedures in the regime of one-day surgery. We compared elderly patients who underwent surgery in a large university hospital with elderly patients who underwent surgery in a facility dedicated to one-day surgery. The homogeneity and clinical outcomes of the two groups were statistically analyzed. Complications were assessed according to the Clavien-Dindo classification. There were no statistically significant differences in demographic data and pre-operative risk assessment (ASA). The one-day surgery group had fewer complications and a shorter average hospital stay. Advanced age is not the only criterion for assessing the biological state of the patient and therefore does not always necessitate surgical care in a university hospital setting with a high volume of acute and complicated cases. Many elderly patients will benefit from a more individualized approach and healthcare facilities that specialize solely in elective procedures. When indicating elderly patients for one-day surgery, their health status is more important than their calendar age. abstract_id: PUBMED:24252087 Continuity of care in day surgical care - perspective of patients. Background: The realisation of continuity in day surgical care is analysed in this study. The term 'continuity of care' is used to refer to healthcare processes that take place in time (time flow) and require coordination (coordination flow), rapport (caring relationship flow) and information (information flow). Patients undergoing laparoscopic cholecystectomy or inguinal hernia day surgery are ideal candidates for studying the continuity of care, as the diseases are very common and the treatment protocol is mainly the same in different institutions, in addition to which the procedure is elective and most patients have a predictable clinical course. Aim: The aim of the study was to describe, from the day surgery patients' own perspective, how continuity of care was realised at different phases of the treatment, prior to the day of surgery, on the day of surgery and after it. Method: The study population consisted of 203 day surgical patients 10/2009-12/2010 (N = 350, response rate 58%). A questionnaire was developed for this study. Results: Based on the results, the continuity of care was well realised as a rule. Continuity is improved by the fact that patients know the nurse who will look after them in the hospital before the day of surgery and have a chance to meet the nurse even after the operation. Meeting the surgeon who performed the operation afterwards also improves patients' perception of continuation of care.
Conclusions: Continuity of care may be improved by ensuring that the patient meets caring staff prior to the day of operation and after the procedure. An important topic for further research would be how continuation of care is realised in the case of other patient groups (e.g. in internal medicine). On the other hand, realisation of continuation of care should also be studied from the viewpoint of those taking part in patient care in order to find similarities/differences between patients' perceptions and professionals' views. Studying interventions aimed at promoting continuity of care, for example in patient guidance, would also be of great importance. abstract_id: PUBMED:37019844 Expert Perspectives on the Additional Benefit of Day Hospices and Palliative Day Care Clinics in Germany: A Qualitative Approach. Background: In Germany, hospice and palliative care is well covered through inpatient, outpatient, and home-based care services. It is unknown if, and to what extent, there is a need for additional day care services to meet the specific needs of patients and caregivers. Methods: Two day hospices and two palliative day care clinics were selected. In the first step, two managers from each facility (n = 8) were interviewed by telephone, using a semi-structured interview guide. In the second step, four focus groups were conducted, each with three to seven representatives of hospice and palliative care from the facilities' hospice and palliative care networks. Interviews and focus groups were audio recorded, transcribed verbatim and analyzed using qualitative content analysis. Results: The interviewed experts perceived day care services as providing additional patient and caregiver benefits. Specifically, the services were perceived to meet patient needs for social interaction and bundled treatments, especially for patients who did not fit into inpatient settings (due to, e.g., their young age or a lack of desire for inpatient admission). The services were also perceived to meet caregiver needs for support, providing short-term relief for the home care situation. Conclusions: The results suggest that inpatient, outpatient, and home-based hospice and palliative care services do not meet the palliative care needs of all patients. Although the population that is most likely to benefit from day care services is assumed to be relatively small, such services may meet the needs of certain patient groups more effectively than other forms of care. abstract_id: PUBMED:28974875 Parental Satisfaction with Pediatric Day-Care Surgery and its Determinants in a Tertiary Care Hospital. Objective: The objective is to assess the level of parental satisfaction with pediatric day-care surgery and its different determinants. Materials And Methods: This is a descriptive study performed in a tertiary care hospital in India among parents of pediatric day-care surgery patients from June 2013 to March 2015. The core questionnaire for the assessment of patient satisfaction for general day care (COPS-D) was used. Variables related to surgery, overall satisfaction, one open-ended question, and socio-demographic data were also collected. The calculated sample size was 121. Results: The mean and standard deviation of parental satisfaction were estimated in eight domains of day care (COPS-D) using a Likert scale of 1-5.
Preadmission visit had a mean of 4.63 (0.52), day of surgery 4.65 (0.58), operating room 4.76 (0.51), nursing care 4.46 (0.79), medical care 4.89 (0.48), information 4.51 (0.68), autonomy 4.64 (0.56), and discharge 4.50 (0.72). In older children, there was less satisfaction in the information and discharge domains. Overall satisfaction was good in 88% of patients and was less than satisfactory when they had significant pain. Conclusion: Perception of quality of pediatric day-care surgery was assessed with a questionnaire and was found to be good. Variables related to surgery, such as pain, may be included in the questionnaire for assessing satisfaction in day-care surgery. Answer: The study described in abstract PUBMED:25541101 aimed to investigate whether the Behavior Rating Inventory of Executive Function (BRIEF) is a useful screening instrument for monitoring executive functioning (EF) in adults with phenylketonuria (PKU). The study involved 55 adult PKU patients who completed the BRIEF-A questionnaire and performed computerized tasks measuring executive functions from the Amsterdam Neuropsychological Tasks (ANT). The results showed that 42% of the PKU patients scored in the borderline/clinical range of the BRIEF-A, indicating poorer EF. However, no significant correlations were found between the BRIEF-A outcomes and the neurocognitive outcomes measured by the ANT tasks. The only exception was with respect to inhibitory control, where patients scored significantly worse on both BRIEF-A and ANT classifications. The study concluded that patients reporting EF problems in daily life are not necessarily those that present with core EF deficits. Therefore, the results suggest that regular self-administration of the BRIEF-A is not a sufficient way to monitor EF in adult PKU patients. Based on these findings, it can be inferred that BRIEF may not be a universally useful instrument in the day-to-day care of patients with PKU for monitoring EF.
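The Bland-Altman approach cited in PUBMED:25541101 can be made concrete with a short sketch: agreement between two methods is summarized by the mean difference (bias) and 95% limits of agreement at bias ± 1.96 SD of the paired differences. The paired scores below are hypothetical placeholders, not study data.

```python
import numpy as np

def bland_altman(a, b):
    """Return (bias, lower, upper) 95% limits of agreement for paired scores."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical standardized scores for six patients on the two methods.
brief_a = [1.2, 0.4, -0.3, 0.9, 1.5, 0.1]
ant = [0.2, 1.1, 0.8, -0.5, 0.3, 1.0]
print(bland_altman(brief_a, ant))
```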
Instruction: Outcomes and Complications of Lead Removal: Can We Establish a Risk Stratification Schema for a Collaborative and Effective Approach? Abstracts: abstract_id: PUBMED:26293652 Outcomes and Complications of Lead Removal: Can We Establish a Risk Stratification Schema for a Collaborative and Effective Approach? Background: Removal of an entire cardiovascular implantable electronic device is associated with morbidity and mortality. We sought to establish a risk classification scheme according to the outcomes of transvenous lead removal in a single center, with the goal of using that scheme to guide electrophysiology lab versus operating room extraction. Methods: Consecutive patients undergoing transvenous lead removal from January 2001 to October 2012 at Mayo Clinic were retrospectively reviewed. Results: A total of 1,378 leads were removed from 652 (age 64 ± 17 years, M 68%) patients undergoing 702 procedures. Mean (standard deviation) lead age was 57.6 (58.8) months. Forty-four percent of leads required laser-assisted extraction. Lead duration (P < 0.001) and an implantable cardioverter defibrillator (ICD) lead (P < 0.001) were associated with the need for laser extraction and procedure failure (P < 0.0001 and P = 0.02). The major complication rate was 1.9% and was significantly associated with longer lead duration (odds ratio: 1.2, 95% confidence interval: 1.1-1.3; P < 0.001). High-risk patients (with a >10-year-old pacing or a >5-year-old ICD lead) had significantly higher major events than moderate-risk (with pacing lead 1-10 years old or ICD lead 1-5 years old) and low-risk (any lead ≤1-year-old) patients (5.3%, 1.2%, and 0%, respectively; P < 0.001). Conclusions: Transvenous lead removal is highly successful, with few serious procedural complications. We propose a risk stratification scheme that may categorize patients as low, moderate, and high risk for lead extraction. Such a strategy may guide which extractions are best performed in the operating room. abstract_id: PUBMED:30784687 Efficacy and Safety of Transvenous Lead Extraction in the Device Laboratory and Operating Room Guided by a Novel Risk Stratification Scheme. Objectives: The goal of this study was to evaluate a novel risk stratification scheme to categorize patients on the basis of risk to either an operating room or device laboratory with a rescue strategy. Background: Lead extraction can be complicated by lethal issues such as vascular and cardiac rupture. Currently, the optimal site for lead extraction has not been well established. Methods: A risk stratification scheme was developed from previously available risk factors for major complications. Patients were prospectively risk stratified between October 2013 and January 2016. High-risk procedures were performed in the operating room with ready surgical services; intermediate-risk procedures were performed in the device laboratory. Results: In total, 349 leads were removed from 187 patients (age 61.0 ± 17.2 years; 66.3% men) over 27 months. Seventy-two patients (38.5%) were categorized as high risk. Median implant duration of the oldest lead per patient was 11.2 years (interquartile range: 7.9 to 14.9 years) in the operating room group versus 2.6 years (interquartile range: 1.6 to 4.9 years) in the device laboratory group (p < 0.001). Clinical success in the operating room (95.8%) and device laboratory (99.1%) groups was similar (p = 0.16).
A higher incidence of major complications occurred in the high-risk group (operating room group: 6.9%; device laboratory: 0.0%; p = 0.007). In-hospital mortality (operating room group: 8.3%; device laboratory: 2.6%; p = 0.09) and long-term (2-year) survival (operating room: 70.8%; device laboratory: 84.4%; p = 0.07) rates were similar. Conclusions: Use of a novel risk stratification scheme in guiding the selection of operating room versus device laboratory for lead extraction is feasible, safe, and efficacious. Intermediate-risk procedures can be performed safely in the device laboratory with a rescue strategy, without excess surgical resource utilization. abstract_id: PUBMED:33781982 Transvenous lead extraction in 1000 patients guided by intraprocedural risk stratification without surgical backup. Background: Transvenous lead extraction (TLE) carries a significant risk of morbidity and mortality. Reliable preprocedural risk predictors to guide resource allocation and optimize procedural safety are lacking. Objective: The aim of this study was to evaluate an intraprocedural approach to risk stratification during elective TLE procedures. Methods: This is a single-center retrospective study of consecutive patients who underwent elective TLE of a pacemaker or implantable cardioverter-defibrillator lead for noninfectious indications. The risk of TLE is judged intraprocedurally only after an attempt is made to extract the target lead, as long as high-risk extraction techniques are avoided. TLE was performed in a well-equipped electrophysiology laboratory with rescue strategies in place but in the absence of surgical staff. Results: During the study period, 1000 patients were included in this analysis (527 female (52.7%); mean age 61.5 ± 10.2 years). TLE was attempted for 1362 leads, with a mean lead dwell time of 73 ± 43 months (median 70 months; interquartile range 12-180 months). TLE was successful in 914 patients, partially successful in 10, and failed in 76 patients. A laser sheath was required for extraction of 926 leads (68%). Only 1 patient developed intraprocedural cardiac tamponade requiring emergency pericardiocentesis. None of the patients developed hemothorax or required surgical intervention. Conclusion: At experienced centers, intraprocedural risk stratification for TLE that avoids high-risk extraction techniques achieved successful TLE in the majority of patients and can potentially help optimize the balance between efficacy, safety, and efficiency in lead extraction. abstract_id: PUBMED:29788560 Personalized risk stratification for adverse surgical outcomes: innovation at the boundaries of medicine and computation. Patients undergoing surgery exhibit a highly variable risk of mortality and morbidity, even when undergoing similar procedures. Accurately quantifying this risk is critical for preoperative decision-making to ensure patients receive treatment that is optimal for their individual profile, and for guiding intraoperative and postoperative care. Despite the considerable attention this issue has received, existing models for surgical risk stratification remain grounded in traditional statistical methods and in problem statements that have not evolved significantly over the years. This article explores recent innovations in machine learning and data mining to advance these efforts. Risk-stratification models based on sophisticated computational techniques hold the promise of a new generation of predictive analytical tools that are highly accurate and widely deployable.
abstract_id: PUBMED:23717035 Surgical outcomes in phacoemulsification after application of a risk stratification system. Background: The purpose of this study was to determine whether application of a risk stratification system during preoperative assessment of cataract patients and subsequent allocation of patients to surgeons with matching experience may reduce intraoperative complications. Methods: Nine hundred and fifty-three consecutive patients (1109 eyes) undergoing phacoemulsification cataract surgery were assigned to two groups, i.e., group A (n = 498 patients, 578 eyes) and group B (n = 455 patients, 531 eyes). Patients from group A were allocated to surgeons with varying experience with only a rough estimate of the complexity of their surgery. Patients from group B were assigned to three risk groups (no added risk, low risk, and moderate-high risk) according to risk factors established during their preoperative assessment and were respectively allocated to resident surgeons, low-volume surgeons, or high-volume surgeons. Data were collected and entered into a computerized database. The intraoperative complication rate was calculated for each group. Results: The intraoperative complication rate was significantly lower in group B than in group A (group A, 5.88%; group B, 3.2%; P < 0.05). Patients from group B with no added risk and allocated to resident surgeons had a significantly lower rate of intraoperative complications than those from group A allocated to resident surgeons (group A, 7.2%; group B, 3.08%; P < 0.05). Conclusion: Our study demonstrates that allocation of cataract patients to surgeons matched for experience according to a uniform and reliable preoperative assessment of their risk of complications allows for better surgical outcomes, especially for resident surgeons. abstract_id: PUBMED:37959221 Suicidality in the Light of Schema Therapy Constructs, i.e., Early Maladaptive Schema and Schema Modes: A Longitudinal Study. (1) Background: The search for new strategies to diagnose people at risk of suicide and to help them is highly significant in view of the still high rate of suicidality. Schema therapy and its core constructs, i.e., early maladaptive schemas (EMSs) and schema modes, correspond to both directions. (2) Methods: This study compared the severity of EMSs and schema modes in a clinical group at suicide risk, a clinical non-suicidal group, and a control group. Intragroup comparisons were also conducted between times of crisis and psychological stability. The evaluation was supported by controlling for the psychopathological symptoms presented, following the dimensional concept. (3) Results: The unquestionable relevance of the disconnection/rejection domain in suicidality has been proven. The importance of EMSs from other domains, especially during psychiatric crises, was confirmed. Among the schema modes, child and Punitive Parent modes proved to be the most significant. There were changes in coping modes but of a lesser effect size. The protective importance of the Healthy Adult and Happy Child modes was also proven. (4) Conclusions: The results provide an indication for practitioners about the EMSs and schema modes most associated with suicide risk. They can also serve as a framework for deepening the issue of identifying and preventing suicidality in schema therapy. abstract_id: PUBMED:33523328 Non-infective left ventricular lead complications requiring re-intervention following cardiac resynchronization therapy: prevalence, causes and outcomes.
Purpose: Left ventricular (LV) lead complications in cardiac resynchronization therapy are challenging and poorly reported. We aimed to establish the prevalence, causes and outcomes of LV lead complications requiring re-intervention. Methods: We analysed the rate of complications in 2551 consecutive patients who received a transvenous de novo LV lead as part of a cardiac resynchronization therapy device between 2000 and 2018. LV lead complications requiring re-intervention were identified; those due to infection were excluded. Patient, procedural and device characteristics, and outcomes were examined for non-infective LV lead complications requiring re-intervention. Results: During a median of 4.7 years, 142 (5.6%) patients required re-intervention for non-infective LV lead complications, with a decrease from 10.7% between 2000 and 2004 and 8.7% between 2005 and 2009 to 3.2% between 2010 and 2014 and 3.2% after 2014. The most common complications were LV lead displacement (50%), high pacing threshold (28%) and phrenic nerve stimulation (15%). Of the complications, 79 (56%) occurred within 90 days post-implant and 63 (44%) later. At the end of the study period, 132/142 patients (93%) had a functional LV lead. Lead re-intervention was associated with a higher risk of complications (20%), but no increase in mortality (P = 0.19). Quadripolar leads had longer longevity and lower risk of complications compared with unipolar and bipolar LV leads. Conclusions: A small but significant proportion of patients required LV lead re-intervention for complications following de novo implant. Lead displacement accounted for half of the re-interventions. Re-intervention was associated with a higher complication rate, but 92% of these patients had functional LV leads at the end of follow-up. abstract_id: PUBMED:30118332 Utility of risk scores to predict adverse events in cardiac lead extraction. Introduction: Increasing device implantations, patient comorbidities, and longer life expectancy contribute to an increased need for lead extraction. Although transvenous lead extraction (TLE) is a highly successful procedure, some serious procedural complications are reported. In order to identify those patients who are at higher risk, risk stratification scores have been proposed. Areas covered: The major obstacles to lead extraction are the body's response to the foreign implanted material and the subsequent development of a fibrotic reaction between the lead and the vascular system. Several clinical factors and device features are associated with major complications and worse outcomes. Although different multiparametric scores predicting the safety and the efficacy of TLE procedures have been reported, none of these scores has been prospectively evaluated. Expert commentary: Correct risk stratification is needed in order to refer complex patients to centers with proven experience and avoid futile procedures. Furthermore, the identification of high-risk patients makes it possible to perform the extraction procedure in the operating room instead of the electrophysiology lab. Although some risk scores able to predict adverse events in cardiac lead extraction have been described, there are still several limitations to their use and reproducibility.
Objectives: Adult spinal deformity (ASD) surgery is a highly complex procedure that has high complication rates. Risk stratification tools can improve patient management and may lower complication rates and associated costs. The goal of this study was to identify the independent association between American Society of Anesthesiologists (ASA) class and postoperative outcomes following ASD surgery. Methods: The 2010-2014 American College of Surgeons National Surgical Quality Improvement Program database was queried using Current Procedural Terminology and International Classification of Diseases, Ninth Revision, codes relevant to ASD surgery. Patients were divided based on their ASA classification. Bivariate and multivariate logistic regression analyses were employed to quantify the increased risk of 30-day postoperative complications for patients with increased ASA scores. Results: A total of 5805 patients met the inclusion criteria, 2718 (46.8%) of whom were ASA class I-II and 3087 (53.2%) were ASA class III-IV. Multivariate logistic regression revealed ASA class to be a significant risk factor for mortality (odds ratio [OR] = 21.0), reoperation within 30 days (OR = 1.6), length of stay ≥5 days (OR = 1.7), overall morbidity (OR = 1.4), wound complications (OR = 1.8), pulmonary complications (OR = 2.3), cardiac complications (OR = 3.7), intra-/postoperative red blood cell transfusion (OR = 1.3), postoperative sepsis (OR = 2.7), and urinary tract infection (OR = 1.6). Conclusions: This is the first study evaluating the role of ASA class in ASD surgery with a large patient database. Use of ASA class as a metric for preoperative health was verified, and the association of ASA class with postoperative morbidity and mortality in ASD surgery suggests its utility in refining the risk stratification profile and improving preoperative patient counseling for those individuals undergoing ASD surgery. abstract_id: PUBMED:28411919 Complications of Renal Surgery. The incidence of the small renal mass continues to increase owing to the aging population and the ubiquity of imaging. Most of these tumors are stage I tumors. Management strategies include surveillance, ablation, and extirpation. There is a wide body of literature favoring nephron-sparing approaches. Although nephron-sparing surgery may yield decreased long-term morbidity, it is not without its drawbacks, including a higher rate of complications. Urologists must be attuned to the complications of surgery and develop strategies to minimize risk. This article reviews expected complications of surgery on renal masses and risk stratification schema. Answer: The outcomes and complications of lead removal have been studied with the aim of establishing a risk stratification schema to guide a collaborative and effective approach to transvenous lead extraction (TLE). A retrospective review of patients undergoing TLE at Mayo Clinic from January 2001 to October 2012 revealed that lead duration and the presence of an implantable cardioverter defibrillator (ICD) lead were associated with the need for laser extraction and procedure failure. The major complication rate was 1.9% and was significantly associated with longer lead duration. A proposed risk stratification scheme categorized patients as low, moderate, and high risk for lead extraction, which could guide whether extractions are best performed in the operating room (PUBMED:26293652).
A novel risk stratification scheme was evaluated prospectively between October 2013 and January 2016, categorizing patients based on risk to either an operating room or device laboratory with a rescue strategy. High-risk procedures were performed in the operating room with ready surgical services, while intermediate-risk procedures were performed in the device laboratory. The scheme proved feasible, safe, and efficacious, with similar clinical success rates in both settings, although a higher incidence of major complications occurred in the high-risk group (PUBMED:30784687). Another study focused on intraprocedural risk stratification during elective TLE procedures without surgical backup. The study found that at experienced centers, this approach, which avoids high-risk extraction techniques, achieved successful TLE in the majority of patients and can potentially optimize the balance between efficacy, safety, and efficiency in lead extraction (PUBMED:33781982). In summary, these studies support the establishment of a risk stratification schema for TLE, which can categorize patients according to their risk level and guide the choice of extraction setting, whether in the operating room or device laboratory, to improve outcomes and minimize complications.
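The lead-age cutoffs reported in PUBMED:26293652 translate directly into a simple triage rule; the sketch below encodes those published cutoffs. The function name and the handling of exact boundary values are illustrative assumptions, since the abstract does not specify how edge cases were assigned.

```python
def extraction_risk(lead_age_years: float, is_icd: bool) -> str:
    """Classify extraction risk per the cutoffs in PUBMED:26293652.

    High:     pacing lead >10 years old or ICD lead >5 years old.
    Moderate: pacing lead 1-10 years old or ICD lead 1-5 years old.
    Low:      any lead <=1 year old.
    """
    if lead_age_years <= 1:
        return "low"
    high_cutoff = 5 if is_icd else 10
    return "high" if lead_age_years > high_cutoff else "moderate"

# A 7-year-old ICD lead is high risk; a 7-year-old pacing lead is moderate.
print(extraction_risk(7, is_icd=True), extraction_risk(7, is_icd=False))
```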
Instruction: Detection of cardia intestinal metaplasia: do the biopsy number and location matter? Abstracts: abstract_id: PUBMED:15571591 Detection of cardia intestinal metaplasia: do the biopsy number and location matter? Background And Aim: Presence of intestinal metaplasia in the gastric cardia (cardia intestinal metaplasia, CIM) has been reported in 5-34% of patients undergoing upper endoscopy and is a topic of interest given the rising incidence of cancer in this location. The aim of this article is to determine the prevalence of CIM in biopsies obtained from two separate locations within the gastric cardia. Methods: Patients presenting to the endoscopy unit for upper endoscopy for any symptoms were invited to participate in the study. The biopsy protocol included: eight biopsies from the gastric cardia, four from the upper cardia (forceps across the squamocolumnar junction), four from the lower cardia (within 1 cm of the upper cardia), and four each from the gastric body and antrum. All cardia biopsies were stained with hematoxylin and eosin (H&E) and alcian blue at pH 2.5 for the presence of goblet cells, and the body/antrum biopsies were stained with Steiner silver stain for Helicobacter pylori detection. In patients testing negative for H. pylori by histology, a serology test was performed. Results: Sixty-five patients were evaluated by this protocol; median age 54 yr (range: 34-81 yr), 63 males, 53 Caucasians, and 12 African Americans. The detection of CIM was as follows: upper cardia only, 7; both upper and lower cardia, 5; and lower cardia only, 7. Thus, CIM was detected in 12 patients (18%) in the upper cardia biopsies and in 12 patients (18%) in the lower cardia; the overall prevalence of CIM was 29% (19 patients). Fifty-eight percent of CIM patients tested positive for H. pylori by either histology or serology. The addition of serology allowed for the detection of eight additional H. pylori-positive CIM patients. Conclusions: The prevalence of CIM in this study was similar (18%, four biopsies) at each location; however, if both locations were considered (eight biopsies), the prevalence increased to 29%. Thus, CIM prevalence may vary depending on the number of biopsies obtained as well as on the location of biopsies. Use of additional testing detects more patients who are H. pylori positive and should be performed if an association of CIM with H. pylori is contemplated. Future endoscopic studies of the gastric cardia should specify the location of biopsies, the number of biopsies obtained, and the tests used to diagnose H. pylori. abstract_id: PUBMED:34942042 Usefulness of linked color imaging for the detection of obscure early gastric cancer: Multivariate analysis of 508 lesions. Objectives: Early gastric cancers (EGCs) of the elevated type or with submucosal invasion are easily found by routine endoscopy. However, most early cancers are challenging to detect because of subtle morphological or color differences from the surrounding atrophic mucosa and intestinal metaplasia. Linked color imaging (LCI) enhances mucosal color differences, making it easier to detect EGCs. The aim of this study is to clarify the advantages and possible disadvantages of LCI for screening for obscure EGC. Methods: A total of 665 malignant gastric lesions resected using endoscopic submucosal dissection between January 2015 and April 2018 were retrospectively reviewed. Obviously detectable lesions were not included in the main analysis when determining the target lesion.
White light imaging (WLI)/LCI images of 508 endoscopically obscure malignant lesions were included in the final analysis and evaluated by three non-expert and three expert endoscopists using visibility scores for detection and extent. Results: The detection visibility scores using LCI were significantly higher than those using WLI regardless of lesion characteristics including location, size, histological type, depth of invasion, and Helicobacter pylori status. The detection score improved in 46.4% of cases and deteriorated in 4.9% when the modality changed from WLI to LCI. A mixed-effects multivariate logistic regression analysis showed that use of LCI (odds ratio [OR] 2.57), elevated type (OR 1.92), and invasion to the submucosa (OR 2.18) were significantly associated with improved visibility of EGC. Conclusions: Linked color imaging significantly improves visibility of EGC regardless of differences in lesion morphology, histology, location, depth of invasion, and H. pylori status compared to conventional WLI. abstract_id: PUBMED:36894035 A Liquid Biopsy Signature for the Early Detection of Gastric Cancer in Patients. Background & Aims: Diagnosing gastric cancer (GC) while the disease remains eligible for surgical resection is challenging. In view of this clinical challenge, novel and robust biomarkers for early detection, and thus improved prognosis, of GC are necessary. The present study aims to develop a blood-based long noncoding RNA (LR) signature for the early detection of GC. Methods: The present 3-step study incorporated data from 2141 patients, including 888 with GC, 158 with chronic atrophic gastritis, 193 with intestinal metaplasia, 501 healthy donors, and 401 with other gastrointestinal cancers. The LR profiles of stage I GC tissue samples were analyzed using transcriptomic profiling in the discovery phase. The extracellular vesicle (EV)-derived LR signature was identified with a training cohort (n = 554) and validated with 2 external cohorts (n = 429 and n = 504) and a supplemental cohort (n = 69). Results: In the discovery phase, one LR (GClnc1) was found to be up-regulated in both tissue and circulating EV samples with an area under the curve (AUC) of 0.9369 (95% confidence interval [CI], 0.9073-0.9664) for early-stage GC (stage I/II). The diagnostic performance of this biomarker was further confirmed in 2 external validation cohorts (Xi'an cohort, AUC: 0.8839; 95% CI: 0.8336-0.9342; Beijing cohort, AUC: 0.9018; 95% CI: 0.8597-0.9439). Moreover, EV-derived GClnc1 robustly distinguished early-stage GC from precancerous lesions (chronic atrophic gastritis and intestinal metaplasia) and GC with negative traditional gastrointestinal biomarkers (CEA, CA72-4, and CA19-9). The low levels of this biomarker in postsurgery and other gastrointestinal tumor plasma samples indicated its GC specificity. Conclusions: EV-derived GClnc1 serves as a circulating biomarker for the early detection of GC, thus providing opportunities for curative surgery and improved survival outcomes. abstract_id: PUBMED:18406219 The impact of biopsy number and site on the accuracy of intestinal metaplasia detection in the stomach. A morphometric study based on virtual biopsies. Background: Intestinal metaplasia is a risk factor for gastric carcinoma. So far few studies have focused on the efficacy of endoscopic biopsies in detecting intestinal metaplasia in relation to the site and number of biopsies performed.
The present study aims to assess the efficacy of single and multiple gastric biopsies in the detection and staging of intestinal metaplasia. Material And Methods: The study was conducted on 455 gastrectomies. We defined the extent of intestinal metaplasia on surgical specimens sampled by the "Swiss roll" technique and assessed the efficiency of an increasing number of "virtual" biopsies performed on the same rolls in recognising the true extent of intestinal metaplasia in the antro-angular region. Results: Seventy-four out of 455 cases showed intestinal metaplasia in more than 5% of the antro-angular mucosa. The correlation between antro-angular intestinal metaplasia on rolls and on virtual biopsies was always highly significant, both when a single biopsy was considered and when groups of multiple biopsies (from 2 to 6 samples) were taken into account (p ≤ 0.0001 and rho ranging from 0.304 to 0.817). By increasing the number of biopsies (from 1 to 6) we observed an increase in the rate of correct estimations when 20% extent was used as a cut-off: the increase was statistically significant in both the intestinal metaplasia ≥20% and intestinal metaplasia <20% groups (p<0.0001). The highest value of concordance between intestinal metaplasia extent in virtual biopsies and in rolls was recorded for a set of 4 antral+1 angular biopsies, whereas a set of 2 antral+1 angular biopsies turned out to be the most effective in terms of number/efficacy. Conclusions: A three-biopsy set (2 antral+1 angular) allows correct detection of intestinal metaplasia extent in 90% of cases. abstract_id: PUBMED:35363856 Expansion of Gastric Intestinal Metaplasia with Copy Number Aberrations Contributes to Field Cancerization. Intestinal metaplasia (IM) is a risk factor for gastric cancer following infection with Helicobacter pylori. To explore the susceptibility of pure gastric IM to cancer development, we investigated genetic alterations in single IM gastric glands. We isolated 50 single IM or non-IM glands from the inflamed gastric mucosa of 11 patients with intramucosal gastric carcinoma (IGC) and 4 patients without IGC; 19 single glands in the noninflamed gastric mucosa of 11 individuals from our cohort and previous dataset were also included as controls. Whole-exome sequencing of single glands revealed significantly higher accumulation of somatic mutations in various genes within IM glands compared with non-IM glands. Clonal ordering analysis showed that IM glands expanded to form clusters with shared mutations. In addition, targeted-capture deep sequencing and copy number (CN) analyses were performed in 96 clustered IM or non-IM gastric glands from 26 patients with IGC. CN analyses were also performed on 41 IGC samples and The Cancer Genome Atlas-Stomach Adenocarcinoma datasets. These analyses revealed that polyclonally expanded IM commonly acquired CN aberrations (CNA), including amplification of chromosomes 8, 20, and 2. A large portion of clustered IM glands typically consisted of common CNAs rather than other cancer-related mutations. Moreover, the CNA patterns of clustered IM glands were similar to those of IGC, indicative of precancerous conditions. Taken together, these findings suggest that, in the gastric mucosa inflamed with H. pylori infection, IM glands expand via acquisition of CNAs comparable with those of IGC, contributing to field cancerization.
Significance: This study contributes to our understanding of gastric intestinal metaplasia as a risk factor for gastric adenocarcinoma via its multifocal expansion and acquisition of CNAs and somatic mutations. abstract_id: PUBMED:31004598 Anatomic location of Barrett's esophagus recurrence after endoscopic eradication therapy: development of a simplified surveillance biopsy strategy. Background And Aims: Surveillance endoscopy is recommended after endoscopic eradication therapy (EET) for Barrett's esophagus (BE) because of the risk of recurrence. Currently recommended biopsy protocols are based on expert opinion and consist of sampling visible lesions followed by random 4-quadrant biopsy sampling throughout the length of the original BE segment. Despite this protocol, some recurrences are not visibly identified. We aimed to identify the anatomic location and histology of recurrences after successful EET with the goal of developing a more efficient and evidence-based surveillance biopsy protocol. Methods: We performed an analysis of a large multicenter database of 443 patients who underwent EET and achieved complete eradication of intestinal metaplasia (CE-IM) from 2005 to 2015. The endoscopic location of recurrence relative to the squamocolumnar junction (SCJ), visible recurrence identified during surveillance endoscopy, and time to recurrence after CE-IM were assessed. Results: Fifty patients with BE recurrence were studied in the final analysis. Seventeen patients (34%) had nonvisible recurrences. In this group, biopsy specimens demonstrating recurrence were taken from within 2 cm of the SCJ in 16 of these 17 patients (94%). Overall, 49 of 50 recurrences (98%) occurred either within 2 cm of the SCJ or at the site of a visible lesion. Late recurrences (>1 year) were more likely to be visible than early (<1 year) recurrences (P = .006). Conclusions: Recurrence after EET detected by random biopsy sampling is identified predominantly in the distal esophagus and occurs earlier than visible recurrences. As such, we suggest a modified biopsy protocol with targeted sampling of visible lesions followed by random biopsy sampling within 2 cm of the SCJ to optimize detection of recurrence after EET. (Clinical trial registration number: NCT02634645). abstract_id: PUBMED:25817897 Recurrent intestinal metaplasia after radiofrequency ablation for Barrett's esophagus: endoscopic findings and anatomic location. Background: Radiofrequency ablation (RFA) is a safe and effective treatment for Barrett's esophagus (BE) that results in high rates of complete eradication of intestinal metaplasia (CEIM). However, recurrence is common after CEIM, and surveillance endoscopy is recommended. Neither the anatomic location nor the endoscopic appearance of these recurrences is well-described. Objective: Describe the location of histologic specimens positive for recurrence after CEIM and the testing performance of endoscopic findings for the histopathologic detection of recurrence. Design: Retrospective cohort. Setting: Single referral center. Patients: A total of 198 patients with BE with at least 2 surveillance endoscopies after CEIM. Interventions: RFA, EMR, surveillance endoscopy. Main Outcome Measurements: The anatomic location and histologic grade of recurrence. Results: In a mean 3.0 years of follow-up, 32 (16.2%; 95% confidence interval [CI], 11.0%-22.0%) patients had recurrence of disease, 5 (2.5%; 95% CI, 0.3%-4.7%) of whom progressed beyond their worst before-treatment histology.
Recurrence was most common at or near the gastroesophageal junction (GEJ). Recurrence >1 cm proximal to the GEJ was always accompanied by endoscopic findings, and random biopsies in these areas detected no additional cases. The sensitivity of any esophageal sign under high-definition white light or narrow-band imaging for recurrence was 59.4% (42.4%, 76.4%), and the specificity was 80.6% (77.2%, 84.0%). Limitations: Single-center study. Conclusion: Recurrent intestinal metaplasia often is not visible to the endoscopist and is most common near the GEJ. Random biopsies >1 cm above the GEJ had no yield for recurrence. In addition to biopsy of prior EMR sites and of suspicious lesions, random biopsies oversampling the GEJ are recommended. abstract_id: PUBMED:25869737 Detection of precancerous gastric lesions and gastric cancer through exhaled breath. Objectives: Timely detection of gastric cancer (GC) and the related precancerous lesions could provide a tool for decreasing both cancer mortality and incidence. Design: 968 breath samples were collected from 484 patients (including 99 with GC) for two different analyses. The first sample was analysed by gas chromatography linked to mass spectrometry (GCMS) while applying a t test with multiple corrections (p value<0.017); the second by cross-reactive nanoarrays combined with pattern recognition. For the latter, 70% of the samples were randomly selected and used in the training set while the remaining 30% constituted the validation set. The operative link on gastric intestinal metaplasia (OLGIM) assessment staging system was used to stratify the presence/absence and risk level of precancerous lesions. Patients with OLGIM stages III-IV were considered to be at high risk. Results: According to the GCMS results, patients with cancer as well as those at high risk had distinctive breath-print compositions. Eight significant volatile organic compounds (p value<0.017) were detected in exhaled breath in the different comparisons. The nanoarray analysis made it possible to discriminate between the patients with GC and the control group (OLGIM 0-IV) with 73% sensitivity, 98% specificity and 92% accuracy. The classification sensitivity, specificity, and accuracy between the subgroups were as follows: GC versus OLGIM 0-II: 97%, 84% and 87%; GC versus OLGIM III-IV: 93%, 80% and 90%; but OLGIM I-II versus OLGIM III-IV and dysplasia combined: 83%, 60% and 61%, respectively. Conclusions: Nanoarray analysis could provide the missing non-invasive screening tool for GC and related precancerous lesions as well as for surveillance of the latter. Trial Registration Number: Clinical Trials.gov number, NCT01420588 (3/11/2013). abstract_id: PUBMED:28725964 Toward early detection of Helicobacter pylori-associated gastric cancer. Background: Gastric cancer is typically diagnosed at a late stage, leading to poor prognoses. Helicobacter pylori is responsible for 70% of gastric cancers globally, and patients with this bacterial infection often present with early stages of the carcinogenic pathway such as inflammation or gastritis. Although many patients continue to progress to advanced-stage disease after antibacterial treatment, there are no follow-up screening protocols for patients with a history of H. pylori. Methods: Several biomarkers (Lgr5, CD133, CD44) become upregulated during gastric carcinogenesis.
A logistic regression model is developed using clinical data from 59 patients at different stages of the carcinogenic pathway to identify the likelihood of being at an advanced stage of disease for all combinations of age, sex, and marker positivity. Using these likelihood distributions and the observed rate of marker positivity increase, time to high likelihood (probability >0.8) of advanced disease for individual patients is predicted. Results: A strong correlation between marker positivity and disease stage was found for all three markers. Disease stage was accurately classified by the respective regression models for more than 86% of retrospective patients. Highly patient-specific predictions of time to onset of dysplasia were made, allowing the classification of 17 patients initially diagnosed with intestinal metaplasia into high-, intermediate-, or low-risk categories. Conclusions: We present an approach designed to integrate pathology, mathematics, and statistics for detection of the earliest precancerous, treatable lesion. Given the simplicity and robustness of the framework, such a technique has the potential to guide personalized screening schedules to minimize the risk of undetected malignant transformation. abstract_id: PUBMED:36753864 A novel detection technology for early gastric cancer based on Raman spectroscopy. Despite universal endoscopic screening, early detection of gastric cancer is challenging, leading researchers to seek novel detection approaches. Raman spectroscopy measurements, as a fingerprint of biochemical structure, enable accurate prediction of gastric lesions non-destructively. This study aimed to evaluate the diagnostic power of Raman spectroscopy in early gastric cancer (EGC), and to reveal dynamic biomolecular changes in vitro from normal tissue to EGC. To clarify the biochemical alterations in Correa's cascade, Raman spectra of human normal gastric mucosa, intestinal metaplasia, dysplasia, and adenocarcinoma were compared at tissue and cellular levels based on a self-developed data processing program. To identify EGC effectively, Raman spectroscopy was combined with multiple machine learning methods, including partial least-squares discriminant analysis (PLS-DA), support vector machine (SVM), and convolutional neural network (CNN) with leave-one-out (LOO) cross validation. A total of 450 Raman spectra were investigated in this study. The upregulation of the νsym(O-P-O) backbone (p < 0.001) was identified as a favorable factor for the diagnosis of EGC; the area under the ROC curve (AUC) was up to 0.918. In addition, higher levels of lactic acid (p < 0.001), lipids (p < 0.001), phenylalanine (p = 0.002), and carotenoids (p < 0.001) were detected in EGC. For multivariate machine learning diagnosis of EGC based on Raman spectroscopy, the sensitivity, specificity, accuracy, and AUC were 91.0%, 100%, 94.8%, and 95.8% for SVM, and 84.8%, 92.0%, 88.8%, and 95.5% for CNN, respectively. Raman spectroscopy can be used as a powerful tool for detecting EGC while elucidating biomolecular dynamics in tumorigenesis. (Chictr.org.cn, ChiCTR2200060720). Answer: Yes, the detection of cardia intestinal metaplasia (CIM) is influenced by both the number and location of biopsies. A study found that the prevalence of CIM was similar (18%) when biopsies were taken from either the upper or lower cardia. However, when considering biopsies from both locations, the prevalence increased to 29%. This suggests that the number of biopsies obtained can affect the detection rate of CIM.
Additionally, the location of the biopsies is important, as CIM was found in both the upper and lower cardia regions. The study also highlighted that using additional testing, such as serology, can detect more patients who are Helicobacter pylori positive, which is relevant if considering the association of CIM with H. pylori infection (PUBMED:15571591). Furthermore, another study on the efficacy of endoscopic biopsies for detecting intestinal metaplasia emphasized that the number of biopsies impacts the accuracy of detection. It was found that increasing the number of biopsies from 1 to 6 improved the rate of correct estimations of intestinal metaplasia extent. Specifically, a set of three biopsies (2 antral + 1 angular) allowed for correct detection of intestinal metaplasia extent in 90% of cases (PUBMED:18406219). In the context of Barrett's esophagus, a study on the recurrence of intestinal metaplasia after endoscopic eradication therapy found that recurrences detected by random biopsy sampling predominantly occurred in the distal esophagus, suggesting that biopsy protocols should focus on this area to optimize detection of recurrence (PUBMED:31004598). Overall, these findings indicate that both the number and location of biopsies are critical factors in the detection of cardia intestinal metaplasia and related conditions. Future endoscopic studies should specify the biopsy location, the number of biopsies obtained, and the tests used to diagnose H. pylori to improve the detection and management of CIM (PUBMED:15571591).
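The prevalence figures quoted in this answer follow directly from the patient counts reported in PUBMED:15571591: of 65 patients, 7 had CIM only in the upper cardia, 5 in both locations, and 7 only in the lower cardia. The short sketch below reproduces that arithmetic; the variable names are ours.

```python
# Patient counts from PUBMED:15571591 (n = 65).
n = 65
upper_only, both_sites, lower_only = 7, 5, 7

upper = upper_only + both_sites                # 12 patients found by upper biopsies
lower = lower_only + both_sites                # 12 patients found by lower biopsies
either = upper_only + both_sites + lower_only  # 19 patients found overall

print(f"upper cardia (4 biopsies): {upper / n:.0%}")   # ~18%
print(f"lower cardia (4 biopsies): {lower / n:.0%}")   # ~18%
print(f"both sites   (8 biopsies): {either / n:.0%}")  # ~29%
```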
Instruction: Does sports club participation contribute to health-related quality of life? Abstracts: abstract_id: PUBMED:30987637 Sports participation and health-related quality of life in children: results of a cross-sectional study. Background: In children, physical activity has been shown to be associated with health-related quality of life (HRQoL). This study further explores this association for specific characteristics of sports participation, namely membership of a sports club, frequency of sports participation, performing individual versus team sports, performing indoor versus outdoor sports, while differentiating between specific dimensions in the physical, psychological and social domain of HRQoL. Methods: Cross-sectional data were collected from Dutch primary school children aged 10 to 12 years. They completed the Movement and Sports Monitor Questionnaire Youth aged 8 to 12 years (MSMQ) and the KIDSCREEN-52, an HRQoL questionnaire for children and adolescents. The data were examined using linear multilevel analyses because of the clustering of children in schools. Results: The questionnaires were completed by 1876 children (response rate 81.3%). Membership of a sports club, moderate or high frequency of sports participation, and performing outdoor sports were all significantly associated with better HRQoL. These associations were largely found in the physical domain of HRQoL, to a lesser degree in the social domain, and to a limited extent in the psychological domain. Conclusion: The association between sports participation and HRQoL in children depends on both characteristics of sports participation and the domain of life that is concerned. These differences offer starting points for developing tailor-made sports programs for children. abstract_id: PUBMED:19996991 Does sports club participation contribute to health-related quality of life? Unlabelled: Given the social nature of participation in sport, we hypothesized that club sports participants would have greater well-being and quality of life than participants in other forms of physical activity (PA). Purpose: The purpose of this study was to examine health-related quality of life and life satisfaction in women who participate in three contrasting forms of PA: club sport, gym activities, and walking. Methods: This was a cross-sectional study of the relationship between type of PA setting and measures of health-related quality of life (Short-Form Health Survey [SF-36]) and life satisfaction in 818 women living in rural Victoria, Australia, in 2007. Data were also compared with those from a normative sample of 2345 women. Results: After adjustment for potential confounders (age, education, marital status, children aged <16 yr, perceived financial stress, and level of recreational PA), four of the eight SF-36 subscales, the SF-36 mental health component summary score, and life satisfaction were significantly higher in the club sport group than in the other groups. Conclusion: Although cross-sectional research cannot establish causal links, the results suggest that participation in club sport may enhance the health benefits of PA. abstract_id: PUBMED:31161332 Sports participation and health-related quality of life: a longitudinal observational study in children. Purpose: In this study, longitudinal associations between sports participation and health-related quality of life (HRQoL) were explored.
Sports participation was operationalized as membership of a sports club, frequency of sports participation, performing individual versus team sports and performing indoor versus outdoor sports. The concept of HRQoL referred to the self-perceived enjoyment and satisfaction with one's personal health situation. Methods: Data from 618 fourth-grade primary school children were included at baseline; 10-13 months later, 417 children (response rate 67.5%) were retained. At both time points, children reported on sports participation (Move and Sports Monitor Questionnaire-youth aged 8-12 years) and health-related quality of life (KIDSCREEN-52). Because of the clustering of children in schools, data were analysed using linear mixed models. Analyses were adjusted for sex, age, BMI, household composition, SES and frequency of sports participation. Results: The questionnaires were fully completed by 417 children. High sports-active children showed better scores on almost all dimensions of HRQoL than moderate [difference (B) = - 1.82 (p = 0.05) to - 1.51 (p = 0.05)] or low sports-active children [difference (B) = - 3.67 (p < 0.001) to - 1.95 (p = 0.03)] and non-sports club members [difference (B) = - 5.58 (p < 0.001) to - 2.65 (p = 0.02)]. Unlike frequency, the other examined characteristics of sports participation were only to a limited extent longitudinally associated with HRQoL. Conclusion: As frequency is more relevant than the form of sports participation, children should be encouraged to perform any kind of sports activity on a very regular basis. abstract_id: PUBMED:28673131 Health literacy and participation in sports club activities among adolescents. Aims: The aim of this research was to compare the levels of perceived health literacy among adolescents who do or do not participate in sports club activities. Organized sport club activities reach a high proportion of adolescents, and have the potential to contribute to the development of their health literacy. Methods: The cross-sectional data on health literacy among school children in Finland (aged 13 and 15, n=3852) were measured, as a part of the Health Behaviour in School-Aged Children (HBSC) study, using the Health Literacy for School-aged Children (HLSAC) instrument. Sports club participation and its association with health literacy were examined in relation to age, gender, family affluence, school achievement, and physical activity. The statistical analyses included cross-tabulation and multilevel mixed-effects logistic regression. Results: Perceived health literacy was higher among adolescents who participated in sports club activities. This conclusion was valid for boys and girls, for both age groups, among those who were physically active 6-7 days a week, had at least moderate school achievement, and those who belonged to middle- or high-affluence families. From the health literacy perspective, participation in sports club activities was especially beneficial for those having a low or moderate school achievement level. Conclusions: The sports club setting may work towards equalizing health literacy differences related to school achievement. However, the clubs should ensure that access is available to as many adolescents as possible; by this means they may spread beneficial influences, supporting the development of health literacy among broader population groups.
The aim of the study was to examine whether sports participation (SP), engagement in competitive sports (CS), and the type of sport undertaken at the age of 12 are associated with the physical and mental components of health-related quality of life (HRQoL) in young adulthood. The data were collected using questionnaires prior to a compulsory military refresher training course in Finland. The sample consisted of 784 men (mean age 26 years). HRQoL was measured with RAND 36 and childhood SP with a series of questions. Data were analyzed with logistic regression. Higher frequency of SP; participation in district-level CS; performing team, endurance, or extreme sports; and playing yard games in childhood were, after adjustment, all associated with better HRQoL in early adulthood. The association was mainly found with the mental component, and to a lesser extent with the physical component, of HRQoL. Team (OR 1.43, CI 1.00-2.06) and extreme sports (OR 1.77, CI 1.19-2.63) were associated with better mental HRQoL, while playing yard games (OR 0.62, CI 0.44-0.89) reduced the likelihood of having low physical HRQoL. SP in childhood, whether in the form of team or individual sports or as informal physical activity such as playing yard games, is associated with HRQoL in young adulthood. abstract_id: PUBMED:34285893 Self-esteem and social development according to participation in school sports club. The purpose of this study is to explore the meaning of the middle school sports club and to understand the impact on the self-esteem and social development of middle school students participating in the school sports club. To achieve the purpose of this study, a questionnaire survey was conducted on 450 students at a selected middle school. Of the 420 collected questionnaires, 399 questionnaires were used as a valid sample. As a result of examining seven areas of self-esteem, self-esteem in five areas excluding domestic ego and personality ego was highest in the group that participated in the league. As a result of examining the difference in social development according to the type of participation in school sports clubs, the league participation group was higher in all five areas. As a result of examining the difference in self-esteem according to the period of participation in school sports clubs, the group participating in the league scored higher in six areas excluding physical ability; however, the differences were not statistically significant. As for the factor related to physical ability, the group with the longest participation period of 13 months or longer was the highest, and this difference was statistically significant. As a result of examining the difference in social development according to the period of participation in school sports clubs, the group under 6 months was the highest in four areas other than physical activity. Physical activity was highest in the group over 13 months.
Although evidence on the impact of recreational physical activity on quality of life in this population is in its infancy, regular recreational and sports activity participation has been shown to have a positive association with improvements in quality of life, life satisfaction, community reintegration, mood, and employment in those with disabilities. Facilitators of participating in adaptive sports include a desire to improve social support, physical fitness, health, and fun. Unfortunately, those with disabilities face numerous barriers to participating in adaptive sports, including accessibility, transportation, awareness, finances, and physical and cognitive impairments. Further studies are needed to investigate facilitators and barriers to participating in adaptive sports to capitalize on the physical and psychosocial benefits of regular recreational activity. The aim of this article is to review the available literature on the effects of adaptive sports participation on quality of life. abstract_id: PUBMED:29999480 Does sports club participation contribute to physical activity among children and adolescents? A comparison across six European countries. Aims: Insufficient physical activity (PA) is one of the largest public health challenges of our time and requires a multisectoral public-health response. PA recommendations state that all children and adolescents should accumulate at least 60 minutes of moderate-to-vigorous PA (MVPA) daily and carry out vigorous PA (VPA) three times weekly. While participation in sports club activities is known to enhance the probability of reaching the recommended overall PA level, less is known about the contribution of sports club participation to VPA, and few cross-national comparisons have been carried out. The purpose of this paper is to study whether participation in sports club activities is associated with meeting the overall PA and VPA recommendations among children and adolescents across six European countries, namely Belgium (Flanders), Czech Republic, Finland, France, Ireland and Sweden. Methods: Analyses were carried out on existing self-reported national data sets using descriptive statistics and logistic regression. Results: Results indicate that approximately two-thirds of children and adolescents take part in sports club activities in the given countries. Sports club participants were more likely to meet the overall PA recommendations (OR 2.4-6.4) and the VPA recommendation (OR 2.8-5.0) than non-participants. Conclusions: The extent to which overall PA and/or VPA is gained through sports club participation versus other settings needs to be further studied. Nonetheless, it can be argued that sports clubs have an important position in PA promotion for younger populations. abstract_id: PUBMED:30443519 A relationship between personality and empathy in teenagers' school sports club participation. This study aims to identify the relationship between students' participation in sports club activities, personality, and empathy, based on findings from personality and empathy research, and to understand their interrelational characteristics. This study investigated a total of 408 individuals (male, 58.3%; female, 41.7%) who have continuously participated in sports club activity in middle schools located in Seoul and Gyeonggi province, Korea. To look at the type of event that they selected, 127 individuals (31.1%) reported that "School sports club event is already defined in school."
and 281 (68.9%) reported that "We can select among several school sports club events." By event, 79 individuals participated in basketball, 47 in line dancing, 96 in badminton, 2 in yoga, 31 in football, 129 in ping-pong, and 20 in floorball. In conclusion, adolescence is a period in which adolescents develop physically and mentally in terms of personality, morality, creativity, and sociality. During this period, learning and experience are very important for later life. Therefore, teenagers' school sports club activity can contribute to fostering a positive and desirable personality and to stabilizing emotional empathy among student participants. This study identified the relationship between personality and empathy in adolescents, which had not been studied before. As presented in previous research, a positive personality can be cultivated through teenagers' communal activities in school life. The link connecting this with empathy is considered to be that very communal activity: sports. abstract_id: PUBMED:24657117 Sports participation and parent-reported health-related quality of life in children: longitudinal associations. Objective: To investigate the longitudinal association between sports participation and parent-reported health-related quality of life (HRQOL) in children. Study Design: Cohort study that used data drawn from the Longitudinal Study of Australian Children in waves 3 (2008) and 4 (2010). Participants were a nationally representative sample of 4042 Australian children ages 8.25 (SD = 0.44) years at baseline and followed up 24 months later. Results: After we adjusted for multiple covariates, children who continued to participate in sports between the ages of 8 and 10 years had greater parent-reported HRQOL at age 10 (Eta2 = .02) compared with children who did not participate in sports (P ≤ .001), children who commenced participation after 8 years of age (P = .004), and children who dropped out of sports before reaching 10 years of age (P = .04). Children who participated in both team and individual sports (P = .02) or team sports alone (P = .04) had greater HRQOL compared with children who participated in individual sports alone (Eta2 = .01). The benefits of sports participation were strongest for girls (P < .05; Eta2 = .003). Conclusions: Children's participation in developmentally appropriate team sports helps to protect HRQOL and should be encouraged at an early age and maintained for as long as possible.
In adolescents, sports club participation is associated with higher levels of perceived health literacy, which is an important component of HRQoL. This association is particularly beneficial for those with lower or moderate school achievement levels (PUBMED:28673131). Moreover, participation in school sports clubs has been linked to higher self-esteem and social development among middle school students (PUBMED:34285893). For women, participation in club sports is associated with higher scores on several dimensions of HRQoL and life satisfaction compared to those participating in gym activities or walking (PUBMED:19996991). Furthermore, sports club participation is associated with meeting overall physical activity recommendations and vigorous physical activity recommendations among children and adolescents across six European countries, which is an important factor for maintaining good health and HRQoL (PUBMED:29999480). Lastly, for individuals with physical disabilities, participation in adaptive sports is associated with improvements in quality of life, life satisfaction, community reintegration, mood, and employment (PUBMED:31046012). In summary, sports club participation is positively associated with HRQoL across various populations, contributing to physical, psychological, and social well-being.
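Several of the cited studies analysed HRQoL with multilevel (mixed-effects) models because children are clustered within schools (PUBMED:30987637, PUBMED:31161332, PUBMED:28673131). The sketch below shows what such a random-intercept model looks like in Python's statsmodels; the data frame and column names are hypothetical placeholders, not the studies' actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per child, with an HRQoL score, weekly sports
# frequency, an adjustment covariate, and a school identifier (the cluster).
df = pd.DataFrame({
    "hrqol":       [72.1, 65.3, 70.2, 80.0, 70.4, 74.9,
                    68.8, 75.2, 71.0, 66.5, 77.3, 73.8],
    "sports_freq": [3, 1, 2, 4, 2, 3, 1, 3, 2, 1, 4, 3],
    "age":         [10, 11, 10, 10, 12, 11, 11, 10, 12, 12, 11, 12],
    "school":      ["A", "A", "A", "B", "B", "B",
                    "C", "C", "C", "D", "D", "D"],
})

# A random intercept per school accounts for the clustering of children.
model = smf.mixedlm("hrqol ~ sports_freq + age", data=df, groups=df["school"])
result = model.fit()
print(result.summary())
```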
Instruction: Is there a relationship between the diameter of the inferior vena cava and hemodynamic parameters in critically ill patients? Abstracts: abstract_id: PUBMED:29527693 Relationship between inferior vena cava diameter ratio and central venous pressure. Purpose: To explore the relationship between the shape of the inferior vena cava (IVC) lumen and central venous pressure (CVP). Methods: In 60 patients undergoing mechanical ventilation and CVP monitoring in the Intensive Care Unit of Peking Union Medical College Hospital from July to October 2016, we measured with B-mode ultrasonography the transverse maximum (MXD) and minimum diameter (MID) of the IVC at end expiration, and calculated the diameter ratio (DR) as MXD/MID. Patients were divided into three groups according to CVP: low (CVP < 8 mm Hg), intermediate (8 mm Hg ≤ CVP ≤ 10 mm Hg), and high (CVP > 10 mm Hg). Results: MXD was 2.32 ± 0.41 cm, MID was 1.41 ± 0.40 cm, and DR was 1.76 ± 0.49. CVP was 9.27 ± 2.99 mm Hg. DR correlated with CVP (r = -0.527, P < .001). The low-CVP group had greater dispersion of DR values, with a large variety in IVC shape (elliptical, irregular, teardrop-shaped, partially collapsed…). The area under the ROC curve for predicting CVP with DR, with a CVP threshold of 8 mm Hg, was 0.835 (95% CI, 0.726-0.945; P < .05). With a DR cutoff value of 1.76, sensitivity was 0.765 and specificity was 0.781. Conclusions: DR above 1.7 is predictive of CVP < 8 mm Hg. abstract_id: PUBMED:26289523 Is there a relationship between the diameter of the inferior vena cava and hemodynamic parameters in critically ill patients? Introduction: The early detection of critically ill patients together with the rapid initiation of effective treatment in emergency departments (ED) increases survival rates. Aim: This study investigated whether a correlation exists between haemodynamic parameters of critically ill patients and the diameter of the inferior vena cava (IVC). Materials And Methods: A cross-sectional study was performed that included patients aged ≥18 years with an unstable haemodynamic and/or respiratory status who were referred to the ED for non-traumatic issues. IVC diameters were measured by ultrasound (US) and then central venous pressures (CVP) were measured. Anteroposterior (AP) and mediolateral (ML) diameters of the IVC, both in the inspirium (IAP, IML) and expirium (EAP, EML), were measured by US. Results: 102 patients were evaluated with a median age of 59. The relationship between the diameters of the IVC and CVP was evaluated, and a significant correlation was found for IAP and EAP according to CVP values (p<0.001). ROC analyses were performed; a significant relationship was found between the EAP diameter and haemoglobin (Hmg), haematocrit (Hct), and central venous oxygen saturation (ScvO2), and a significant correlation was also detected between the IAP diameter and white blood cell count (WBC). Discussion: We detected a significant correlation between the CVP and the IVC diameter in our study, consistent with recent studies; in addition, significant correlations were found between the diameter of the IVC and CVP values as well as between the EAP diameter and Hmg, Hct, and ScvO2 levels. Conclusion: Measurement of IVC diameters, especially EAP, may be useful in the monitoring of critically ill patients in the ED. abstract_id: PUBMED:30213669 Continuous Inferior Vena Cava Diameter Tracking through an Iterative Kanade-Lucas-Tomasi-Based Algorithm.
Ultrasound assessment of the respiratory-induced change in size of the inferior vena cava is a useful technique in the evaluation and management of critically ill patients. We have developed an automated technique based on the Kanade-Lucas-Tomasi feature tracker and pyramidal segmentation to continuously track the diameter of the inferior vena cava during ultrasound. To test the accuracy of this automated process, the inferior vena cava of 47 spontaneously breathing patients was measured by trained ultrasound physicians and compared against the results obtained via the automated tracking. Good agreement between the techniques was found, with intra-class correlation coefficients for maximum vessel diameter, minimum diameter and caval index of 0.897, 0.967 and 0.975, respectively. More than 95% of the differences between physician and automated measurements agreed to within 10% of the inferior vena cava collapse. Furthermore, a phenomenon of cardiac collapsibility index variability was observed and reported. The accuracy and precision of this algorithmic technique provide a foundation for future automated measures for critical care ultrasound. abstract_id: PUBMED:26178491 Does Inferior Vena Cava variability predict fluid responsiveness in critically ill patients with atrial fibrillation? Objective: To assess the reliability of the variability of the inferior vena cava diameter as an indicator of fluid responsiveness in critically ill patients with atrial fibrillation. Methods: Adult critically ill patients with atrial fibrillation requiring mechanical ventilation were enrolled. The variability of the inferior vena cava diameter (V(IVC)) was measured during a breath and then calculated as the maximum diameter (Dmax) minus the minimum diameter (Dmin), divided by Dmin. All the hemodynamic parameters were collected at baseline and after a manoeuvre of passive leg raising (PLR). A 15% increase in VTI after PLR was the criterion to identify the patients with or without fluid responsiveness. A ROC curve was used to assess the sensitivity and specificity of V(IVC) as an indicator of fluid responsiveness for critically ill patients with atrial fibrillation. Results: There were 34 patients enrolled in our study, of whom 14 responded to PLR and the remaining 20 did not. In patients with fluid responsiveness, V(IVC) was significantly higher than in patients without fluid responsiveness. The sensitivity and specificity of V(IVC) ≥ 16% to predict fluid responsiveness in patients with atrial fibrillation were 57.1% and 89.9%, respectively. Conclusion: V(IVC) proved to be a reliable predictor of fluid responsiveness in critically ill patients with atrial fibrillation. abstract_id: PUBMED:33365158 Ultrasonographic inferior vena cava collapsibility and distensibility indices for detecting the volume status of critically ill pediatric patients. Especially in recent years, the use of point-of-care ultrasound by non-radiologist clinicians has become widespread. Point-of-care ultrasound provides rapid responses to the problems of critically ill patients at the bedside. This technique has many important advantages, including being non-invasive, cheap, repeatable, painless, and radiation-free. Numerous studies have revealed the most important clinical benefits of point-of-care ultrasound use by pediatric intensive care providers. The inferior vena cava is a vessel that is highly sensitive to fluid changes.
The inferior vena cava diameter can be measured by point-of-care ultrasound, and represents a critical parameter in assessing the patient's fluid status. The inferior vena cava collapsibility index (in spontaneously breathing patients) and the inferior vena cava distensibility index (in mechanically ventilated patients) are calculated with established formulas using the maximum and minimum diameters of the inferior vena cava. The indices are important guides for pediatric intensive care providers for managing their patients' fluid treatment. Although some authors claim it is not a reliable method, the technique is increasingly coming to the fore in intensive care units and is gaining popularity among pediatric intensive care specialists. Here, we aim to give detailed information on the ultrasonographic inferior vena cava diameter measurement methods, and calculations of the inferior vena cava collapsibility index and inferior vena cava distensibility index, and emphasize the importance of a noninvasive, bedside, and objective method of detecting the volume status of critically ill patients for pediatric intensive care specialists according to the published literature. abstract_id: PUBMED:32757463 Subcostal versus transhepatic view to assess the inferior vena cava in critically ill patients. Background: Evaluation of the inferior vena cava (IVC) is not always possible through the subcostal (SC) window. Methods: Inferior vena cava diameters measured by transhepatic (TH) and SC views were compared by Bland and Altman analysis. Results: 131 patients were enrolled, including 88 (67%) under mechanical ventilation. The echogenicity was statistically poorer through the TH view in comparison with the SC view (P = .002). The correlation between the SC and TH views was good and better for respiratory variation than for end-expiratory or end-inspiratory diameter measurements (r = 0.86). Despite low bias, the limits of agreement were wide (-7.5 and 7.7 mm for end-expiratory diameter, -8.7 and 8.5 mm for end-inspiratory diameter, and -5.3 and 5.8 mm for respiratory variation). Complementary analysis showed that the concordance between the SC and the TH views was better when the IVC was distended. However, the limits of agreement remained broad. Conclusions: Although feasible in almost all patients, the TH view does not provide better echogenicity in comparison with the SC view. Despite a good correlation with the SC view and a low bias, the limits of agreement were wide, especially when the IVC has an ellipsoidal shape, suggesting caution in the interpretation of data obtained by the TH view. abstract_id: PUBMED:34092856 Comparison of Superior Vena Cava and Inferior Vena Cava Diameter Changes by Echocardiography in Predicting Fluid Responsiveness in Mechanically Ventilated Patients. Context: Resuscitation of critically ill patients requires an accurate assessment of the patient's intravascular volume status. Passive leg raise causes autotransfusion of fluid to the thoracic cavity. Aims: This study aims to assess and compare the efficacy of superior vena cava (SVC) and inferior vena cava (IVC) diameter changes in response to passive leg raise (PLR) in predicting fluid responsiveness in mechanically ventilated hemodynamically unstable critically ill patients. Methods: We enrolled 30 patients.
Predictive indices were obtained by transesophageal and transthoracic echocardiography and were calculated as follows: (Dmax - Dmin)/Dmax for the collapsibility index of the SVC (cSVC) and (Dmax - Dmin)/Dmin for the distensibility index of the IVC (dIVC), where Dmax and Dmin are the maximal and minimal diameters of the SVC and IVC. Measurements were performed at baseline and 1 min after PLR. Patients were divided into responders (R) (increase in cardiac index (CI) ≥10%) and nonresponders (NR) (increase in CI <10% or no increase in CI). Results: Among those included, 24 (80%) patients were R and six were NR. There was a significant rise in mean arterial pressure, decrease in heart rate, and decrease in mean cSVC from baseline to 1 min after PLR among responders. The best threshold values for discriminating R from NR were 35% for cSVC, with sensitivity and specificity both 100%, and 25% for dIVC, with 54% sensitivity and 86.7% specificity. The areas under the receiver operating characteristic curves for cSVC and dIVC regarding the assessment of fluid responsiveness were 1.00 and 0.66, respectively. Conclusions: cSVC had better sensitivity and specificity than dIVC in predicting fluid responsiveness. abstract_id: PUBMED:32078174 Point-of-Care Ultrasound Assessment of the Inferior Vena Cava in Mechanically Ventilated Critically Ill Children. Objectives: The objective of this study was to compare the ultrasound-measured inferior vena cava distensibility index (IVCdi), inferior vena cava distensibility variability (IVCdv), and inferior vena cava-to-aorta ratio (IVC/Ao) to other common methods to assess fluid status in mechanically ventilated pediatric critically ill patients. These methods include central venous pressure (CVP), percent fluid overload by weight (%FOw), and percent fluid overload by volume (%FOv). Methods: This was a prospective observational study of a convenience sample of 50 mechanically ventilated pediatric patients. Ultrasound measurements of the inferior vena cava and aorta were obtained, and the IVCdi, IVCdv, and IVC/Ao were calculated and compared to CVP, %FOw, and %FOv. Results: The median %FOw was 5%, and the median %FOv was 10%. The mean CVP ± SD was 8.6 ± 4 mm Hg. The CVP had no significant correlation with %FOw or %FOv. There was no significant correlation of the IVCdi with CVP (r = -0.145; P = .325) or %FOv (r = 0.119; P = .420); however, the IVCdi had a significant correlation with %FOw (P = .012). There was also no significant relationship of the IVCdv with CVP (r = -0.135; P = .36) or %FOv (r = 0.128; P = .385); however, there was a significant correlation between the IVCdv and %FOw (P = .012). There was no relationship between the IVC/Ao and any other measures of fluid status. Conclusions: In this cohort of mechanically ventilated pediatric intensive care unit patients, many commonly used markers of fluid status showed weak correlations with each other. The IVCdi and IVCdv significantly correlated with %FOw and may have potential as markers for fluid overload in this patient population. abstract_id: PUBMED:34689482 Inferior Vena Cava Collapsibility Index and Central Venous Pressure for Fluid Assessment in the Critically Ill Patient. Objective: To determine the correlation of sonographic evaluation of the inferior vena cava diameter and its collapsibility index with central venous pressure in both spontaneously breathing and mechanically ventilated patients in the surgical ICU. Study Design: Cross-sectional study.
Place And Duration Of Study: Surgical ICU, Lahore General Hospital from November 2020 to May 2021. Methodology: All patients above 18 years of age who had a central venous catheter placed were included in the study. Patients with raised abdominal pressure, pregnancy, morbid obesity, heart diseases, and those unable to lie in the supine position were excluded. For inferior vena cava (IVC) measurement, the IVC was visualised using the curvilinear probe of an ultrasound machine. Minimum and maximum diameters of the inferior vena cava were calculated in every respiratory phase. The IVC collapsibility index was expressed as a percentage. Central venous pressure (CVP) was recorded soon after IVC measurement, using a manometer. Results: The total number of patients was 126. A significant correlation was seen between IVC measurements (inferior vena cava diameters and the collapsibility index) and CVP (p<0.001), but the regression coefficients were lower in patients who were mechanically ventilated (r=0.779 for IVC maximum diameter and -0.725 for collapsibility index) than in patients who were breathing spontaneously (r=0.850 for IVC maximum diameter and -0.899 for collapsibility index). Conclusion: Evaluation of the IVC diameter and its collapsibility index is an easy and non-invasive method to evaluate the intravascular volume status of critically ill patients. Its use is more helpful in patients who are spontaneously breathing than in those who are mechanically ventilated. Key Words: Central venous pressure, Fluid status, Inferior vena cava diameter. abstract_id: PUBMED:18385005 Inferior vena cava diameter correlates with invasive hemodynamic measures in mechanically ventilated intensive care unit patients with sepsis. Early optimization of fluid status is of central importance in the treatment of critically ill patients. This study aims to investigate whether inferior vena cava (IVC) diameters correlate with invasively assessed hemodynamic parameters and whether this approach may thus contribute to an early, non-invasive evaluation of fluid status. Thirty mechanically ventilated patients with severe sepsis or septic shock (age 60 ± 15 years; APACHE-II score 31 ± 8; 18 male) were included. IVC diameters were measured throughout the respiratory cycle using transabdominal ultrasonography. Consecutively, volume-based hemodynamic parameters were determined using the single-pass thermal transpulmonary dilution technique. This was a prospective study in a tertiary care academic center with a 24-bed medical intensive care unit (ICU) and a 14-bed anesthesiological ICU. We found a statistically significant correlation of both inspiratory and expiratory IVC diameter with central venous pressure (p = 0.004 and p = 0.001, respectively), extravascular lung water index (p = 0.001, p < 0.001, respectively), intrathoracic blood volume index (p = 0.026, p = 0.05, respectively), the intrathoracic thermal volume (both p < 0.001), and the PaO(2)/FiO(2) oxygenation index (p = 0.007 and p = 0.008, respectively). In this study, IVC diameters were found to correlate with central venous pressure, extravascular lung water index, intrathoracic blood volume index, the intrathoracic thermal volume, and the PaO(2)/FiO(2) oxygenation index. Therefore, sonographic determination of IVC diameter seems useful in the early assessment of fluid status in mechanically ventilated septic patients.
At this point in time, however, IVC sonography should be used only in addition to other measures for the assessment of volume status in mechanically ventilated septic patients. Answer: Yes, there is a relationship between the diameter of the inferior vena cava (IVC) and hemodynamic parameters in critically ill patients. Several studies have demonstrated significant correlations between IVC diameters and central venous pressure (CVP), which is a key hemodynamic parameter. For instance, one study found a significant correlation between the diameters of the IVC in both inspiration and expiration according to CVP values (PUBMED:26289523). Another study reported that the inferior vena cava diameter ratio (DR), calculated as the maximum diameter divided by the minimum diameter, correlated with CVP (r = -0.527, P < .001), and a DR above 1.7 was predictive of CVP < 8 mm Hg (PUBMED:29527693). Moreover, the collapsibility index of the IVC, which is a measure of the change in IVC diameter with respiration, has been shown to correlate with fluid responsiveness in mechanically ventilated patients (PUBMED:34689482). In patients with atrial fibrillation, the variability of the IVC diameter was found to be a reliable predictor of fluid responsiveness (PUBMED:26178491). Additionally, the distensibility index of the IVC, which is used in mechanically ventilated patients, has been compared to other methods of assessing fluid status, such as CVP, and has shown potential as a marker for fluid overload (PUBMED:32078174). Furthermore, studies have also explored the use of automated techniques to continuously track the diameter of the IVC, which could provide a foundation for future automated measures for critical care ultrasound (PUBMED:30213669). The use of point-of-care ultrasound to measure IVC diameters and calculate indices like the collapsibility and distensibility indices is emphasized as an important noninvasive, bedside, and objective method for detecting the volume status of critically ill patients (PUBMED:33365158). In summary, the diameter of the IVC, as well as its variability and collapsibility, have been shown to correlate with various hemodynamic parameters in critically ill patients, making it a useful tool for assessing fluid status and guiding fluid management in this patient population.
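All of the indices discussed in this answer are simple functions of the maximum and minimum vessel diameters. The sketch below implements the formulas as they are defined in the abstracts (diameter ratio, PUBMED:29527693; collapsibility index, PUBMED:34092856 and PUBMED:34689482; distensibility index, PUBMED:26178491 and PUBMED:34092856); the function names are ours, and the cutoffs in the comments simply restate the cited study findings.

```python
def diameter_ratio(d_max: float, d_min: float) -> float:
    """Transverse diameter ratio DR = MXD / MID (PUBMED:29527693);
    DR above ~1.7 was predictive of CVP < 8 mm Hg in that study."""
    return d_max / d_min

def collapsibility_index(d_max: float, d_min: float) -> float:
    """(Dmax - Dmin) / Dmax, used for the IVC in spontaneously breathing
    patients and for the SVC (cSVC), where a 35% cutoff discriminated
    fluid responders (PUBMED:34092856)."""
    return (d_max - d_min) / d_max

def distensibility_index(d_max: float, d_min: float) -> float:
    """(Dmax - Dmin) / Dmin, used in mechanically ventilated patients
    (dIVC); a 16% cutoff predicted fluid responsiveness in atrial
    fibrillation (PUBMED:26178491)."""
    return (d_max - d_min) / d_min

# Example with the cohort mean diameters (cm) from PUBMED:29527693; note that
# the study's mean DR (1.76) is the mean of per-patient ratios, not this ratio.
d_max, d_min = 2.32, 1.41
print(f"DR = {diameter_ratio(d_max, d_min):.2f}")
print(f"collapsibility = {collapsibility_index(d_max, d_min):.0%}")
print(f"distensibility = {distensibility_index(d_max, d_min):.0%}")
```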
Instruction: Are entry criteria for cataract surgery justified? Abstracts: abstract_id: PUBMED:25401738 Are entry criteria for cataract surgery justified? Purpose: The German Ophthalmological Society (GOS) recently proposed surgical entry criteria, i.e. 300 cataract surgeries. We herein correlate the surgical hands-on experience with the risk of posterior capsule ruptures in order to assess whether this number is appropriate. Methods: We identified all cataract operations that had been performed at the University Eye Hospital Freiburg since 1995. For each surgeon, we assigned a running number to his/her procedures in the order they had been performed. Thereafter, we excluded all combined procedures and the second eyes. We then selected the 5475 surgical reports between November 2008 and November 2012 for detailed review. We additionally classified each surgery as low vs. high a priori risk for posterior capsule ruptures. We fitted a multifactorial logistic regression model to assess the GOS recommendation of 300 surgeries under supervision. In the low-risk group, we additionally visualized the 'typical' learning curve by plotting the posterior capsule ruptures against the respective rank numbers. Results: The odds ratio for posterior capsule ruptures of 'learning-mode' (one of the respective surgeon's first 300 procedures) vs. the non-learning-mode was 3.8 (p<0.0001). By contrast, classification into the low-risk group lowered the risk of posterior capsule ruptures threefold (p<0.0001). According to the low-risk plot, the surgeons started with a complication rate of 4% and continuously improved towards 0.5% after 1500 operations. Thereafter, the rate increased again and stabilized around one percent. Conclusion: The learning curve with respect to posterior capsule ruptures is surprisingly flat. The GOS entry criterion of 300 cataract procedures is therefore most likely justified. Careful selection of low-risk patients for the training surgeons may help in reducing the rate of posterior capsule ruptures during training. abstract_id: PUBMED:36308153 Novel tunnel staining technique to reduce premature entry in manual small-incision cataract surgery. Mastering manual small-incision cataract surgery (MSICS) for beginner surgeons is difficult. In the initial days of residency or training, surgeons struggle to make a proper scleral tunnel and keratome entry. It commonly results in premature entry and iris prolapse. Most of the literature has shed light on premature entry during tunnel construction by a crescent blade, whereas a significant majority of iris prolapse happens due to improper keratome entry. This novel trypan blue dye-assisted tunnel staining (TBTS) technique helps in proper tunnel demarcation which can reduce the incidence of premature entry with a keratome. abstract_id: PUBMED:1995973 Medicare Peer Review Organization preprocedure review criteria. An analysis of criteria for three procedures. The Medicare Peer Review Organization (PRO) program includes preprocedure review using explicit criteria to assess the appropriateness of specific procedures. This study evaluates the variability in the PRO preprocedure criteria for the three procedures most frequently reviewed by PROs: carotid endarterectomy, cataract removal, and cardiac pacemaker implants. In August 1989, the PRO review criteria were received from the Health Care Financing Administration.
To provide a reference point for reviewing the PRO criteria, national practice guidelines for these three procedures were identified. Wide variability was demonstrated in the PRO procedure-specific carotid endarterectomy and cataract removal review criteria among PROs, and the criteria differed significantly from the identified practice guidelines. The criteria for cardiac pacemaker implants were somewhat less variable, and were based, to varying degrees, on practice guidelines developed by the American College of Cardiology (ACC). Greater attention is needed to improve the development of review criteria, including the use of relevant practice guidelines, to ensure that review criteria are optimal. abstract_id: PUBMED:11709243 Combined ocular anesthesia via a single entry. Although topical anesthesia techniques for cataract surgery are evolving, injections are unavoidable in some cases. Our technique of injecting local anesthetic agents for akinesia of the orbicularis oculi and the globe via a single entry near the lateral canthus decreases the pain and patient anxiety about the number of injections and avoids the complications of retrobulbar injection. abstract_id: PUBMED:22817610 Diagnostic criteria for Werner syndrome based on Japanese nationwide epidemiological survey. Aim: Werner syndrome (WS) is an autosomal recessive disorder of progeroid symptoms and signs. It is caused by mutations in the WRN gene, which encodes a RecQ DNA helicase. The aim of this study was to revise the diagnostic criteria for Japanese Werner syndrome. Methods: A nationwide epidemiological study was carried out from 2009 to 2011, involving 6921 surveys sent to hospitals with more than 200 beds to assess existing WS diagnostic criteria, as well as additional signs of high incidence on the basis of clinical experience with WS. Results: The existing diagnostic criteria were reviewed, and signs with >90% incidence were listed as cardinal signs. Several criteria were added, including genetic testing and calcification of the Achilles tendon, whereas criteria that are practically difficult to obtain, such as measurement of urinary hyaluronic acid, were omitted. Conclusion: The 26-year-old diagnostic criteria for WS were revised on the basis of the results of a nationwide epidemiological study. The proposed revised criteria will facilitate simpler, faster and more robust diagnosis of WS in the Japanese population. abstract_id: PUBMED:9006477 The New Zealand priority criteria project. Part 1: Overview. New Zealand restructured its health system in 1992 with the aim of achieving greater levels of assessment and accountability in the publicly funded health sector. A committee was established specifically to advise the minister of health on the kinds, and relative priorities, of health services that should be publicly funded. One of its projects has been to develop standardised sets of criteria to assess the extent of benefit expected from elective surgical procedures. These have been developed with the help of professional advisory groups using a modified Delphi technique to reach consensus. So far the committee has developed criteria for cataract surgery, coronary artery bypass grafting, hip and knee replacement, cholecystectomy, and tympanostomy tubes for otitis media with effusion. These criteria incorporate both clinical and social factors.
Use of priority criteria to ensure consistency and transparency regarding patients' priority for surgery is required for access to a dedicated NZ$130m (£57m; US$90m) pool of money, created to help eliminate surgical waiting lists and move to booking systems. The criteria will also be used in surgical outcome studies, currently in the planning phase. abstract_id: PUBMED:15953440 Comparative study of two sets of criteria for the diagnosis of Vogt-Koyanagi-Harada's disease. Purpose: The Vogt-Koyanagi-Harada's (VKH) Disease Committee established the "Revised diagnostic criteria for VKH disease" in 2001. The purpose of the present study was to assess the concordance between these criteria and the traditional Sugiura's diagnosis system. Design: Observational case series. Methods: The medical records of patients previously diagnosed with VKH disease based upon Sugiura's criteria at the Uveitis Survey Clinic of the Hokkaido University Hospital between 1991 and 2003 were retrospectively reevaluated using the VKH Committee's revised diagnostic criteria for VKH disease. Results: Sugiura's criteria were used to identify 169 patients with VKH disease. All patients were Japanese, and 95 cases (56%) were women. Mean age at the time of their first visit to our clinic was 44.7 ± 13.9 years (range, 9 to 74 years). Using the VKH Committee's new criteria, 91.7% of the previously diagnosed VKH patients were classified as having the disease. Of this group, 11.8% were classified as complete, 71% incomplete, and 8.9% as probable VKH disease. Conclusions: The VKH Committee's revised diagnostic criteria proved useful for VKH disease diagnosis, as the concordance rate for the two criteria was more than 90%. However, patients who had prior cataract surgery or who lacked signs of serous retinal detachment were not classified as having VKH disease because of exclusion by the VKH Committee's new criteria. abstract_id: PUBMED:18973949 Validity of newly developed appropriateness criteria for cataract surgery. Objective: The introduction of phacoemulsification in the early 1990s significantly changed the treatment of cataract. It is possible that appropriateness criteria for cataract extraction developed before the widespread adoption of phacoemulsification are no longer as accurate or valid as possible. The objective of this study was to validate newly developed explicit appropriateness criteria. Design: Prospective, observational cohort study. Participants: Consecutive patients with a diagnosis of cataract who were on waiting lists to undergo cataract extraction by phacoemulsification were recruited. Methods: Newly developed explicit appropriateness criteria for cataract extraction, following Research and Development methodology, were applied in this study conducted in 17 public teaching hospitals. Data necessary to classify the appropriateness of the intervention, including demographic and clinical data and that related to the surgery and complications, were recorded by previously trained ophthalmologists in all centers before the intervention and at the 6-week visit afterward. Main Outcome Measures: Subjects completed 2 questionnaires that measure health-related quality of life, the Short Form 36 and the Visual Function Index 14 (VF-14), before the procedure and 3 months afterward. Also, visual acuity was recorded by ophthalmologists previous to the intervention and at 6 weeks after. Results: Five thousand two hundred fifty-seven cases were included in the study.
At 3 months after cataract extraction, 4335 (82.5%) patients had responded to the questionnaires. Patients whose procedures were classified as necessary or appropriate by the new appropriateness criteria had greater improvements in visual acuity and VF-14 scores than those undergoing procedures classified as inappropriate. These differences seemed to be clinically relevant by measures such as the minimal clinically important difference and minimal detectable change. Complication rates were similar among all appropriateness categories, except for complications occurring in the peri-intervention period. Conclusions: These results suggest a direct relationship between the newly developed explicit appropriateness criteria for cataract extraction and outcomes, as measured by clinical evaluation and health-related quality-of-life instruments. The results support the use of these criteria for clinical evaluation or the development of clinical guidelines. abstract_id: PUBMED:16512893 Explicit criteria for prioritization of cataract surgery. Background: Consensus techniques have been used previously to create explicit criteria to prioritize cataract extraction; however, the appropriateness of the intervention was not included explicitly in previous studies. We developed a prioritization tool for cataract extraction according to the RAND method. Methods: Criteria were developed using a modified Delphi panel judgment process. A panel of 11 ophthalmologists was assembled. Ratings were analyzed regarding the level of agreement among panelists. We studied the effect of all variables on the final panel score using general linear and logistic regression models. Priority scoring systems were developed by means of optimal scaling and general linear models. The explicit criteria developed were summarized by means of regression tree analysis. Results: Eight variables were considered to create the indications. Of the 310 indications that the panel evaluated, 22.6% were considered high priority, 52.3% intermediate priority, and 25.2% low priority. Agreement was reached for 31.9% of the indications and disagreement for 0.3%. Logistic regression and general linear models showed that the preoperative visual acuity of the cataractous eye, visual function, and anticipated visual acuity postoperatively were the most influential variables. Alternative and simple scoring systems were obtained by optimal scaling and general linear models where the previous variables were also the most important. The decision tree also shows the importance of the previous variables and the appropriateness of the intervention. Conclusion: Our results showed acceptable validity as an evaluation and management tool for prioritizing cataract extraction. It also provides easy algorithms for use in clinical practice. abstract_id: PUBMED:19553616 Regional differences in store-operated Ca2+ entry in the epithelium of the intact human lens. Purpose: An elevated level of Ca(2+) is an important factor in cataract, yet precisely how Ca(2+) enters the lens is unknown. Lens epithelial cells contain a range of G-protein-coupled receptors and receptor tyrosine kinases that induce increases in intracellular Ca(2+). Receptor-associated Ca(2+) influx is, therefore, likely to be an important route for Ca(2+) influx to the lens. The authors investigated stimulated and passive Ca(2+) influx in in situ human lens epithelium.
Methods: Ca(2+) changes in equatorial (E) and central anterior (CA) epithelial cells were monitored with the use of a Ca(2+) indicator (Fluo4) and confocal microscopy. Gene expression was monitored by RT-PCR and immunoblotting. Results: Adenosine triphosphate (ATP) induced Ca(2+) responses that were smaller in CA than E. Ca(2+) store depletion, using ATP (100 microM) or thapsigargin (1 microM), revealed greater relative store capacity and Ca(2+) influx in E. Ca(2+) influx was blocked by La(3+) (0.5 microM) in both regions. Unstimulated Ca(2+) influx was greater in E than CA. Greater expression of Orai1 and STIM1 was detected in E than in CA. Conclusions: Greater Ca(2+) store capacity and Ca(2+) influx in E compared with CA reflects underlying differences in proliferation and differentiation between the regions. The relatively small resting Ca(2+) influx in CA epithelium suggests that store-operated Ca(2+) entry (SOCE) is the main route of Ca(2+) influx in these cells. Greater resting influx and SOCE in E cells suggests that these are a major route for Ca(2+) influx into the lens. Increased expression of Orai1 and STIM1 in E could account for the differences in Ca(2+) entry. Receptor activation will modulate Ca(2+) influx, and inappropriate activity may contribute to cortical cataract. Answer: Yes, entry criteria for cataract surgery are justified. A study correlating surgical hands-on experience with the risk of posterior capsule ruptures found that the odds ratio for posterior capsule ruptures was significantly higher for surgeons in 'learning-mode'—defined as one of the respective surgeon's first 300 procedures—compared to those who were not (PUBMED:25401738). This suggests that the German Ophthalmological Society's (GOS) recommendation of 300 surgeries under supervision is appropriate, as it likely helps to reduce the risk of complications during cataract surgery. The study also noted that careful selection of low-risk patients for training surgeons could further reduce the rate of posterior capsule ruptures during training (PUBMED:25401738). Moreover, the development of explicit criteria for prioritizing cataract surgery has been shown to have acceptable validity as an evaluation and management tool, providing algorithms that can be used in clinical practice to determine the appropriateness of the intervention (PUBMED:16512893). Additionally, newly developed appropriateness criteria for cataract surgery have been validated and shown to have a direct relationship with outcomes as measured by clinical evaluation and health-related quality-of-life instruments, supporting their use for clinical evaluation or the development of clinical guidelines (PUBMED:18973949). In summary, entry criteria for cataract surgery are justified as they are based on evidence that suggests a correlation between surgical experience and complication rates, and they contribute to the standardization and prioritization of surgical interventions, thereby potentially improving patient outcomes.
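To make the headline statistic of PUBMED:25401738 concrete, the sketch below shows the standard 2×2 odds-ratio arithmetic. The counts are invented for illustration (the paper reports only the fitted odds ratio of 3.8 from its multifactorial logistic regression), so the numbers are an assumption chosen to land near the reported value.

```python
def odds_ratio(events_exposed: int, n_exposed: int,
               events_unexposed: int, n_unexposed: int) -> float:
    """Unadjusted odds ratio for an exposure (here: 'learning-mode' surgery)."""
    odds_exposed = events_exposed / (n_exposed - events_exposed)
    odds_unexposed = events_unexposed / (n_unexposed - events_unexposed)
    return odds_exposed / odds_unexposed

# Hypothetical counts: 15 posterior capsule ruptures in 485 learning-mode
# cases vs 40 ruptures in 4935 non-learning-mode cases.
print(round(odds_ratio(15, 485, 40, 4935), 1))  # ~3.9, near the reported 3.8
```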
Instruction: Should we address the neopulmonic valve? Abstracts: abstract_id: PUBMED:23844626 Easy and reproducible technique to address tricuspid valve regurgitation with patch augmentation. We describe a patch augmentation of the anterior leaflet of the tricuspid valve to address tricuspid valve regurgitation due to leaflet retraction. The area of the anterior leaflet is measured using a tricuspid valve annuloplasty sizer; a glutaraldehyde-fixed autologous pericardial patch is trimmed with slight 5 mm oversize. The anterior leaflet is detached and the patch is sutured with three 5/0 running interlocked sutures, then an annuloplasty is performed with an undersized ring. This technique offers an easy and reproducible tool to address tricuspid regurgitation due to lack of leaflet coaptation. abstract_id: PUBMED:27093211 Valve-in-valve with Portico valve for a degenerative bioprosthetic surgical valve (Biocor). In recent years, a general shift toward the use of surgical bioprosthetic aortic valves rather than mechanical valves, with consequently less use of anticoagulant therapy, has been observed. However, bioprosthetic valves have limited durability. Reoperation, the current standard of care for these patients, carries a high surgical risk, especially because patients are elderly and have numerous comorbidities. Recently, transcatheter aortic valve replacement within a failed bioprosthetic valve (valve-in-valve procedure) has proven feasible. We describe here a case of a valve-in-valve procedure with a Portico valve placed in a purely insufficient bioprosthetic valve (Biocor). abstract_id: PUBMED:37038603 Effects of Address Coverage Enhancement on Estimates from Address-Based Sampling Studies. For over a decade, address-based sampling (ABS) frames have often been used to draw samples for multistage area sample surveys in lieu of traditionally listed (or enumerated) address frames. However, it is well known that the use of ABS frames for face-to-face surveys suffers from undercoverage due to, for example, households that receive mail via a PO Box rather than being delivered to the household's street address. Undercoverage of ABS frames has typically been more prominent in rural areas but can also occur in urban areas where recent construction of households has taken place. Procedures have been developed to supplement ABS frames to address this undercoverage. In this article, we investigate a procedure called Address Coverage Enhancement (ACE) that supplements the ABS frame with addresses not found on the frame, and the effects that the addresses added to the sample through ACE have on the resulting estimates. Weighted estimates from two studies, the Population Assessment of Tobacco and Health Study and the 2017 US Program for the International Assessment of Adult Competencies, are calculated with and without supplemental addresses. Estimates are then calculated to assess if poststratifying analysis weights to control for urbanicity at the person level brings estimates closer to estimates from the supplemented frame. Our findings show that the noncoverage bias was likely minimal across both studies for a range of estimates. The main reason is that the Computerized Delivery Sequence file coverage rate is high, and when the coverage rate is high, only very large differences between the covered and not covered will result in meaningful bias. abstract_id: PUBMED:31448762 Tricuspid valve-in-valve procedure: a step-by-step guideline.
Objective: The aim of this study was to present a valve-in-valve (ViV) case and a step-by-step guideline on how to perform this procedure. Methods: A 51-year-old female with a history of rheumatic heart disease and tricuspid valve replacement presented with functional class deterioration; a transesophageal echocardiogram (TEE) revealed prosthetic dysfunction due to thrombosis; therefore, a valvular replacement with a 27 mm bioprosthesis (Carpentier-Edwards Perimount) was performed without complication. Three years after the procedure, the patient presented with functional class deterioration (NYHA-III) and tricuspid dysfunction by TEE, and the heart team decided to perform a transcatheter tricuspid ViV replacement. abstract_id: PUBMED:37228546 Surgical Aortic Valve Replacement to Treat Prosthetic Valve Endocarditis After Valve-in-Valve Transcatheter Aortic Valve Replacement. Prosthetic valve endocarditis (PVE) is an uncommon complication after heart valve replacement surgery that can result in increased morbidity and mortality. Current guidelines for management of PVE recommend antibiotic therapy followed by surgical valve replacement. The number of aortic valve replacements is expected to rise in the coming years with the expanded indications for use of transcatheter aortic valve replacement (TAVR) in patients with low, intermediate, and high surgical risk, as well as in patients with a failed aortic bioprosthetic valve. Current guidelines do not address the use of valve-in-valve (ViV) TAVR for management of PVE in patients who are at high risk for surgical intervention. The authors present a case of a patient with aortic valve PVE after surgical aortic valve replacement (SAVR); he was treated with valve-in-valve (ViV) TAVR due to the high surgical risk. The patient was discharged, but he returned to the hospital with PVE and valve dehiscence 14 months after ViV TAVR, after which he successfully underwent re-operative SAVR. abstract_id: PUBMED:30932086 Tricuspid valve-in-valve procedure: a step-by-step guideline. Objective: The aim of this study was to present a valve-in-valve (ViV) case and a step-by-step guideline on how to perform this procedure. Methods: A 51-year-old female with a history of rheumatic heart disease and tricuspid valve replacement presented with functional class deterioration; a transesophageal echocardiogram (TEE) revealed prosthetic dysfunction due to thrombosis; therefore, a valvular replacement with a 27 mm bioprosthesis (Carpentier-Edwards Perimount) was performed without complication. Three years after the procedure, the patient presented with functional class deterioration (NYHA-III) and tricuspid dysfunction by TEE, and the heart team decided to perform a transcatheter tricuspid ViV replacement. abstract_id: PUBMED:24612128 Valve-in-valve transcatheter aortic valve implantation: the new playground for prosthesis-patient mismatch. Transcatheter aortic valve implantation (TAVI) has become an established procedure for patients with aortic valve stenosis and significant comorbidities. One option offered by this technique is the implantation of a transcatheter valve inside a surgically implanted bioprosthesis. Many reports address the feasibility but also the pitfalls of these valve-in-valve (VIV) procedures. Review articles provide tables listing which valve sizes are appropriate based on the size of the initially implanted bioprosthesis.
However, we previously argued that the hemodynamic performance of a prosthetic tissue valve is in large part a result of the dimensions of the bioprosthesis in relation to the patient's aortic outflow dimensions. Thus, the decision whether a VIV TAVI procedure is likely to be associated with a favorable hemodynamic result cannot safely be made by looking at premade sizing tables that do not include patient dimensions and do not inquire about the primary cause for bioprosthetic valve stenosis. Prosthesis-patient mismatch (PPM) may therefore be more frequent than expected after conventional aortic valve replacement. Importantly, it may be masked by a potentially flawed method assessing its relevance. Such PPM may therefore impact significantly on hemodynamic outcome after VIV TAVI. Fifteen percent of currently published VIV procedures show only a minimal reduction of pressure gradients. We will address potential pitfalls in the current determination of PPM, outline the missing links for reliable determination of PPM, and present a simplified algorithm to guide decision making for VIV TAVI. abstract_id: PUBMED:33575180 Mitral valve-in-valve and valve-in-ring: tips, tricks, and outcomes. Transcatheter aortic valve implantation is now a mainstay of treatment in patients with aortic stenosis who are considered intermediate, high and prohibitive risk for surgery. Extended use of this innovative platform in treating other conditions has led to its approval in treating degenerated aortic bioprosthesis. Similarly, use of transcatheter devices in treating degenerated mitral bioprosthesis and failed mitral valve repairs with annuloplasty rings has opened a potential alternative to surgery in these patients. Experience in mitral valve-in-valve (MVIV) and valve-in-ring (VIR), while still limited, is on the rise. Although similar in many ways to the aortic VIV, it is different with respect to patient selection, planning and procedural steps. Familiarity with the bioprosthetic properties and dimensions can help an operator choose an appropriate transcatheter device and deploy it in an ideal position. Due to greater variability in construction and properties, mitral rings have led to poorer results compared to mitral valve-in-valve. Understanding the properties of mitral rings is critical and has been simplified by us in a stepwise manner. We also describe steps in patient preparation and procedure, which should help operators in performing this procedure. Certain unique complications, such as left ventricular outflow tract obstruction and risk of embolization, are discussed with tips to address these issues. Once these steps are followed, the procedure can be performed with minimal risk and good outcome. abstract_id: PUBMED:28270905 Acute pulmonic valve endocarditis. Right heart endocarditis mainly affects the tricuspid valve, especially in drug users. Isolated pulmonary valve infection is rare. We report the case of a 32-year-old woman in whom community-acquired pulmonary valve endocarditis was confirmed. The case is all the more notable because the pulmonary valve endocarditis occurred in a patient without a history of drug addiction and was not associated with a restrictive interventricular communication. The patient underwent emergency surgery after visualization of a large, mobile vegetation on transthoracic echocardiography. Surgery was carried out under extracorporeal circulation, and the pulmonary valve was replaced with a bioprosthesis.
Blood cultures and valve culture showed methicillin-sensitive Staphylococcus aureus. Postoperative evolution was favorable, with a follow-up of 6 months without recurrent infection. abstract_id: PUBMED:26012593 Valve reconstruction for congenital mitral valve disease. The surgical treatment of mitral valve disease in children is a challenging problem. Mitral stenosis and regurgitation may occur in isolation or together. Mitral valve repair is almost always preferable to replacement. Mitral valve replacement is not an ideal alternative to repair due to limitations of size, growth, structural valve degeneration, anticoagulation and poor survival. Surgical repair of congenital mitral stenosis must address the multiple levels of obstruction, including resection of the supramitral ring, thinning of leaflets and mobilization of the subvalvular apparatus. Sometimes leaflet augmentation is required. Repair of mitral regurgitation in children may involve simple cleft closures, edge-to-edge repairs, triangular resections and annuloplasties. Techniques used in adults, such as annuloplasty bands or artificial chords, may not be appropriate for children. Overall, an imperfect mitral valve repair may be more acceptable than the negative consequences of a replacement in a child. Answer: The provided abstracts do not directly discuss addressing a "neopulmonic valve," which is not a standard term in cardiology. However, they do discuss various interventions for valve diseases, including the tricuspid valve, aortic valve, mitral valve, and pulmonic valve. For instance, there is mention of a case of acute pulmonic valve endocarditis where the patient underwent emergency surgery with replacement of the pulmonary valve with a bioprosthesis (PUBMED:28270905). This suggests that in cases of infection or dysfunction of the pulmonic valve, surgical intervention may be necessary. Additionally, there are several abstracts discussing valve-in-valve (ViV) procedures, where a transcatheter valve is implanted within a failed bioprosthetic valve (PUBMED:27093211, PUBMED:31448762, PUBMED:30932086, PUBMED:24612128, PUBMED:33575180). This technique is used for various valves, including the aortic and mitral valves, and could theoretically be applied to the pulmonic valve if a bioprosthetic valve were to fail. The abstracts also discuss the importance of addressing valve regurgitation, such as with tricuspid valve regurgitation, where a patch augmentation of the anterior leaflet is described (PUBMED:23844626). In summary, while the term "neopulmonic valve" is not used, the abstracts do suggest that addressing valve issues, including those of the pulmonic valve, is important and can be done through various surgical and transcatheter interventions depending on the specific condition and valve involved.
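Since the valve-in-valve discussion above (PUBMED:24612128) turns on prosthesis-patient mismatch, a short sketch of the usual PPM arithmetic may help. The indexed effective orifice area and the severity cut-offs below are the commonly cited aortic thresholds, stated here as an assumption; the abstract argues the concept without listing numbers, and the patient values are hypothetical.

```python
def indexed_eoa(eoa_cm2: float, bsa_m2: float) -> float:
    """Effective orifice area indexed to body surface area (cm^2/m^2)."""
    return eoa_cm2 / bsa_m2

def ppm_grade(ieoa: float) -> str:
    """Commonly cited aortic PPM grading (assumed thresholds)."""
    if ieoa > 0.85:
        return "no significant PPM"
    if ieoa > 0.65:
        return "moderate PPM"
    return "severe PPM"

# Hypothetical patient: 1.4 cm^2 residual EOA inside the old ring, 2.0 m^2 BSA.
ieoa = indexed_eoa(1.4, 2.0)
print(f"{ieoa:.2f} cm^2/m^2 -> {ppm_grade(ieoa)}")  # 0.70 cm^2/m^2 -> moderate PPM
```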
Instruction: Can the Bruckner test be used as a rapid screening test to detect significant refractive errors in children? Abstracts: abstract_id: PUBMED:17456940 Can the Bruckner test be used as a rapid screening test to detect significant refractive errors in children? Purpose: To assess the suitability of the Brückner test as a screening test to detect significant refractive errors in children. Materials And Methods: A pediatric ophthalmologist prospectively observed the size and location of the pupillary crescent on the Brückner test as hyperopic, myopic or astigmatic. This was compared with the cycloplegic refraction. Detailed ophthalmic examination was done for all. Sensitivity, specificity, positive predictive value and negative predictive value of the Brückner test were determined for the defined cutoff levels of ametropia. Results: Ninety-six subjects were examined. Mean age was 8.6 years (range 1 to 16 years). The Brückner test could be completed for all; the time taken to complete this test was 10 seconds per subject. The ophthalmologist identified 131 eyes as ametropic, 61 as emmetropic. The Brückner test had sensitivity 91%, specificity 72.8%, positive predictive value 85.5% and negative predictive value 83.6%. Of 10 false negatives, four had compound hypermetropic astigmatism and three had myopia. Conclusions: The Brückner test can be used to rapidly screen children for significant refractive errors. The potential benefits from such use may be maximized if programs use the test with lower crescent measurement cutoffs, a crescent measurement ruler and a distance fixation target. abstract_id: PUBMED:32153328 Efficacy of Brückner's Test for Screening of Refractive Errors by Non-ophthalmologist Versus Ophthalmologist: A Comparative Study. Objective: The aim of this study is to assess the efficacy of Brückner's test as a screening tool for identifying refractive errors by a non-ophthalmologist and to assess the inter-observer variation. Subjects And Methods: This was a quasi-experimental study conducted at a tertiary care medical college hospital. Brückner's test was performed by a non-ophthalmologist followed by an ophthalmologist. The results were confirmed by cycloplegic refraction. Results: The study included 75 children (31 males and 44 females) with a mean age of 13.3 ± 1.2 years. The results of Brückner's test by both non-ophthalmologist and ophthalmologist showed good sensitivity and specificity when compared with cycloplegic refraction. There was a good strength of agreement between the observations of non-ophthalmologist and ophthalmologist. Conclusion: Brückner's test is simple, quick, reliable, and easy to administer to children of all age groups, and can provide information about the presence of refractive errors using a coaxial light source such as a direct ophthalmoscope. The application of this test by training primary care providers would be of immense value in the early identification of refractive errors. abstract_id: PUBMED:33767959 Outcome of vision screening by community health workers at immunization outlets in Nigeria to support access to early visual evaluation in children aged 0-2 years. Purpose: Routine eye examination in early life is not the practice in most resource-limited countries. Delay in the presentation for eye problems is typical.
This study evaluated the value and limitation of interview, Bruckner red reflex test, and instrument vision screener by non-eye-care middle-level staff of rural and urban well-baby immunization clinics, in early detection and referral for childhood eye disorders. Materials And Methods: This was a cross-sectional study. Middle-level community health workers (CHWs) working at well-baby/immunization clinics were trained to perform vision screening using interview of caregivers, red reflex eye examination with an ophthalmoscope, and instrument vision screener (Welch Allyn SPOT™ Vision Screener) without mydriatic drugs during routine immunization of children aged 0-2 years. IRB approval was obtained. Results: Over a 6-month period in 2017, the CHWs screened 5609 children. Overall, 628 (11.2%) patients were referred to the tertiary child eye care unit. Referred cases included cataract, glaucoma, congenital nasolacrimal duct obstruction, ophthalmia neonatorum, retinoblastoma, and significant refractive errors. Referral from the interview of mothers was enhanced if specific questions to elicit visual function were asked. The Bruckner red reflex test was more effective than the instrument vision screener in the detection of cataract and life-threatening diseases such as retinoblastoma. The instrument vision screener was preferred by parents and better at detecting amblyopic risk factors. Conclusion: Preschool vision screening during routine immunization by primary healthcare workers in resource-limited settings was effective. Whenever the instrument vision screener does not give any recommendation during screening, consider vision- or life-threatening pathology and refer. abstract_id: PUBMED:27905334 The opportunistic screening of refractive errors in school-going children by pediatrician using enhanced Brückner test. Aim: The aim of this study was to compare the results of enhanced Brückner test (EBT) performed by a pediatrician and an experienced pediatric ophthalmologist. Subjects And Methods: In this prospective double-masked cohort study, a pediatrician and a pediatric ophthalmologist performed the EBT in a classroom of a school in semi-dark lighting condition using a direct ophthalmoscope. The results of the test were compared using 2 × 2 Bayesian table and kappa statistics. The findings of the pediatric ophthalmologist were considered the gold standard. Results: Two hundred and thirty-six eyes of 118 subjects, mean age 6.8 ± 0.5 years (range, 5.4-7.8 years), were examined. The time taken to complete this test was <10 s per subject. The ophthalmologist identified 59 eyes as ametropic (12 hyperopic and 47 myopic eyes) and 177 as emmetropic, compared to 61 eyes as ametropic and 175 emmetropic by the pediatrician. The prevalence of test positivity was 25.9%. The sensitivity of the pediatrician was 90.2%, specificity was 97.7%, predictive value of the positive test was 93.2%, and predictive value of the negative test was 96.6%. The clinical agreement (kappa) between the pediatric ophthalmologist and the pediatrician was 0.9. Conclusion: The results of the EBT performed by the pediatrician were comparable to those of an experienced pediatric ophthalmologist. Opportunistic screening of refractive errors using EBT by a pediatrician can be an important approach in the detection of ametropia in children. abstract_id: PUBMED:17602235 The Brückner test: extended distance improves sensitivity for ametropia. Background: The Brückner test is usually performed at a distance of 1 m.
For optical reasons, the test should detect ametropia more sensitively at an extended distance. We compared the sensitivity of the test to detect unilateral ametropia at a distance of 4 m versus the traditional distance of 1 m. Patients And Methods: In this study, five blinded experienced observers (experts) performed the Brückner transillumination test on five emmetropic subjects (age 22.3 ± 2.3 years) at a distance of 1 m, which was then extended up to 4 m. Unilateral ametropia was simulated by both spherical and cylindrical lenses of -1, -2, -3, -4, and +1, +2, +3, +4 diopters in front of one eye and a plano lens in front of the other eye in a randomised order. Controls with plano lenses in front of both eyes were interspersed. The test was considered positive in case of any difference between the red reflexes of both eyes. For spherical ametropia, the procedure was repeated with 25 medical students who examined one subject each. Results: At a distance of 1 m, unilateral myopia/hypermetropia of more than 1 diopter was detected in 78.7%/71.3% by experts and in 34.7%/53.3% by students. At 4 m, detection rates increased to 99.3%/96.0% and 98.7%/100% respectively. Rates of false positive findings for experts vs students were 3.1% vs 1.5% at 1 m and 4.0% vs 3.0% at 4 m. For unilateral astigmatism, experts' detection rates were 64.7%/70.0% at 1 m and 91.7%/90.3% at 4 m, depending on the astigmatism axis. Conclusions: The sensitivity of the Brückner reflex for anisometropia improves by extension of the examination distance, especially in the hands of less experienced observers. To detect ametropia more sensitively, a test distance of 4 m is recommended. abstract_id: PUBMED:31378518 Can the Red Reflex Test Detect Unequal Refractive Error? Objective: To determine the accuracy of the red reflex test in the detection of anisometropia. Study Design: This prospective, single-masked study enrolled new patients younger than the age of 18 years who had not undergone pharmacologic pupillary dilation. A fellow who was masked to all clinical information illuminated both eyes with a direct ophthalmoscope in a darkened room from a distance of 1 m, assessing whether the red reflex between the 2 eyes was symmetric or asymmetric. The patient was then dilated, and cycloplegic refraction was performed by an attending pediatric ophthalmologist. Exclusion criteria included the presence of strabismus, anisocoria, previous intraocular surgery, media opacity, leukocoria, or nystagmus. Sensitivity was compared with a null hypothesized value of 50% using a 1-sided binomial test. Results: Ninety-two patients with a mean age of 7.3 years (range 3 months to 16 years) were enrolled. With spherical anisometropia greater than or equal to 0.125 diopters, the sensitivity of the red reflex test was 90.6% and the specificity was 58.3%. With cylindrical anisometropia greater than or equal to 0.25 diopters, the sensitivity of the red reflex test was 81.3% and the specificity was 70%. Anisometropia greater than 1.5 diopters in spherical equivalent (4 patients, range -10.625 to -2.625) or cylinder (3 patients, range 1.75-2.25) was accurately detected by red reflex testing in each case. Conclusions: The red reflex test can be an accurate screening tool to detect anisometropia when performed by an ophthalmologist. abstract_id: PUBMED:3317238 Refractive errors in children.
Optical correction of refractive errors in infants and young children is indicated when the refractive errors are sufficiently large to cause unilateral or bilateral amblyopia, if they are impairing the child's ability to function normally, or if the child has accommodative strabismus. Screening for refractive errors is important and should be performed as part of the annual physical examination in all verbal children. Screening for significant refractive errors in preverbal children is more difficult; however, the red reflex test of Bruckner is useful for the detection of anisometropic refractive errors. The photorefraction test, which is an adaptation of Bruckner's red reflex test, may prove to be a useful screening device for detecting bilateral as well as unilateral refractive errors. Objective testing as well as subjective testing enables ophthalmologists to prescribe proper optical correction for refractive errors for infants and children of any age. abstract_id: PUBMED:31236980 Diagnostic test accuracy of Spot and Plusoptix photoscreeners in detecting amblyogenic risk factors in children: a systemic review and meta-analysis. Purpose: Amblyopia is a permanent visual impairment developed in early childhood. Recently, instrument-based photoscreeners have been widely used for vision screening to detect amblyopia risk factors, which is key for the prevention and treatment of amblyopia. This meta-analysis aims to evaluate the diagnostic test accuracy of Spot and Plusoptix photoscreeners in detecting risk factors for amblyopia in children. Methods: An electronic literature search was performed in October 2018 using the MEDLINE, Embase, and Web of Science databases. Twenty-one publications with a total of 5022 subjects were included. Subjects given a comprehensive examination were considered to have amblyopia or amblyogenic risk factors based on the American Association for Pediatric Ophthalmology and Strabismus (AAPOS) referral criteria guidelines. Results: The overall sensitivity for Spot and Plusoptix was 87.7% and 89.4%, respectively (p = 0.38); the specificity was 78.0% and 89.9%, respectively (p = 0.90). For subjects under 7 years old (preschool children), the overall sensitivity for Spot and Plusoptix was 91.7% and 90.2%, respectively (p = 0.81); the specificity was 82.6% and 93.0%, respectively (p = 0.46). Conclusion: Both Spot and Plusoptix photoscreeners demonstrated good sensitivity and specificity in detecting amblyopia risk factors in children, especially at preschool ages. There was no significant difference in diagnostic test accuracy between them. abstract_id: PUBMED:452880 A simple method for screening of children with strabismus, anisometropia or ametropia by simultaneous photography of the corneal and the fundus reflexes. A simple screening method for detecting strabismus, anisometropia and ametropia in young children by simultaneous photography of the corneal and fundus reflexes with a conventional camera and flashlight is presented. The method is the photographic application of the von Brückner Durchleuchtung test and static skiascopy. An objective photographic record of the external parts of the eyes and the face is obtained at the same time. Model photographs at certain gaze deviation angles are presented to illustrate the method. abstract_id: PUBMED:17197520 Children unable to perform screening tests in vision in preschoolers study: proportion with ocular conditions and impact on measures of test accuracy.
Purpose: To examine the relative prevalence of ocular conditions among children who are unable to perform preschool vision screening tests and the impact on measures of screening test performance. Methods: Trained nurse and lay screeners each administered a Lea Symbols visual acuity (VA) test (Good-Lite, Inc., Steamwood, IL), Stereo Smile II test (Stereo Optical, Inc., Chicago, IL), and Retinomax Autorefractor (Right Manufacturing, Virginia Beach, VA), and SureSight Vision Screener (Welch Allyn, Inc., Skaneateles Falls, NY) examinations to 1475 children who later received a comprehensive eye examination to identify amblyopia, strabismus, significant refractive error, and unexplained reduced VA. The outcomes of the examination for children for whom screeners were unable to obtain results (Unables) were compared to the outcomes of children who passed and children who failed each screening test. When estimating sensitivity, specificity, and positive and negative predictive values (PPV and NPV), Unables were classified as either screening failures or screening passers. Results: Less than 2% of children were classified as Unables for each test. The percentage with an ocular condition was at least two times higher for Unables than for screening passers for six of the eight modes of screening (P < 0.05). Considering Unables as screening failures, rather than screening passers, increased the estimate of sensitivity by 1% to 3% (depending on test) and decreased the estimate of specificity by 0% to 2%; PPV decreased by 0% to 4% for most tests, whereas NPV increased by <1%. Conclusions: Preschool children who are unable to perform VIP screening tests are more likely to have vision disorders than are children who pass the tests. Because ≤2% of children were unable to do each test, referring these children for an eye examination had little impact on the PPV and NPV of the tests, as administered in VIP. Answer: Yes, the Bruckner test can be used as a rapid screening test to detect significant refractive errors in children. Studies have shown that the Bruckner test is a simple, quick, and reliable method that can be completed in a short time, such as 10 seconds per subject, and can be administered by both ophthalmologists and non-ophthalmologists with good sensitivity and specificity (PUBMED:17456940; PUBMED:32153328). The test has been demonstrated to have a high sensitivity of 91% and a specificity of 72.8% in one study (PUBMED:17456940), and another study found good agreement between non-ophthalmologists and ophthalmologists in performing the test (PUBMED:32153328). Additionally, the Bruckner test has been effectively used by community health workers at immunization outlets, showing its potential for early detection and referral for childhood eye disorders (PUBMED:33767959). Furthermore, the enhanced Bruckner test (EBT) performed by pediatricians has been found to have comparable results to those obtained by experienced pediatric ophthalmologists, suggesting that it can be an important approach in the detection of ametropia in children (PUBMED:27905334). The sensitivity of the Bruckner reflex for anisometropia also improves by extending the examination distance, especially for less experienced observers (PUBMED:17602235). Lastly, the red reflex test, which is a component of the Bruckner test, has been shown to accurately detect anisometropia when performed by an ophthalmologist (PUBMED:31378518).
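The test-accuracy figures quoted throughout this entry all come from the same 2×2 arithmetic, sketched below. The counts are reconstructed approximately from PUBMED:17456940 (131 eyes read as ametropic, 61 as emmetropic, 10 false negatives); they reproduce the reported rates to rounding but should be read as illustrative, not as the paper's exact table.

```python
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Approximate reconstruction: 112 true positives, 19 false positives,
# 10 false negatives, 51 true negatives (192 eyes of 96 subjects).
for name, value in screening_metrics(tp=112, fp=19, fn=10, tn=51).items():
    print(f"{name}: {value:.1%}")
# sensitivity ~91.8%, specificity ~72.9%, ppv ~85.5%, npv ~83.6%
```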
Instruction: Does offering and performing prenatal screening influence women's attachment to their unborn child? Abstracts: abstract_id: PUBMED:17546705 Does offering and performing prenatal screening influence women's attachment to their unborn child? A longitudinal randomized controlled trial. Objective: The question addressed was: Does offering prenatal screening and receiving a negative screening outcome influence women's attachment to their unborn child? Methods: Women were offered a nuchal translucency measurement, maternal serum screening, or no screening at all in a randomized controlled trial. Attachment was measured by a self-developed questionnaire at four points in time: before screening was offered, after the offer, after receiving the negative screening result (or at comparable points in time) and in the last trimester of pregnancy. In the last trimester, the Prenatal Attachment Inventory was also filled in. Results: Women who had been offered screening (n = 1031) showed more attachment (F(1,1415) = 19.42, p < .001) compared to women who had not been offered screening (n = 387). This difference disappeared later in pregnancy. At all points in time, negatively screened women (n = 466) had equal levels of attachment compared to screening decliners (n = 565). No difference was observed between women who received a negative result of the ultrasound screening (n = 285) as compared to the blood screening (n = 162). Conclusion: Offering prenatal screening seems to temporarily increase attachment. However, this difference is very small. Attachment is not influenced by whether a blood screening or an ultrasound screening is performed. abstract_id: PUBMED:37508684 Becoming Dad: Expectant Fathers' Attachment Style and Prenatal Representations of the Unborn Child. How expectant fathers think and feel about the unborn child (prenatal representations) has shown associations with fathers' postnatal parenting behaviors, observed father-infant interactional quality and child cognitive development. There is limited knowledge about fathers' prenatal representations. The present study examined if fathers' partner-related attachment styles were related to their prenatal representations of the unborn child. In the "Little in Norway Study", an ongoing prospective, longitudinal population-based study, 396 expectant fathers completed the Experiences in Close Relationships Scale at enrollment (mean gestational week = 23.76, SD = 4.93), and in gestational weeks 27-35 completed three questions assessing prenatal representations. Associations of attachment style with prenatal representations were examined using logistic regression analyses. We found that fathers with an avoidant attachment style were more likely to have absent or negative representations on all three items: (1) "strongest feeling about the unborn child" (CI = 1.19-2.73), (2) "thoughts about child personality" (CI = 1.16-1.87), and (3) "experiences of relationship with the child" (CI = 1.14-1.75). Father anxious attachment style was not significantly associated with absent or negative prenatal representations. Results suggest that expectant fathers with a partner-related avoidant attachment style have an increased risk of having absent or negative prenatal representations of the unborn child. abstract_id: PUBMED:18690005 Does offering prenatal screening influence pregnant women's attitudes regarding prenatal testing?
Objectives: This study aims to find out whether offering prenatal screening for Down syndrome and neural tube defects influences pregnant women's attitudes toward having a screening test. Methods: Women were randomised into a group that was offered prenatal screening and a group that was not offered screening (controls). Both groups completed questionnaires before screening was offered, after the offer (not the control group), and in the last trimester of pregnancy. Results: Women with a neutral attitude at baseline who accepted the screening test had a more positive attitude; decliners became more negative; and the attitude of the control group did not change. Conclusion: Offering prenatal screening triggers a change in some pregnant women's attitude regarding prenatal testing. This instability of women's attitudes may pose a problem for determining whether some women made an informed choice. abstract_id: PUBMED:17345647 Intergenerational transmission of trauma: exploring mother-infant prenatal attachment. Early childhood attachment and bonding and the intergenerational transmission of trauma are two key areas to address to understand the connection between parental trauma and the parent-child relationship. The purpose of the current study was to explore the relationship between trauma and past parental attachment behaviors of 41 expectant mothers and the subsequent development of attachment and bonding with their unborn child. Results of the current study suggest that trauma history, in general, does not negatively impact expectant mothers' current prenatal attachment with their unborn child. However, interpersonal trauma history does appear to have negative effects on prenatal attachment. These results point to the importance of understanding the role of interpersonal trauma exposure on prenatal attachment. abstract_id: PUBMED:22437190 The association of prenatal attachment and perinatal factors with pre- and postpartum depression in first-time mothers. Purpose: This prospective study investigated associations between prenatal attachment of adult first-time mothers to the unborn child, perinatal factors and levels of depression before and up to 18 months after delivery. Method: Primiparas (N = 161) without specific risk factors answered the following questionnaires during the last term of pregnancy (t1): Edinburgh Postnatal Depression Scale (EPDS), Maternal Antenatal Attachment Scale (MAAS), questionnaire on the schema of the unborn child, and a questionnaire about the pregnancy. Perinatal data were taken from the patients' files. The EPDS was answered 3 weeks (t2, N = 157), 6 months (t3, N = 159), and 18 months (t4, N = 132) postpartum. Results: During pregnancy, 16.9% of the women indicated mild depressive symptoms, and 7.5%, medium to severe symptoms of depression. Mild symptoms of depression were found in 25.5% at t2, 10.1% at t3, and 12.2% at t4; medium to severe symptoms were reported by 7.6%, 1.9% and 5.6%, respectively. Women with low control during delivery (emergency Caesarean) showed a tendency for higher levels (p = 0.067) of depression at t3 than women with elective Caesarean did. The quality of prenatal attachment to the unborn child correlated negatively with depressive symptoms at t1-t4. Conclusions: The closer the prenatal attachment of a mother to her unborn child, the fewer symptoms of depression she reports during the last term of pregnancy and postpartum.
Therefore, promoting good mother-child attachment during pregnancy might influence the level of postpartum depression. abstract_id: PUBMED:26826039 Viewing the unborn child - pregnant women's expectations, attitudes and experiences regarding fetal ultrasound examination. Objective: To gain a deeper understanding of pregnant women's expectations, attitudes and experiences regarding ultrasound examination during pregnancy. Methods: In-depth interviews were conducted with eight pregnant Norwegian women prior to their 18-week scan. The data were analyzed through systematic text condensation. Results: The analysis generated three main themes: (1) I want to know if everything is fine, (2) Viewing the unborn child, (3) Holistic care. The women had a strong wish for ultrasound scanning, and medical knowledge about their "baby" was their first priority. Visualizing the "baby" represented a strong emotional dimension for the parents-to-be and initiated the bonding process and the planning of a new life. The women wanted to be empowered and approached in a holistic way, where dialogic communication was needed. Conclusion: The women wanted medical knowledge about the fetal health. They considered the examination a visual experience together with their partner. The scan provided a personification of and an attachment to the fetus. This experience was the first step in the planning of a new life. The women had ambivalent feelings related to the ultrasound examination and they highlighted the importance of holistic care, where the sharing of information happens on an individual basis. abstract_id: PUBMED:37743542 The effect of prenatal anxiety on prenatal attachment in high-risk pregnant women and related factors: a cross-sectional study. High-risk pregnant women experience stress and anxiety. Anxiety may affect prenatal attachment. This study aimed to examine the effect of anxiety on prenatal attachment in high-risk pregnant women. The study was cross-sectional and conducted on 290 high-risk pregnant women who applied to the perinatology outpatient clinic of a university hospital between February and July 2022. The data were collected with the descriptive information sheet, Prenatal Attachment Inventory (PAI), and Perinatal Anxiety Screening Scale (PASS). Mann-Whitney U, Kruskal-Wallis H, and Spearman's Correlation tests were used to analyze the data. The mean age of the examined pregnant women was 28.12 ± 5.36, and most of them were multigravida (65.5 percent) and in the third trimester (80.7 percent). The PAI and PASS mean scores were 64.09 ± 11.31 and 27.98 ± 16.52, respectively. There was a weak positive correlation between anxiety and prenatal attachment (r = .227, p < .001). The levels of prenatal attachment were higher in primigravidas and women with no children. Also, the anxiety levels of pregnant women in the third trimester were lower. Health professionals should evaluate the prenatal attachment and anxiety levels of high-risk pregnant women, and consider obstetrical factors that may influence both prenatal attachment and anxiety levels. abstract_id: PUBMED:37407838 Maternal prenatal attachment during the COVID-19 pandemic: exploring the roles of pregnancy-related anxiety, risk perception, and well-being. Pregnant women have faced novel physical and mental health risks during the pandemic. This situation is remarkable because a parent's emotional bond with their unborn baby (also known as prenatal attachment) is related to the parent's mental state.
Prenatal attachment helps parents psychologically prepare for the transition into parenthood. Moreover, it plays a pivotal role in the future parent-child relationship and psychosocial development of the baby. Based on the available literature, the current study integrated risk perception theories with mental health indicators to examine maternal prenatal attachment during the pandemic. Pregnant women (n = 258) completed the Pregnancy-Related Anxiety Questionnaire-Revision 2 (PRAQ-R2), the WHO Well-being Index (WHO-5), the Prenatal Attachment Inventory (PAI), and answered questions about COVID-19 risk perception. The findings illustrated that pregnancy-related anxiety was positively associated with maternal prenatal attachment. Moreover, COVID-19 risk perception and well-being mediated this relationship. In other words, higher levels of pregnancy-related anxiety were associated with increased COVID-19 risk perception and decreased well-being, inhibiting prenatal attachment in pregnant women. Considering the importance of prenatal attachment, it is crucial to understand the experiences of pregnant women and develop policies for promoting prenatal attachment, especially during challenging times such as the COVID-19 pandemic. abstract_id: PUBMED:35886548 A Cross-Sectional Study of the Marital Attitudes of Pregnant Women at Risk for Cystic Fibrosis and Psychological Impact of Prenatal Screening. Cystic fibrosis (CF) is one of the most frequent genetic disorders in those with Northern European ancestry. Prenatal testing for cystic fibrosis may be used to plan and prepare for the birth of a child with the disease or to determine whether to terminate the pregnancy. The accessibility of prenatal detection for women with a high genetic risk of delivering a child with cystic fibrosis is determined by CF carriers and those affected by the disease. Moreover, prenatal testing for CF is mainly dependent on invasive diagnostic tests that can influence the mental health of the pregnant woman, and it is assumed that the birth of a CF child will have a serious influence on the couple's subsequent family planning and marital behavior. The purpose of this research was to examine the marital attitudes of women at risk for cystic fibrosis and the psychological effect of screening for CF among pregnant women. The study followed a cross-sectional design with five questionnaires comprising the Prenatal Attachment Interview (PAI), Maternal Antenatal Attachment Scale (MAAS), Pregnancy-Related Anxiety Questionnaire (PRAQ-R2), the Prenatal Psychosocial Profile (PPP), and the Marital Intimacy Questionnaire (MIQ). A total of 84 pregnant women were included in the "carriers" group for CFTR and 91 in the "non-carrier" group. CFTR-carrier mothers were likely to be more affectionate to the fetus, with better maternal-fetal quality and intensity of attachment. The same group of pregnant women was less scared of giving birth or worried about bearing a physically or mentally handicapped child compared to women who were expecting the prenatal diagnosis test for being at risk of delivering a newborn with malformations. CFTR-carrier pregnant women did not score significantly different results in the Prenatal Psychosocial Profile regarding stress levels, social support, and self-esteem. It was also found that intimacy and consensus problems inside the marriage were significantly more often experienced by CFTR carriers.
Based on the current findings, it is likely that CFTR-carrier mothers have a better perception of the possible pregnancy outcomes by knowing their abnormal gene carrier status. Therefore, the psychological impact of invasive diagnostic tests is lower in this category compared with those who are unaware of the possible pregnancy outcomes. However, we propose a future analysis for pregnant women with moderate risk of giving birth to a child with single-gene mutations such as cystic fibrosis or other congenital malformations who undergo noninvasive prenatal diagnosis tests, as these become more accurate and might cause lower pre-diagnosis stress levels. abstract_id: PUBMED:25288610 Maternal Experiences of Their Unborn Child's Spiritual Care: Patterns of Abstinence in Iran. Preparing for pregnancy and childbirth has a significant association with spirituality. Review of the literature shows that the spirituality of the "unborn child" has not yet attracted much critical attention. This study was conducted with the aim of exploring maternal behaviors associated with the spiritual health of the unborn child. A qualitative approach was used to investigate the research question. Twenty-seven in-depth unstructured interviews were conducted with 22 Iranian mothers in Tehran city (Iran) who were pregnant or had experienced pregnancy in 2012-2013. Data analysis was carried out using a conventional content analysis approach. "Refusing to eat forbidden food," "Overcoming mental adversity," "Regulating one's social interactions," "Preventing the effects of harmful environments on the senses," "Avoidance of using insulting and abusive language," "Keeping one's mind and spirit free from evil traits," and "Refraining from damaging behaviors" were important experiences that the mothers used for "Holistic Abstinence." The results provide new information about the subjective experiences of Iranian women on the patterns of abstinence for the midwives, research community, policy makers, and planners of maternal and child health care services in order to contribute to holistic, culturally, and religiously competent prenatal care for Muslim pregnant women throughout the world. Answer: Yes, offering and performing prenatal screening does influence women's attachment to their unborn child. A longitudinal randomized controlled trial found that women who were offered prenatal screening showed more attachment compared to those who were not offered screening. This difference was observed after the offer of screening and after receiving a negative screening result, but it disappeared later in pregnancy. The study concluded that offering prenatal screening seems to temporarily increase attachment, although the difference is very small and attachment is not influenced by whether a blood screening or an ultrasound screening is performed (PUBMED:17546705).
Instruction: Can short-term administration of dexamethasone abrogate radiation-induced acute cytokine gene response in lung and modify subsequent molecular responses? Abstracts: abstract_id: PUBMED:11567802 Can short-term administration of dexamethasone abrogate radiation-induced acute cytokine gene response in lung and modify subsequent molecular responses? Purpose: To investigate the effects of short-term administration of dexamethasone (DEX) on radiation-induced responses in the mouse lung, focusing on expression of pro-inflammatory cytokine and related genes. Methods And Materials: At indicated times after thoracic irradiation and/or drug treatment, mRNA expression levels of cytokines (mTNF-alpha, mIL-1 alpha, mIL-1 beta, mIL-2, mIL-3, mIL-4, mIL-5, mIL-6, mIFN-gamma) and related genes in the lungs of C3H/HeN mice were measured by RNase protection assay. Results: Radiation-induced pro-inflammatory cytokine mRNA expression levels in lung peak at 6 h after thoracic irradiation. DEX (5 mg/kg) suppresses both basal cytokine mRNA levels and this early response when given immediately after irradiation. However, by 24 h, in mice treated with DEX alone or DEX plus radiation, there was a strong rebound effect that lasted up to 3 days. Modification of the early radiation-induced response by DEX did not change the second wave of cytokine gene expression in the lung that occurs at 1 to 2 weeks, suggesting that early cytokine gene induction might not determine subsequent molecular events. A single dose of DEX attenuated, but did not completely suppress, increases in cytokine mRNA levels induced by lipopolysaccharide (2.5 mg/kg) treatment, but, unlike with radiation, no significant rebound effect was seen. Five days of dexamethasone treatment in the pneumonitic phase also inhibited pro-inflammatory cytokine gene expression and, again, there was a rebound effect after withdrawal of the drug. Conclusions: Our findings suggest that short-term use of dexamethasone can temporarily suppress radiation-induced pro-inflammatory cytokine gene expression, but there may be a rebound after drug withdrawal and the drug does little to change the essence and course of the pneumonitic process. abstract_id: PUBMED:28570981 Ethyl pyruvate alleviates radiation-induced lung injury in mice. Objective: Radiation-induced lung injury (RILI) is a common complication of thoracic cancer radiation therapy. Ethyl pyruvate (EP) was reported to have an ameliorating effect on a variety of systemic inflammation reactions, including acute lung injury. However, the protective effect of EP on RILI has not been explored. Materials/methods: RILI was induced by a single thoracic irradiation of 16 Gy X-rays in C57BL/6 mice. Mice were divided into four groups: control, radiation, 100 mg/kg EP, and 200 mg/kg dexamethasone. Inflammation and fibrosis grade of lung tissue were scored by H&E and Masson's trichrome staining. Cytokines including IL-1β, IL-6, TNF-α, GM-CSF, M-CSF, TGF-β1, and HMGB1 were measured after irradiation. Colony formation assay was performed to determine the protective effect of EP in RAW264.7 and HBE cells. The effect of EP on HMGB1 was also explored in vitro. Result: The cytoplasm of bronchial epithelial cells in mice stained positive for HMGB1, accompanied by an increase of HMGB1, IL-6, IL-1β, GM-CSF, M-CSF, TNF-α, and TGF-β1 after irradiation. EP administration significantly reduced the pulmonary inflammatory infiltration of RILI, along with a decrease of HMGB1, IL-6, IL-1β, and GM-CSF at 4 weeks after irradiation.
Furthermore, EP decreased radiation-induced collagen deposition at 20 weeks after irradiation. Pro-fibrotic cytokines including TGF-β1 and HMGB1 in irradiated lung tissue and plasma decreased markedly in the EP group at the later stage. In vitro, EP administration protected HBE cells from radiation injury. EP also rescued radiation-induced release but not translocation of HMGB1 in RAW264.7 and HBE cells. Conclusion: EP treatment ameliorates RILI, including radiation-induced fibrosis, in mice. The inhibition of production and release of pro-inflammatory or fibrotic cytokines, especially HMGB1, may partly account for its attenuating effect on RILI. abstract_id: PUBMED:10597915 Rapid induction of cytokine gene expression in the lung after single and fractionated doses of radiation. Purpose: To investigate cytokine gene expression in the lung after single and fractionated doses of radiation, and to investigate the effect of steroids and the genetic background. Materials And Methods: Expression of cytokine genes (mTNF-alpha, mIL-1alpha, mIL-1beta, mIL-2, mIL-3, mIL-4, mIL-5, mIL-6, mIFN-gamma) in the lungs of C3H/HeJ and C57BL/6J mice was measured by RNase protection assay at different times after various doses of radiation. The effects of dexamethasone and fractionated radiation treatment on gene expression were also studied. Results: IL-1beta was the major cytokine induced in the lungs of C3H/HeJ mice within the first day after thoracic irradiation. Radiation doses as low as 1 Gy were effective. Responses to 20 Gy irradiation peaked within 4-8 h and subsided by 24 h. With the exception of IL-1alpha and TNF-alpha, the other cytokines that were investigated had undetectable pre-treatment mRNA levels and were not radiation inducible. Similar responses were seen in C57BL/6J mice, although TNF-alpha was induced and there were some quantitative differences. Pre-treatment of C3H/HeJ mice with dexamethasone reduced basal and induced IL-1 levels, but complete inhibition was not achieved. Dexamethasone was also effective if given immediately after irradiation. Fractionated daily doses of radiation (4 Gy/day) helped to maintain cytokine gene expression for a longer period. Conclusions: Inflammatory genes are rapidly induced in the lung by irradiation. This response cannot be readily abolished by steroid pre-treatment. Fractionated treatment schedules help to perpetuate the response. abstract_id: PUBMED:7558951 Induction of acute phase gene expression by brain irradiation. Purpose: To investigate the in vivo acute phase molecular response of the brain to ionizing radiation. Methods And Materials: C3Hf/Sed/Kam mice were given midbrain or whole-body irradiation. Cerebral expression of interleukins (IL-1 alpha, IL-1 beta, IL-2, IL-3, IL-4, IL-5, IL-6), interferon (IFN-gamma), tumor necrosis factors (TNF-alpha and TNF-beta), intercellular adhesion molecule-1 (ICAM-1), inducible nitric oxide synthase (iNOS), von Willebrand factor (vWF), alpha 1-antichymotrypsin (EB22/5.3), and glial fibrillary acidic protein (GFAP) was measured at various times after various radiation doses by ribonuclease (RNase) protection assay. The effects of dexamethasone or pentoxifylline treatment of mice on radiation-induced gene expression were also examined. Results: Messenger RNA levels of TNF-alpha, IL-1 beta, ICAM-1, and EB22/5.3, and to a lesser extent IL-1 alpha and GFAP, were increased in the brain after irradiation, whether the dose was delivered to the whole body or only to the midbrain.
Responses were radiation dose-dependent but were not found below 7 Gy, the exception being ICAM-1, which was increased by doses as low as 2 Gy. Most responses were rapid, peaking within 4-8 h, but antichymotrypsin and GFAP responses were delayed and still elevated at 24 h, by which time the others had subsided. Pretreatment of mice with dexamethasone or pentoxifylline suppressed radiation-induced gene expression, either partially or completely. Dexamethasone was more inhibitory than pentoxifylline at the doses chosen. Conclusions: The initial response of the brain to irradiation involves expression of inflammatory gene products, which are probably responsible for clinically observed early symptoms of brain radiotherapy. This mechanism explains the beneficial effects of the clinical use of steroids in such circumstances. abstract_id: PUBMED:27586366 Acute respiratory changes and pulmonary inflammation involving a pathway of TGF-β1 induction in a rat model of chlorine-induced lung injury. We investigated acute and delayed respiratory changes after inhalation exposure to chlorine (Cl2) with the aim of understanding the pathogenesis of the long-term sequelae of Cl2-induced lung injury. In a rat model of nose-only exposure we analyzed changes in airway hyperresponsiveness (AHR), inflammatory responses in airways, expression of pro-inflammatory markers and development of lung fibrosis during a time-course from 5 h up to 90 days after a single inhalation of Cl2. A single dose of dexamethasone (10 mg/kg) was administered 1 h following Cl2 exposure. A 15-min inhalation of 200 ppm Cl2 was non-lethal in Sprague-Dawley rats. At 24 h post-exposure, Cl2-exposed rats displayed elevated numbers of leukocytes with an increase of neutrophils and eosinophils in bronchoalveolar lavage (BAL), and edema was seen both in lung tissue and the heart. At 24 h, the inflammasome-associated cytokines IL-1β and IL-18 were detected in BAL. Concomitant with the acute inflammation, a significant AHR was detected. At the later time-points, a delayed inflammatory response was observed together with signs of lung fibrosis as indicated by increased pulmonary macrophages, elevated TGF-β expression in BAL and collagen deposition around airways. Dexamethasone reduced the numbers of neutrophils in BAL at 24 h but did not influence the AHR. Inhalation of Cl2 in rats leads to acute respiratory and cardiac changes as well as pulmonary inflammation involving induction of TGF-β1. The acute inflammatory response was followed by a sustained macrophage response and lack of tissue repair. It was also found that pathways apart from the acute inflammatory response contribute to the Cl2-induced respiratory dysfunction. abstract_id: PUBMED:37364330 Effects of R-salbutamol on the inflammatory response and acute lung injury in endotoxemic mice. Salbutamol, which consists of an R-isomer and S-isomer, is an effective and widely used β2 adrenoreceptor agonist that may possess anti-inflammatory properties in addition to its bronchodilator activity. Whether the salbutamol R-isomer has advantages over its racemic mixture in effectiveness for treating endotoxemia and endotoxin-induced lung injury has not been well studied. In this study, we investigated the preventive and therapeutic effects of R-salbutamol (R-sal), S-salbutamol (S-sal), and their racemic mixture (Rac-sal) on a mouse model of lipopolysaccharide (LPS)-induced endotoxemia. Dexamethasone (Dex) was used for comparison.
The results showed that R-sal markedly improved the 7-day survival rate of endotoxic mice when administered before and after LPS treatment. Dex was toxic and accelerated the death of endotoxic mice when administered before LPS injection. Histological examination of the lungs revealed that the LPS challenge resulted in acute lung damage, including inflammatory cell infiltration, thickened alveolar septa, and congestion. R-sal pre-treatment effectively inhibited these changes, accompanied by markedly reduced lung myeloperoxidase levels, serum cytokine levels, and lactate release, significant restoration of lymphocyte count, and reduction of monocyte count. This may have occurred through inhibition of M1 macrophage inflammatory responses by enhancement of β-arrestin2 expression and suppression of NF-κB activation. Rac-sal exhibited diminished effects compared to those of R-sal, while S-sal showed enhanced release of some inflammatory cytokines. In addition, R-sal pre-treatment showed a better improvement in prognostic pulmonary function on day 4 compared to that of Rac-sal. Collectively, our results indicate the potential benefits of R-sal in regulating inflammatory responses to endotoxemia and endotoxin-induced lung injury. abstract_id: PUBMED:35594367 3,3'-Diindolylmethane attenuates inflammation and fibrosis in radiation-induced lung injury by regulating NF-κB/TGF-β/Smad signaling pathways. Objective: This study aims to investigate the protective effect of 3,3'-diindolylmethane (DIM) in a radiation-induced lung injury (RILI) model and to explore its possible mechanism. Methods: A mouse model of RILI was established by thoracic irradiation, and dexamethasone was used as a positive control drug to investigate the effect of DIM on RILI mice. Lung histopathology was analyzed by H&E staining and Masson staining. Then the levels of inflammatory cytokines (TGF-β, TNF-α, IL-1β, and IL-6), inflammatory cell counts, and activity of MPO were detected. The expression of TGFβ1/Smad signaling pathway-related proteins was determined by immunohistochemistry. qPCR was used to analyze the mRNA expression levels of inflammatory factors, α-SMA and COL1A1. The expression of COX-2, NF-κB, IκBα, PI3K, and Akt proteins was assessed by Western blot. Results: Histopathological staining of lung tissues showed that DIM administration alleviated the pulmonary inflammation and fibrosis caused by RILI. Moreover, the content of inflammatory factors such as IL-1β and IL-6, the expression of NF-κB pathway-related proteins, and the counts of inflammatory cells were inhibited in lung tissue, indicating that DIM can inhibit the NF-κB pathway to reduce inflammation. In addition, DIM could down-regulate the mRNA levels of α-SMA and COL1A1, and downregulate TGFβ1, Smad3, and p-Smad2/3 in lung tissues. Conclusion: Our study confirms that DIM has the potential to treat RILI in vivo by inhibiting fibrotic and inflammatory responses in lung tissue through the TGFβ/Smad and NF-κB dual pathways, respectively. abstract_id: PUBMED:21272678 Isoforskolin pretreatment attenuates lipopolysaccharide-induced acute lung injury in animal models. Isoforskolin was isolated from Coleus forskohlii native to Yunnan in China. We hypothesize that isoforskolin pretreatment attenuates acute lung injury induced by lipopolysaccharide (endotoxin). Three acute lung injury models were used: the in situ-perfused rat lung and rat and mouse models of endotoxic shock.
Additionally, lipopolysaccharide-stimulated proinflammatory cytokine production was evaluated in human mononuclear leukocytes. In the in situ-perfused rat lungs, pre-perfusion with isoforskolin (100 and 200 μM) and dexamethasone (65 μM, positive control) inhibited lipopolysaccharide (10 mg/L)-induced increases in lung neutrophil adhesion rate, myeloperoxidase activity, lung wet/dry weight ratio, permeability-surface area product value, and tumor necrosis factor (TNF)-α levels. In rats, pretreatments with isoforskolin (5, 10, and 20 mg/kg, i.p.) and dexamethasone (5 mg/kg, i.p.) markedly reduced lipopolysaccharide (6 mg/kg, i.v.)-induced increases in karyocyte and neutrophil counts and protein content in bronchoalveolar lavage fluid, and in plasma myeloperoxidase activity. Lung histopathology showed that morphologic changes induced by lipopolysaccharide were less pronounced in the isoforskolin- and dexamethasone-pretreated rats. In mice, 5 mg/kg isoforskolin and dexamethasone caused 100% and 80% survival, respectively, after administration of lipopolysaccharide (62.5 mg/kg, i.v., 40% survival if untreated). In human mononuclear leukocytes, isoforskolin (50, 100, and 200 μM) and dexamethasone (10 μM) pre-incubation lowered lipopolysaccharide (2 μg/mL)-induced secretion of the cytokines TNF-α, IL-1β, IL-6, and IL-8. In conclusion, pretreatment with isoforskolin attenuates lipopolysaccharide-induced acute lung injury in several models, and it is involved in down-regulation of inflammatory responses and the proinflammatory cytokines TNF-α, IL-1β, IL-6, and IL-8. abstract_id: PUBMED:27605220 Protective Effects of Asiatic Acid Against Spinal Cord Injury-Induced Acute Lung Injury in Rats. The biological effects of asiatic acid (AA) on spinal cord injury (SCI)-induced acute lung injury (ALI) have not been investigated. We aimed to investigate the therapeutic efficacy and molecular mechanisms of AA on SCI-induced ALI. One hundred fifty Sprague-Dawley rats were randomly assigned to five groups: sham, SCI, SCI + dexamethasone (Dex, 2 mg/kg), SCI + AA (30 mg/kg), and SCI + AA (75 mg/kg). The influences of AA on histologic changes, pulmonary edema, neutrophil infiltration and activation, proinflammatory cytokine production, oxidative stress, and Nrf2 and NLRP3 inflammasome protein expression were estimated. AA administration at the 30- and 75-mg/kg doses significantly attenuated lung wet-to-dry weight (W/D) ratio, pulmonary permeability index (PPI), and pulmonary histologic changes. Furthermore, the protective effects of AA might be attributed to the reduction of neutrophil infiltration, myeloperoxidase (MPO), inflammatory cytokines, reactive oxygen species (ROS), and malondialdehyde (MDA), and the increase of superoxide dismutase (SOD) and catalase (CAT). Moreover, AA markedly upregulated Nrf2 levels and downregulated NLRP3 inflammasome protein expression in lung tissues. AA exhibits a protective effect on SCI-induced ALI by alleviating the inflammatory response and by inhibiting NLRP3 inflammasome activation and oxidative stress, with upregulation of Nrf2 protein levels. The use of AA may be a potentially efficient therapeutic strategy for the treatment of SCI-induced ALI. abstract_id: PUBMED:38013140 Short-term administration of Qipian®, a mixed bacterial lysate, inhibits airway inflammation in ovalbumin-induced mouse asthma by modulating cellular, humoral and neurogenic immune responses.
Aims: Qipian® is a commercialized agent composed of extracts of three genera of commensal bacteria, and its mechanism of action on asthma is unclear. This study aimed to examine the impact of Qipian® on airway inflammation and investigate the underlying mechanisms. Materials And Methods: Qipian® or dexamethasone (DEX) was administered before OVA challenge in an ovalbumin-induced asthma mouse model, and then asthmatic symptoms were observed and scored. Samples of lung tissues, blood, and bronchoalveolar lavage fluid (BALF) were collected, and eosinophils (Eos), immunoglobulins (Igs), and type 1 T helper (Th1)/Th2 cell cytokines were measured. Mucus production in the lung was assessed by periodic acid-Schiff (PAS) staining. The effects of Qipian® on dendritic and T regulatory (Treg) cells were investigated using flow cytometry. Key Findings: The short-term administration of Qipian® significantly inhibited the cardinal features of allergic asthma, including an elevated asthmatic behaviour score, airway inflammation and immune response. Histological analysis of the lungs showed that Qipian® attenuated airway inflammatory cell infiltration and mucus hyperproduction. Qipian® corrected the Th1/Th2 imbalance by decreasing interleukin (IL)-4, IL-5, and IL-13 while increasing interferon (IFN)-γ and IL-10. Further investigation revealed that Qipian® treatment induced the upregulation of CD4+CD25+Foxp3+ Treg cells and CD103+ DCs and downregulation of the tachykinins neurokinin A (NKA) and NKB in the lung. Significance: Our findings suggested that short-term treatment with Qipian® could alleviate inflammation in allergic asthma through restoring the Th1/Th2 balance by recruiting Treg cells to airways and inducing the proliferation of CD103+ DCs, which provides a new treatment option for asthma. Answer: Yes, short-term administration of dexamethasone (DEX) can temporarily suppress radiation-induced pro-inflammatory cytokine gene expression in the lung. However, there may be a rebound effect after drug withdrawal, and the drug does little to change the essence and course of the pneumonitic process (PUBMED:11567802). This suggests that while DEX can attenuate the early radiation-induced cytokine response when given immediately after irradiation, it does not significantly alter the second wave of cytokine gene expression that occurs 1 to 2 weeks later, indicating that early cytokine gene induction might not determine subsequent molecular events (PUBMED:11567802).
Instruction: Does streaming affect the cerebral distribution of infraophthalmic intracarotid chemotherapy? Abstracts: abstract_id: PUBMED:12427632 Does streaming affect the cerebral distribution of infraophthalmic intracarotid chemotherapy? Background And Purpose: The development of new non-ocular-toxic drugs has enabled infraophthalmic chemotherapeutic infusion. We assessed whether streaming occurs with infraophthalmic, high cervical internal carotid artery (ICA) delivery of chemotherapeutic agents by means of conventional angiographic catheters. Methods: Six patients with high-grade gliomas treated with monthly carotid intraarterial chemotherapy were studied. Chemotherapy delivery and distribution was modeled by technetium 99m hexylmethyl-propyleneamine oxine (HMPAO), a first-pass agent. Each patient received 0.5 mCi (18.5 MBq) of (99m)Tc-HMPAO in 50-mL of saline intraarterially in the ICA at the C1-C2 level. Injections were given twice, at two different injection rates: 0.08 mL/s at one therapeutic session and 6 mL/s a month later. The slow injection rate modeled the slowest rate used in the delivery of chemotherapy into the ICA. The higher rate was selected to avoid any possibility of uneven mixing, by replacing intracarotid blood completely and by using a turbulent injection rate that destroys laminar flow and intraarterial streaming. Single photon emission CT (SPECT) was performed 2 hours after injection. For each patient, the corresponding SPECT sections at the two injection rates were compared. Results: No differences were noted in (99m)Tc-HMPAO distribution between the two injection rates in any of the patients. However, some of the rapid injection rate SPECT scans showed extension of the (99m)Tc-HMPAO uptake into adjacent watershed territories. Conclusion: There was no evidence, in humans, of substantial streaming during slow infraophthalmic intracarotid injections. Slow rates of infusion are as good as high rates for infraophthalmic intracarotid drug delivery. This is of special importance for drugs that are not tolerated at high injection rates. Moreover, infraophthalmic intracarotid chemotherapeutic infusion does not require special injectors or catheters. abstract_id: PUBMED:8345691 Brain SPECT by intraarterial infusion of 99mTc-HMPAO for assessing the cerebral distribution of carotid artery infusions in patients with brain tumor In order to assess the cerebral distribution of intracarotid chemotherapy, 17 postoperative patients with brain tumor underwent brain SPECT obtained by intraarterial infusion of 18.5 MBq of 99mTc-HMPAO. Injection methods were continuous (5.0 ml/min) or pulsatile infusion with supraor infraophthalmic catheterization. The findings obtained by brain SPECT were frequently different from those of angiography and/or DSA. In supraophthalmic catheterization with continuous infusion, only 2 of 10 studies (20%) had homogeneous distribution and 5 of them (50%) had maldistribution of 99mTc-HMPAO which appears in association with laminar flow effect. The remaining 3 studies showed localized distribution (two: tumor localization, one: healthy brain localization). On the other hand, all of 5 studies with pulsatile infusion had homogeneous distribution of 99mTc-HMPAO. In infraophthalmic catheterization, all but one of 5 studies had homogeneous distribution with continuous infusion. These results suggest that pulsatile infusion may be effective in eliminating maldistribution of 99mTc-HMPAO in supraophthalmic catheterization. 
In conclusion, we are convinced that 99mTc-HMPAO is a useful intraarterial agent for assessing cerebral distribution of intracarotid chemotherapy. abstract_id: PUBMED:1849555 Intravascular streaming during carotid artery infusions. Demonstration in humans and reduction using diastole-phased pulsatile administration. Intra-arterial carotid artery chemotherapy for malignant gliomas is limited by focal injuries to the eye and brain which may be caused by poor mixing of the drug with blood at the infusion site. This inadequate mixing can be eliminated in animal models with diastole-phased pulsatile infusion (DPPI) which creates 1-ml/sec spurts during the slow blood flow phase of diastole. Before treatment with intracarotid cisplatin, 10 patients with malignant gliomas were studied to determine whether intravascular streaming occurs after intracarotid infusion in humans, and if so, if it is reduced with DPPI. Regional cerebral blood flow (rCBF) studies were performed by intravenous injection of H2(15)O and positron emission tomography. This was followed by supra- or infraophthalmic internal carotid artery (ICA) injections of H2(15)O with either continuous infusion or DPPI. Local H2(15)O concentration in the brain was determined and the images of radiotracer distribution in the continuous infusion and DPPI studies were compared to the rCBF images. Intravascular streaming of the infusate was identified by a heterogeneous distribution of the infused H2(15)O in brain compared to rCBF. Extensive and variable intravascular streaming occurred in three patients who received infusions into the supraophthalmic segment of the ICA. Some brain areas received up to 11 times the expected radiotracer delivery, while other regions received as little as one-tenth. This streaming pattern was markedly reduced or eliminated by DPPI. In the five patients who received infraophthalmic infusions, a minimally heterogeneous distribution of the infusate was detected. The authors conclude that extensive intravascular streaming accompanies supraophthalmic ICA infusions in patients. The magnitude of streaming can be substantially reduced or eliminated with DPPI. Those who perform intra-arterial infusion should consider using DPPI to assure uniform drug delivery to brain. abstract_id: PUBMED:34770671 Video Packet Distribution Scheme for Multimedia Streaming Services in VANETs. By leveraging the development of mobile communication technologies and due to the increased capabilities of mobile devices, mobile multimedia services have gained prominence for supporting high-quality video streaming services. In vehicular ad-hoc networks (VANETs), high-quality video streaming services are focused on providing safety and infotainment applications to vehicles on the roads. Video streaming data require elastic and continuous video packet distributions to vehicles to present interactive real-time views of meaningful scenarios on the road. However, the high mobility of vehicles is one of the fundamental and important challenging issues for video streaming services in VANETs. Nevertheless, previous studies neither dealt with suitable data caching for supporting the mobility of vehicles nor provided appropriate seamless packet forwarding for ensuring the quality of service (QoS) and quality of experience (QoE) of real-time video streaming services. 
To address this problem, this paper proposes a video packet distribution scheme named Clone, which integrates vehicle-to-vehicle and vehicle-to-infrastructure communications to disseminate video packets for video streaming services in VANETs. First, an indicator called current network quality information (CNQI) is defined to measure the feature of data forwarding of each node to its neighbor nodes in terms of data delivery ratio and delay. Based on the CNQI value of each node and the trajectory of the destination vehicle, access points called clones are selected to cache video data packets from data sources. Subsequently, packet distribution optimization is conducted to determine the number of video packets to cache in each clone. Finally, data delivery synchronization is established to support seamless streaming data delivery from a clone to the destination vehicle. The experimental results show that the proposed scheme achieves high-quality video streaming services in terms of QoS and QoE compared with existing schemes. abstract_id: PUBMED:34354631 Modulation of Negative Affect Predicts Acceptance of Music Streaming Services, While Personality Does Not. Music streaming services (MSS) offer their users numerous ways of choosing and implementing their individual approaches to music listening. Personality, uses of music, and the acceptance of MSS can be conceptualized as interdependent. This study investigates whether negative affect modulation strategies explain differences in the acceptance of MSS and integrates findings from previous research into a structural equation model. As for measurements, the Big Five Inventory 2, the Inventory for the Assessment of Activation and Arousal modulation through Music, and adapted scales from previous research on the Technology Acceptance Model were used. A convenience sample of 825 participants (24.3 years; 74% females and 89% students) successfully completed an online questionnaire. In total, 89 percent of the sample reported using MSS regularly. The results show that the tendency to modulate negative affect through music is positively influenced by openness and neuroticism. In turn, the tendency to modulate negative affect through music is shown to increase the perceived usefulness of MSS. However, this study failed to replicate the previous findings that openness increases the attitude toward using and that neuroticism decreases the perceived usefulness. This implies that uses of music are more effective than personality traits at predicting the individual acceptance of MSS. However, personality can be viewed as a predictor for uses of music. The interwovenness of stable and situational factors of music choices is supported. MSS seem to assist their users in coping with negative affect in everyday life, increasing wellbeing. MSS should expand their personalization features to optimize user experience with respect to individual uses of music. abstract_id: PUBMED:3668643 Uniformity of intracarotid drug distribution with diastole-phased pulsed infusion. Drug streaming has been implicated in the development of focal necrotic lesions in perfused tissues following intracarotid chemotherapy of brain tumors at low infusion rates. The narrow infusate path characteristic of streaming within laminar blood flow is not observed at high infusion rates such as are typical in contrast injection for angiography. 
By periodically pulsing the infusate at a high rate, the mechanisms of rapid mixing can be exploited while retaining the practicality of low average infusion rates. This in vitro study demonstrates the effects of the pulse-controlling parameters and the catheter characteristics and placement on mixing effectiveness. An internal carotid artery model including eight cerebral branches was infused with dye through various indwelling catheters, and individual branch effluents were collected and analyzed spectrophotometrically for dye concentration. While catheter placement dominates the factors that control infusate distribution, judicious selection of the pulse parameters can alleviate that dependence. A primary advantage is gained by phasing the pulse to occur during that period of the cardiac cycle when the blood flow is lowest at the injection site. The data clearly showed that diastole-phased pulsed infusions are highly effective in producing a uniform infusate distribution at low average infusion rates. abstract_id: PUBMED:5100373 Intracarotid phenoxybenzamine for cerebral arterial spasm. Phenoxybenzamine was injected into the carotid artery of 23 patients after operations on their cerebral arterial aneurysms. Rapid improvement was seen in three cases with pronounced neurological disability. The other 20 were treated prophylactically in an attempt to prevent the onset of spasm. These results would appear to indicate that intracarotid phenoxybenzamine may be of value in the treatment of cerebral arterial spasm, provided that it is given before irreversible infarction of the brain has occurred. Until more is known about its action it would be best to give the drug only after operation. abstract_id: PUBMED:3339101 Intravascular streaming and variable delivery to brain following carotid artery infusions in the Sprague-Dawley rat. Intracarotid artery infusions in animals are commonly performed in studies of the blood-brain barrier and in chemotherapy trials. Implicit in the analysis of these experiments is that the infusate will be distributed to the territory of the internal carotid artery in a manner that is proportional to blood flow. Fifteen Sprague-Dawley rats were studied to determine if poor infusate mixing with blood due to intravascular streaming occurred during intracarotid artery drug infusions and if it could be eliminated with fast retrograde infusion. In three experimental groups, a radiolabeled flow tracer--14C-iodoantipyrine (IAP)--was infused retrograde through the external carotid artery into the common carotid artery at slow, medium, and fast rates (0.45, 1.5, and 5.0 ml/min). In a control group, IAP was injected intravenously (i.v.). Local isotope concentrations in the brain were determined by quantitative autoradiography, and the variability of isotope delivery was assessed in the frontoparietal cortex, temporal cortex, and caudate putamen of all animals. Streaming phenomena were manifest in all selected anatomic areas after the slow and medium rates of intraarterial infusion. After fast intracarotid infusion or i.v. injection, there was uniform distribution of isotope in the same brain regions. abstract_id: PUBMED:3808609 Prevention of ocular toxicity of carmustine (BCNU) with supraophthalmic intracarotid infusion. Intracarotid administration of carmustine (BCNU) is an increasingly accepted therapeutic modality for primary brain tumors. 
Its beneficial effect on prolonging survival, however, is countered by its frequent delayed complication of retinal toxicity and severe visual loss, which reportedly occurs in approximately 70% of the patients. Since this complication probably results from increased flow of the drug into the ophthalmic artery, the authors have attempted to infuse BCNU via a small intracarotid catheter advanced beyond the origin of the ophthalmic artery. In a series of nine patients (average follow-up, 10.7 months), the authors were successful in treating six patients with exclusively supraophthalmic infusions. None of these patients developed any ocular toxicity. Of the remaining three patients, two received one infraophthalmic infusion with no retinal toxicity. The third patient, however, who received three infraophthalmic infusions, developed ipsilateral retinopathy. These results suggest that retinopathic complications of intracarotid BCNU can be avoided if the infusion catheter is advanced beyond the origin of the ophthalmic artery. abstract_id: PUBMED:35947994 Interstitial fluid streaming in deep tissue induced by ultrasound momentum transfer for accelerating nanoagent transport and controlling its distribution. Objective. This study aims to theoretically investigate the dynamics of ultrasound-induced interstitial fluid streaming and tissue recovery after ultrasound exposure for potentially accelerating nanoagent transport and controlling its distribution in tissue. Approach. Starting from fundamental equations, the dynamics of ultrasound-induced interstitial fluid streaming and tissue relaxation after an ultrasound exposure were modeled, derived and simulated. Also, both ultrasound-induced mechanical and thermal effects were considered in the models. Main results. The proposed new mechanism was named squeezing interstitial fluid via transfer of ultrasound momentum (SIF-TUM). It means that an ultrasound beam can squeeze the tissue in a small focal volume from all directions, and generate a macroscopic streaming of interstitial fluid and a compression of the tissue solid matrix. After the ultrasound is turned off, the solid matrix will recover and can generate a backflow. Rather than the ultrasound pressure itself or intensity, the streaming velocity is determined by the dot product of the ultrasound pressure gradient and its conjugate. Tissue and nanoagent properties also affect the streaming and recovery velocities. Significance. The mobility of therapeutic or diagnostic agents, such as drugs, drug carriers, or imaging contrast agents, in the interstitial space of many diseased tissues, such as tumors, is usually extremely low because of the inefficiency of the natural transport mechanisms. Therefore, the interstitial space is one of the major barriers hindering agent delivery. The ability to externally accelerate agent transport and control its distribution is highly desirable. Potentially, SIF-TUM can be a powerful technology to accelerate agent transport in deep tissue and control the distribution if appropriate parameters are selected. Answer: Streaming can affect the cerebral distribution of infraophthalmic intracarotid chemotherapy. However, studies have shown varying results regarding the impact of streaming on the distribution of chemotherapeutic agents. One study demonstrated that there was no evidence of substantial streaming during slow infraophthalmic intracarotid injections in humans.
Slow rates of infusion were found to be as effective as high rates for infraophthalmic intracarotid drug delivery, which is important for drugs that are not tolerated at high injection rates (PUBMED:12427632). Another study suggested that pulsatile infusion may be effective in eliminating maldistribution of 99mTc-HMPAO in supraophthalmic catheterization, indicating that the method of infusion can influence the homogeneity of drug distribution (PUBMED:8345691). In contrast, a different study found that extensive and variable intravascular streaming occurred in patients who received infusions into the supraophthalmic segment of the ICA. This streaming pattern was markedly reduced or eliminated by diastole-phased pulsatile infusion (DPPI), suggesting that DPPI can assure uniform drug delivery to the brain (PUBMED:1849555). Additionally, an in vitro study showed that diastole-phased pulsed infusions are highly effective in producing a uniform infusate distribution at low average infusion rates, indicating that the timing of the pulse during the cardiac cycle is crucial for achieving uniform distribution (PUBMED:3668643). In summary, while streaming can potentially affect the cerebral distribution of infraophthalmic intracarotid chemotherapy, the use of specific infusion techniques, such as slow infusion rates or diastole-phased pulsatile infusion, can mitigate the effects of streaming and ensure more uniform drug distribution.
Instruction: Natural animal shaking: a model for non-accidental head injury in children? Abstracts: abstract_id: PUBMED:25129228 Amplitude-integrated EEG revealed nonconvulsive status epilepticus in children with non-accidental head injury. Objective: We describe the clinical course and amplitude-integrated EEG findings in three children with non-accidental head injury and discuss the importance of continuous aEEG monitoring in infants. Methods: NCSE was defined as a continuous 30-min seizure, or briefer seizures occurring consecutively and comprising at least 30 min of any 1-h period. Non-accidental head injury was diagnosed on the basis of neuroimaging findings such as subdural hemorrhage. Antiepileptic treatment was performed with continuous amplitude-integrated EEG monitoring. Results: The age of the patients ranged from 48 days to nine months. All of them had loss of consciousness and seizures on presentation. Nonconvulsive status epilepticus without clinical symptoms was recognized in all patients. Vigorous antiepileptic treatment against nonconvulsive status epilepticus was given in two patients, whereas nonconvulsive status epilepticus disappeared within one hour without additional treatment in one. Conclusions: Our experience indicates that nonconvulsive status epilepticus was not uncommon in children with non-accidental head injury. Continuous amplitude-integrated EEG monitoring will be one of the useful methods in encephalopathic children in order to estimate seizure burden objectively and to treat seizures appropriately. abstract_id: PUBMED:26088208 Caregivers' voices: The experiences of caregivers of children who sustained serious accidental and non-accidental head injury in early childhood. Head injury is a leading cause of mortality and acquired neurological impairment in children. Head-injured children may have neurobehavioural deficits that persist for years following injury. Head injury can result in significant and persistent caregiver burden, including mental health issues, family stress and disorganisation, and unmet social and healthcare service needs. Few studies have examined the healthcare and social service needs of children and their families following head injury sustained at an early age. This qualitative study aims to describe the experiences of caregivers of children who sustained a serious head injury (particularly non-accidental head injury) before the age of 2 years. Caregivers were interviewed up to 15 years following the initial injury. Semi-structured interviews with 21 caregivers of 15 children (aged 3-15 years at the time of interview) were completed. Thematic analysis of interview data generated three key themes: impact, support and information. The study's findings reveal the broad impact of serious childhood head injury on caregivers, specifically the significant distress and burden brought about through lack of information, challenges in accessing support and inconsistent care. Recommendations for developing a quality 'model of care' and improving ease of access to supports for caregivers are provided. abstract_id: PUBMED:26835337 Non-accidental trauma in pediatric patients: a review of epidemiology, pathophysiology, diagnosis and treatment. Non-accidental trauma (NAT) is a leading cause of childhood traumatic injury and death in the United States. It is estimated that 1,400 children died from maltreatment in the United States in 2002 and abusive head trauma (AHT) accounted for 80% of these deaths.
This review examines the epidemiology and risk factors for NAT as well as the general presentation and required medical workup of abused children. In addition, potential algorithms for recognizing cases of abuse are reviewed as well as outcomes in children with NAT and potential neurosurgical interventions which may be required. Finally, the evidence for seizure prophylaxis in this population is addressed. abstract_id: PUBMED:18259203 Natural animal shaking: a model for non-accidental head injury in children? Background: Non-accidental head injury (NAHI) is a form of child abuse where a perpetrator may violently subject an infant to repeated acceleration-deceleration forces with or without head impact, producing injuries including retinal haemorrhages in most cases. Animal models have included laboratory shaking of mice and rats, but only a small fraction develop retinal haemorrhages presumably due to the small eyes, which would require unattainable force levels to mimic that sustained by the infant eye. Animal models are also problematic due to ethical issues raised by subjecting even anaesthetized animals to abusive injury. Methods: We investigated a naturally occurring event, where three animal victims were shaken by a canine. The eyes were harvested and examined histologically. Results: The victims' eyes showed no haemorrhage or retinoschisis. Conclusions: Our model may not be a complete NAHI mimic. The discrepancies may ensue from anatomical and mechanical differences in the injury mechanism. Other models must be sought to further study this form of abusive eye injury. abstract_id: PUBMED:34137501 Predictive value of non-ocular findings for retinal haemorrhage in children evaluated for non-accidental trauma. Purpose: To determine non-ocular examination findings in non-accidental trauma (NAT) patients that are associated with retinal haemorrhage and warrant urgent examination by an ophthalmologist. Methods: A hospital- and clinic-based retrospective cross-sectional and cohort study of children age 0-12 years who underwent workup for NAT over a span of nine years in a level I trauma and tertiary referral centre. Details of ocular and non-ocular examination and imaging findings at the time of NAT evaluation were collected by chart review. Univariate and logistic regression analysis for association between retinal haemorrhage and non-ocular examination findings was performed. Results: A total of 557 patients with ophthalmology evaluation and 425 without were included in this study. All ages combined, none of the cutaneous signs of trauma or non-skull fractures were associated with retinal haemorrhage on univariate analysis (p > 0.05). By logistic regression, subdural or extra-axial haemorrhage (OR = 16.2; 95% CI [5.11-51.3]), occipital lobe insult (OR = 6.2; 95% CI [1.77-21.6]) and Glasgow coma score (GCS) <15 (OR = 5.8; 95% CI [1.96-17.4]) were significant predictors of retinal haemorrhage. Conclusion: Subdural or undistinguished extra-axial haemorrhage, GCS <15, and occipital lobe insult are risk factors for the presence of retinal haemorrhage in patients with suspected NAT, and their presence warrants urgent dilated fundus examination by an ophthalmologist. The presence of cutaneous trauma or non-skull bone fractures without the aforementioned risk factors does not warrant ophthalmology evaluation. abstract_id: PUBMED:18927597 Rare retinal haemorrhages in translational accidental head trauma in children.
Purpose: The characteristic findings in accidental head injury consist of linear skull fracture, epidural haematoma, localized subdural haematoma, or cortical contusion because of a linear or translational impact force. Retinal haemorrhages have been found, although uncommon, in accidental head trauma. Methods: We performed a retrospective study of 24 consecutive cases of children with severe head injuries caused by falls. Inclusion criteria were skull fractures and/or intracranial haemorrhages documented by computerized tomography. All patients underwent a careful ophthalmic examination including dilated indirect fundoscopy within the first 48 h following admission. Results: No retinal haemorrhages could be found in patients whose accidents were plausible and whose physical and imaging findings were compatible with reported histories. Excessive bilateral retinal haemorrhages were found in only three children with the typical signs of shaken baby syndrome. In eight children, trauma had led to orbital roof fractures. Conclusions: Retinal haemorrhages were not found in any of the patients with accidental trauma despite the severity of their head injuries. Hence, we add more evidence that there are strong differences between the ocular involvement in accidental translational trauma and that in victims of non-accidental trauma. Fall-related injuries carry a very low risk of retinal haemorrhages. abstract_id: PUBMED:27832341 Non-accidental trauma: the role of radiology. Non-accidental trauma is a significant source of morbidity and mortality in the pediatric population. Given the vulnerable state of the child, the radiologist can often provide valuable diagnostic information in the setting of suspected abuse. This review focuses on common findings of abusive trauma and serves as a guide to aid in clinical decision-making for providers of emergency medicine and pediatrics. Amid this discussion is an assessment of modern controversies regarding reported mimicking pathologies, recapitulation of the current state of evidence with respect to radiologic findings of abuse, and examination of the contribution that spine imaging may add to the diagnosis of possible abusive head trauma in the acutely injured child. Recommendations for avoiding pitfalls regarding the dating of intracranial injuries are discussed, and illustrated depictions of perpetrator-induced pathology are provided to aid in the understanding of these injuries. Through the use of the appropriate approach to imaging and evidence-based guidelines regarding radiologic findings, the role of radiology is to provide fundamental clues to diagnose and prevent recurrence of abusive injury in patients who cannot speak for themselves. abstract_id: PUBMED:36303077 Hemophilia and non-accidental head trauma in two siblings: lessons and legal implications. Non-accidental head trauma (NAHT) is a common cause of traumatic brain injury in childhood, a source of profound and disabling neurological sequelae and, in the worst cases, ultimately death. Subdural hematoma (SDH) is the most common intracranial finding in NAHT. On the other hand, congenital bleeding disorders are a minor but significant cause of ICH in the neonate and toddler. Not uncommonly, intracranial bleeding is the first sign of a severe inherited coagulation disorder. In the presence of unexpected intracranial bleeding after minor trauma or without a clear history of the related events, physicians and caregivers may be confronted with the dilemma of possible child abuse.
It must be borne in mind that physical abuse and bleeding disorders can co-exist in the same child. We report here the case of two siblings in whom a diagnosis of hemophilia coexisted with the presumption of non-accidental head trauma. Abuse was inflicted on both children, two years apart. A diagnosis of mild hemophilia was made in the first sibling after the initial NAHT, while inflicted trauma was evident in the second sibling from neuroimaging findings and concomitant lesions. Lessons from this case of co-existing bleeding disorder and inflicted trauma, and the legal implications that follow, are discussed thereafter. The possibility of a bleeding disorder should be considered in all children presenting with unexplained bleeding at a critical site in the setting of suspected physical maltreatment, particularly intracranial hemorrhage (ICH). abstract_id: PUBMED:24035422 Pattern of cerebrospinal immediate early gene c-fos expression in an ovine model of non-accidental head injury. Expression of the immediate early gene, c-fos, was examined in a large animal model of non-accidental head injury ("shaken baby syndrome"). Lambs were used because they have a relatively large gyrencephalic brain and weak neck muscles resembling those of a human infant. Neonatal lambs were manually shaken in a manner similar to that believed to occur with most abused human infants, but there was no head impact. The most striking c-fos expression was in meningothelial cells of the cranial cervical spinal cord and, to a lesser degree, in hemispheric, cerebellar, and brainstem meninges. Vascular endothelial cells also frequently showed c-fos immunopositivity in the meninges and hemispheric white matter. It was hypothesised that this c-fos immunoreactivity was due to mechanical stress induced by shaking, with differential movement of different craniospinal components. abstract_id: PUBMED:31424959 Caring for Children with Non-Accidental Head Injuries: A Case for a Child-Centered Approach. Child abuse remains a significant issue. Non-accidental head injury (NAHI) is a major cause of mortality in young children, with survivors often living with severe developmental and neurological dysfunction. The aim of this hermeneutic phenomenological research study was to examine the lived experiences of nurses who care for children and their families admitted to hospital with a non-accidental head injury. Semi-structured interviews were conducted with six nurses who had at least five years' experience of nursing children with NAHI. Following the interpretive approach described by van Manen, data were analyzed and two essential themes identified. Firstly, nursing children with NAHI is different from the care of children admitted with a similar neurological injury related to accidental injuries or medical conditions. Secondly, when nursing these children, nurses adopt protective qualities, conceptualized in this study as a shield of protection. The findings of this study support the findings of other studies in regard to the emotional labor required of nurses caring for children who have been subject to child abuse and the importance for nurses of maintaining a professional demeanor. The study highlighted the tensions of sustaining a family-centered care approach when caring for a child with NAHI.
It is argued that greater acknowledgment and emotional support for nurses are needed and that a child-centered approach offers a more compatible model of care than Family-Centered Care in the acute phase of caring for children with NAHI. Answer: The abstract from PUBMED:18259203 explores whether natural animal shaking could serve as a model for non-accidental head injury (NAHI) in children. The background of the study acknowledges that NAHI, a form of child abuse, often results in injuries such as retinal hemorrhages due to violent shaking that subjects an infant to repeated acceleration-deceleration forces, with or without head impact. To investigate this, the study looked at a naturally occurring event where three animal victims were shaken by a canine, and their eyes were examined histologically for signs of injury. The results showed that the victims' eyes did not exhibit hemorrhage or retinoschisis, leading to the conclusion that this natural animal shaking model may not be a complete mimic of NAHI in children. The discrepancies between the model and actual NAHI cases could be due to anatomical and mechanical differences in the injury mechanism. The study suggests that other models must be sought to further study this form of abusive eye injury, indicating that natural animal shaking is not an adequate model for non-accidental head injury in children.
Instruction: A comparison of surgical delays in directly admitted versus transferred patients with hip fractures: opportunities for improvement? Abstracts: abstract_id: PUBMED:24461225 A comparison of surgical delays in directly admitted versus transferred patients with hip fractures: opportunities for improvement? Background: The increasing incidence of hip fractures in our aging population challenges orthopedic surgeons and hospital administrators to effectively care for these patients. Many patients present to regional hospitals and are transferred to tertiary care centres for surgical management, resulting in long delays to surgery. Providing timely care may improve outcomes, as delay carries an increased risk of morbidity and mortality. Methods: We retrospectively reviewed the cases of all patients with hip fractures treated in a single Level 1 trauma centre in Canada between 2005 and 2012. We compared quality indicators and outcomes between patients transferred from a peripheral hospital and those directly admitted to the trauma centre. Results: Of the 1191 patients retrospectively reviewed, 890 met our inclusion criteria: 175 who were transferred and 715 admitted directly to the trauma centre. Transfer patients' median delay from admission to operation was 93 hours, whereas nontransfer patients waited 44 hours (p < 0.001). The delay predominantly occurred before transfer, as the patients had to wait for a bed to become available at the trauma centre. The median length of stay in hospital was 20 days for transfer patients compared with 13 days for nontransfer patients (p < 0.001). Regional policy changes enacted in 2011 decreased the median transfer delay from regional hospital to tertiary care centre from 47 to 27 hours (p = 0.005). Conclusion: Policy changes can have a significant impact on patient care. Prioritizing patients and expediting transfer will decrease overall mortality, reduce hospital stay and reduce the cost of hip fracture care. abstract_id: PUBMED:36606266 Improving osteoporosis treatment rates in inpatients admitted with hip fracture: A healthcare improvement initiative in a tertiary referral hospital. Objective: This healthcare improvement initiative was designed to increase inpatient osteoporosis treatment after hip fracture. Methods: A new protocol was developed by Geriatric Medicine and Endocrinology departments at a tertiary care hospital in Sydney. Its aim was to standardize assessment and treatment of osteoporosis in patients admitted with hip fracture. Eligible inpatients would receive intravenous zoledronic acid during their admission. A 6-month sample of hip fracture patients admitted after the protocol's implementation was compared to a group admitted before. Data collected included demographics, biochemistry, treatment rates, adverse effects, and admission survival. Results: There was a considerable increase in osteoporosis treatment after introducing the protocol. Before the protocol's introduction, none of 36 eligible patients received treatment. After the intervention 79% (23 out of 29) of eligible patients were treated. All treated patients had renal function and serum calcium levels checked post-infusion with no adverse outcomes. Eight patients developed flu-like symptoms within 24 h of the infusion. There were no instances of arrhythmias, ocular inflammation, or death. The cost per patient treated was AUD $87.
Conclusion: Adopting a standardized protocol for osteoporosis treatment in patients admitted for hip fracture was effective in improving treatment rates whilst being relatively safe and inexpensive. abstract_id: PUBMED:34513388 Clinical Audit as a Quality Improvement Tool in Measurements of Lying and Standing Blood Pressure for Elderly Patients Admitted With a Hip Fracture. Background: Around one in three adults aged 65 years and over will have a fall at home within a one-year period. Falls are estimated to cost the NHS more than £2.3 billion per year. The National Institute for Health & Care Excellence (NICE) guidelines recommend older people who present for medical attention because of a fall, report recurrent falls in the past year, or demonstrate abnormalities of gait and/or balance should be offered a multifactorial falls risk assessment which includes a cardiovascular examination and review of medications. Orthostatic hypotension (OH) is a common cardiovascular disorder, independently associated with an increased risk of falls in the elderly. Aims & Objectives: This study was carried out to assess improvement in lying and standing blood pressure (LSBP) measurement using clinical audit and staff education. Method: An initial audit of patients over the age of 60, admitted with a hip fracture between the 14th of April and the 25th of May 2020, was conducted to assess measurement and accurate recording of LSBP. This cycle was followed by brainstorming, root cause analysis, teaching sessions for staff, and use of aide-memoires. A second audit cycle was conducted on patients admitted with a hip fracture secondary to a fall between the 10th of August and the 21st of September 2020. Result: Our initial audit results showed 68% of patients who met the criteria in the NICE guidelines on measurement of LSBP were not being assessed for OH. Following interventions, the second audit cycle showed significant improvement in compliance, confirming audits to be a powerful tool in quality improvement programs. abstract_id: PUBMED:31496011 Comparative study of outcomes for elderly hip fractures presenting directly to a referral hospital versus those transferred from peripheral centres. Background: The geography of rural Australia poses a myriad of logistical dilemmas, including the provision of timely access to emergency orthopaedic hip fracture surgery. Current guidelines support surgery within 48 h, and delays to transfer to a referral hospital may result in worse outcomes and increase mortality rates. The aim of this study was to examine the effect of transfer delays on the clinical outcomes of hip fractures in a rural setting. Methods: We retrospectively reviewed 265 hip fracture patients who underwent surgical management between 2013 and 2015 at a rural referral hospital. Factors such as age, time to surgery, delay to surgery, preoperative clinical deterioration, preoperative transthoracic echocardiogram, American Society of Anesthesiologists class and 30-day and 1-year mortality rates were examined. Unadjusted odds ratios were calculated for statistically significant primary and secondary outcomes. Results: The mean delay to transfer was 19.9 h. Patients were 6.76 times more likely to undergo surgery within 48 h if they presented to the referral hospital first. Surgery within 48 h was more likely in those who presented to the referral hospital first, had no preoperative transthoracic echocardiogram and did not experience a preoperative clinical deterioration.
The 30-day mortality rates were significantly higher in those who had surgery after 48 h or underwent a preoperative clinical deterioration. Conclusion: Increased time to hip fracture surgery was associated with increased mortality rates. Transfer delays from a peripheral hospital had a significant bearing on time to surgery. Early transfer to a referral hospital is recommended. abstract_id: PUBMED:24706043 Use of the National Surgical Quality Improvement Program in orthopaedic surgery. Background: The goal of the American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) is to improve patient safety. The database has been used by hospitals across the United States to decrease the rate of adverse events and improve surgical outcomes, including dramatic decreases in 30-day mortality, morbidity, and complication rates. However, only a few orthopaedic surgical studies have employed the ACS NSQIP database, all of which have limited their analysis to either single orthopaedic procedures or reported rates of adverse events without considering the effect of patient characteristics and comorbidities. Question/purposes: Our specific purposes included (1) investigating the most common orthopaedic procedures and 30-day adverse events, (2) analyzing the proportion of adverse events in the top 30 most frequently identified orthopaedic procedures, and (3) identifying patient characteristics and clinical risk factors for adverse events in patients undergoing hip fracture repair. Methods: We used data from the ACS NSQIP database to identify a large prospective cohort of patients undergoing orthopaedic surgery procedures from 2005 to 2011 in more than 400 hospitals around the world. Outcome variables were separated into the following three categories: any complication, minor complication, and major complication. The rate of adverse events for the top 30 orthopaedic procedures was calculated. Bivariate and multivariate analyses were used to determine risk factors for each of the outcome variables for hip fracture repair. Results: Of the 1,979,084 surgical patients identified in the database, 146,774 underwent orthopaedic procedures (7%). Of the 30 most common orthopaedic procedures, the top three were TKA, THA, and knee arthroscopy with meniscectomy, which together comprised 55% of patients (55,575 of 101,862). We identified 5368 complications within the top 30 orthopaedic procedures, representing a 5% complication rate. The minor and major complication rates were 3.1% (n = 3174) and 2.8% (n = 2880), respectively. The most common minor complication identified was urinary tract infection (n = 1534) and the most common major complication identified was death (n = 850). An American Society of Anesthesiologists class of 3 or higher was a consistent risk factor for all three categories of complications in patients undergoing hip fracture repair. Conclusions: The ACS NSQIP database allows for evaluating current trends of adverse events in selected surgical specialties. However, variables specific to orthopaedic surgery, such as open versus closed injury, are needed to improve the quality of the results. abstract_id: PUBMED:38420294 A National Database Retrospective Review of Short-Term Postoperative Mortality in the Geriatric Population: A Comparison Between Emergency Spine Fractures and Hip Fractures. Background: Mortality rates following emergency spine fracture surgery are high, especially in the elderly. 
However, how the postoperative mortality rate following spine fractures compares to other geriatric fractures such as hip fractures remains unclear. Therefore, this retrospective cohort study aimed to compare 30-day mortality rates and risk factors between emergency spine fracture versus hip fracture surgery in the elderly. Methods: The National Surgical Quality Improvement Program (NSQIP) database was queried between 2011 and 2021 for emergency spine fractures and hip fractures in the elderly. Univariate analyses evaluated demographic data, perioperative factors, comorbidities, and 30-day mortality rates as the primary outcomes. A multivariable regression model was then constructed to control for significant baseline and demographic differences and evaluate independent predictors of mortality. Results: A total of 18,287 emergency hip fractures and 192 emergency spine fractures were included in our study. Univariate analysis demonstrated significant differences in female sex, body mass index (BMI), operation time, length of hospital stays, disseminated cancer, and functional dependence between spine and hip fractures. Thirty-day mortality rates were significantly higher in spine versus hip fractures (9.4% vs. 5%). Multivariate regression analysis demonstrated emergent spine fracture surgery, disseminated cancer, functional dependence, and length of stay as independent predictors of mortality in our cohort. Female sex, BMI, and operation time were protective factors for mortality in our cohort. Conclusions: Emergency spine fractures in the elderly represent an independent predictor for 30-day postoperative mortality compared to emergency hip fractures. Disseminated cancer, functional dependence, and length of stay were independent predictors of mortality while female sex, BMI, and operation time were protective factors. These data demonstrate the severity of injury and high rates of mortality that clinicians can use to counsel patients and their families. abstract_id: PUBMED:34026941 Temporal Trends in Hip Fractures: How Has Time-to-Surgery Changed? Background: Surgical fixation of hip fractures within 24-48 hours of hospital presentation is associated with decreased rates of postoperative morbidity and death, and recently, hospitals nationwide have implemented strategies to expedite surgery. Our aim was to describe how time-to-surgery and short-term complication rates have changed using the National Surgical Quality Improvement Program database from 2011 to 2017. Methods: We identified more than 73,000 patients aged ≥65 years who underwent surgical fixation. Poisson regression adjusting for comorbidities, surgery type, type of anesthesia, patient sex, and patient age was performed to quantify annual changes in time-to-surgery. Annual changes in 30-day postoperative complications were analyzed using a generalized linear model with binomial distribution. Results: A significant decrease in time-to-surgery was observed during the study period (mean 30 hours in 2011 versus 26 hours in 2017; P < 0.001). Time-to-surgery decreased by 2% annually during the 7-year period (0.5 hour/year, 95% CI: -35, -23; P < 0.001). The all-cause 30-day complication rate also decreased annually (annual risk difference: -0.35%, 95% CI: -0.50%, -0.20%; P < 0.001). For individual complications, we found significant decreases in deep infection (-0.2%, P=0.002), reintubation (-0.3%, P=0.001), urinary tract infection (-2.5%, P < 0.001), and death (-1.3%, P=0.03).
We found significant but small increases in pulmonary embolism (0.3%, P=0.03) and myocardial infarction (0.1%, P=0.02). Higher rates of complications were associated with increased time-to-surgery (P < 0.001). Conclusion: From 2011 to 2017, time-to-surgery for hip fracture decreased significantly, as did short-term postoperative rates of all-cause complications and death. Longer time-to-surgery was associated with an increased number of complications. abstract_id: PUBMED:33517719 Impact of preoperative echocardiography on surgical delays and outcomes among adults with hip fracture. Aims: Echocardiography is commonly used in hip fracture patients to evaluate perioperative cardiac risk. However, echocardiography that delays surgical repair may be harmful. The objective of this study was to compare surgical wait times, mortality, length of stay (LOS), and healthcare costs for similar hip fracture patients evaluated with and without preoperative echocardiograms. Methods: A population-based, matched cohort study of all hip fracture patients (aged over 45 years) in Ontario, Canada between 2009 and 2014 was conducted. The primary exposure was preoperative echocardiography (occurring between hospital admission and surgery). Mortality rates, surgical wait times, postoperative LOS, and medical costs (expressed as 2013$ CAN) up to one year postoperatively were assessed after propensity-score matching. Results: A total of 2,354 of 42,230 (5.6%) eligible hip fracture patients received a preoperative echocardiogram during the study period. Echocardiography ordering practices varied among hospitals, ranging from 0% to 23.0% of hip fracture patients at different hospital sites. After successfully matching 2,298 (97.6%) patients, echocardiography was associated with significantly increased risks of mortality at 90 days (20.1% vs 16.8%; p = 0.004) and one year (32.9% vs 27.8%; p < 0.001), but not at 30 days (11.4% vs 9.8%; p = 0.084). Patients with echocardiography also had a mean increased delay from presentation to surgery (68.80 hours (SD 44.23) vs 39.69 hours (SD 27.09); p < 0.001), total LOS (19.49 days (SD 25.39) vs 15.94 days (SD 22.48); p < 0.001), and total healthcare costs at one year ($51,714.69 (SD 54,675.28) vs $41,861.47 (SD 50,854.12); p < 0.001). Conclusion: Preoperative echocardiography for hip fracture patients is associated with increased postoperative mortality at 90 days and one year but not at 30 days. Preoperative echocardiography is also associated with increased surgical delay, postoperative LOS, and total healthcare costs at one year. Echocardiography should be considered an urgent test when ordered to prevent additional surgical delay. abstract_id: PUBMED:26260772 The effect of surgical timing on the outcome of patients with neck of femur fracture. Aims: To investigate the effect of surgical timing (in hours versus after hours and weekdays versus weekends) on the outcome of patients with neck of femur fracture. Methods: Patients who were admitted to a single tertiary referral hospital for surgical management of femoral neck fractures over a continuous period from 1/11/2002 to 12/7/2012 were identified from medical records and the operating theatre database. Results: A consecutive series of 2334 patients were included in the study. Of the patients who underwent surgery during the weekday and during usual hours, 18% (207/1135) experienced an adverse event, compared to 16% (193/1199) outside of these times.
The difference between the two groups was not significant (p = 0.17). The same conclusion was made for the comparison between those who had surgery during the week and those who had surgery on the weekend (17%, 267/1546 and 17%, 133/788, respectively, p > 0.05). The proportion of patients who experienced an adverse event was significantly higher among those who underwent surgery during hours than among those undergoing surgery out of hours (18%, 327/1789 and 13%, 73/545, respectively, p = 0.0081). When adjusted for age, ASA score and pre-operative stay, there was no statistical difference between these sub-groups. Conclusions: There was no difference in the rates of adverse events between patients who had surgery during hours and weekdays and those who had surgery after hours or on weekends. With careful patient selection and appropriate hospital staff, resources and theatre access, surgery after hours and on weekends may be safely considered to prevent a delay in surgical treatment for patients with neck of femur fracture. abstract_id: PUBMED:29504869 An interdisciplinary approach to improve surgical antimicrobial prophylaxis. Purpose The purpose of this paper is to improve surgical antimicrobial prophylaxis (SAP) prescribing in orthopaedic surgery using the model for improvement framework. Design/methodology/approach Orthopaedic patients receiving joint replacements, hip fracture repairs or open-reduction internal-fixation procedures were included. Antimicrobial(s), dose, time of administration and duration of SAP were evaluated for appropriateness based on the local SAP guidelines. After baseline data collection, a driver diagram was constructed with interventions devised for plan-do-study-act cycles. Data were fed back weekly using a point prevalence design (PPD). Interventions included SAP guideline changes, reminders and tools to support key messages. Findings SAP in 168 orthopaedic surgeries from 15 June 2016 to 31 January 2017 was studied. Prescribing appropriateness improved from 20 to 78 per cent. Junior doctor changeover necessitated additional education and reminders. Practical implications Due to constant staff changeover, continuous data collection, communication, education and reminders are essential to ensure continuous compliance with clinical guidance. Patients with hip fractures are difficult to weigh, requiring weight estimation for weight-based antimicrobial dosing. Unintended consequences of interventions included the necessity to change pre-operative workflow to accommodate reconstitution time of additional antimicrobials and inadvertent continuation of new antimicrobials post-operatively. Originality/value Rather than perform the traditional retrospective focused audit, we established a prospective, continuous, interventional quality improvement (QI) project focusing on internal processes within the control of the project team with rapid cyclical changes and interventions. The weekly PPD was pragmatic and enabled the QI project to be sustained with no additional resources. Answer: The comparison of surgical delays in directly admitted versus transferred patients with hip fractures reveals significant differences in the time to surgery and associated outcomes. Patients directly admitted to a Level 1 trauma centre experienced a median delay from admission to operation of 44 hours, while patients transferred from peripheral hospitals faced a median delay of 93 hours (PUBMED:24461225).
The delay for transferred patients was primarily due to the wait for a bed to become available at the trauma centre. Consequently, transfer patients had a longer median length of stay in hospital (20 days) compared to nontransfer patients (13 days) (PUBMED:24461225). Policy changes aimed at prioritizing patients and expediting transfer have been shown to decrease the median transfer delay from regional hospital to tertiary care centre from 47 to 27 hours (PUBMED:24461225). These changes can significantly impact patient care by reducing overall mortality, hospital stay, and the cost of hip fracture care. In rural Australia, patients transferred from peripheral hospitals to a referral hospital for hip fracture surgery were more likely to experience delays, with a mean delay to transfer of 19.9 hours. Patients presenting directly to the referral hospital were 6.76 times more likely to undergo surgery within 48 hours. Delays in surgery were associated with higher 30-day mortality rates (PUBMED:31496011). Overall, the evidence suggests that directly admitted patients receive more timely surgery, which is associated with better outcomes. Transferred patients face significant delays, which can lead to increased morbidity and mortality. Therefore, there are opportunities for improvement by streamlining the transfer process, prioritizing hip fracture patients, and implementing policy changes to reduce surgical delays for transferred patients.
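To make the delay statistics above concrete, the following minimal sketch shows how an unadjusted odds ratio, such as the 6.76 reported in PUBMED:31496011, is typically computed from a 2x2 table, together with a Wald 95% confidence interval. The counts below are hypothetical (the abstract reports only the final ratio), so this illustrates the standard formula rather than the study's actual data.

import math

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    # a: direct admissions operated within 48 h   b: direct admissions operated later
    # c: transfers operated within 48 h           d: transfers operated later
    or_estimate = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR), Wald method
    lower = math.exp(math.log(or_estimate) - z * se_log_or)
    upper = math.exp(math.log(or_estimate) + z * se_log_or)
    return or_estimate, (lower, upper)

# Hypothetical counts chosen to reproduce an OR close to the published 6.76:
or_estimate, ci = odds_ratio_with_ci(120, 40, 30, 68)
print(f"Unadjusted OR = {or_estimate:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")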
Instruction: Anticoagulant drugs in noncompaction: a mandatory therapy? Abstracts: abstract_id: PUBMED:37267694 Potential anticoagulant of traditional Chinese medicine and novel targets for anticoagulant drugs. Background: Anticoagulants are the main drugs used for the prevention and treatment of thrombosis. Currently, anticoagulant drugs are primarily multitarget heparin drugs, single-target FXa inhibitors and FIIa inhibitors. In addition, some traditional Chinese drugs also have anticoagulant effects, but they are not the main direction of treatment at present. However, the anticoagulant drugs mentioned above all share a common side effect: bleeding. Many other anticoagulation targets are under investigation. As the coagulation mechanism is explored further, determining new anticoagulant targets and enabling traditional Chinese medicine to play an anticoagulant role have become a new field of exploration. Purpose: The purpose of the study was to summarize the recent research progress on coagulation mechanisms, new anticoagulant targets and traditional Chinese medicine. Methods: A comprehensive literature search was conducted using four electronic databases, including PubMed, Embase, CNKI, Wanfang database and ClinicalTrials.gov, from the inception of the study to 28 Feb 2023. Key words used in the literature search were "anticoagulation", "anticoagulant targets", "new targets", "coagulation mechanisms", "potential anticoagulant", "herb medicine", "botanical medicine", "Chinese medicine", "traditional Chinese medicine", "blood coagulation factor", and keywords were linked with AND/OR. Recent findings on coagulation mechanisms, potential anticoagulant targets and traditional Chinese medicine were studied. Results: The active components extracted from the Chinese medicinal herbs, Salvia miltiorrhiza, Chuanxiong rhizoma, safflower and Panax notoginseng have obvious anticoagulant effects and can be used as potential anticoagulant drugs, but the risk of bleeding is unclear. TF/FVIIa, FVIII, FIX, FXI, FXII, and FXIII have all been evaluated as targets in animal studies or clinical trials. FIX and FXI are the most studied anticoagulant targets, but FXI inhibitors have shown stronger advantages. Conclusion: This review of potential anticoagulants provides a comprehensive resource. Literature analysis suggests that FXI inhibitors can be used as potential anticoagulant candidates. In addition, we should not ignore the anticoagulant effect of traditional Chinese medicine, and look forward to more research and the emergence of new drugs. abstract_id: PUBMED:31453781 Interaction between Traditional Chinese Medicine and Anticoagulant/Antiplatelet Drugs. Background: Traditional Chinese medicine (TCM) has been used for medical purposes since ancient times and has gradually gained recognition worldwide. Nowadays, patients with thrombus who are prescribed anticoagulant/antiplatelet drugs often prefer taking TCM as well. However, an increasing number of studies on herb-drug interactions have been reported. Nevertheless, findings are frequently conflicting and vague. In this review, we discuss the herb-drug interactions between TCM and anticoagulant/antiplatelet drugs to provide guidance on concomitant ingestion with anticoagulant/antiplatelet drugs. Methods: We undertook a structured search of medicine and drug databases for peer-reviewed literature using focused review questions. Results: Danshen, Ginkgo, Ginger, H.
Perforatum, SMY and Puerarin injection had directional regulation effects on the efficacy of anticoagulant drugs by altering the CYPs, pharmacokinetic indices and hemorheological parameters. H. Perforatum inhibited the efficacy of Clopidogrel by enhancing the CYP3A4 activity and Ginkgo increased the efficacy of Ticlopidine. Additionally, Renshen, the formulae except SMY and injections except Puerarin injection could increase or decrease the efficacy of anticoagulant/antiplatelet drugs via regulating the CYPs, platelet aggregation, hemorheological parameters and others. Conclusion: Some cases have reported that TCMs may increase the bleeding risk or have no effect on coagulation when anticoagulant/antiplatelet drugs are concurrently used. However, pharmacokinetic studies have presented either consistent or slightly varying results. So it is difficult to ascertain whether the concurrent use of TCM may increase or reduce the pharmacologic effects of anticoagulant/antiplatelet drugs or cause adverse reactions. Therefore, herb-drug interactions of TCM and anticoagulant/antiplatelet drugs should be further explored and defined. abstract_id: PUBMED:37365035 Advances in anticoagulant therapy for cirrhosis combined with atrial fibrillation. Relevant research in recent years has demonstrated that the atrial fibrillation occurrence rate is significantly higher in patients with cirrhosis. The most common indication for long-term anticoagulant therapy is chronic atrial fibrillation. The use of anticoagulant therapy greatly reduces the incidence rate of ischemic stroke. Patients with cirrhosis combined with atrial fibrillation have an elevated risk of bleeding and embolism during anticoagulant therapy due to cirrhotic coagulopathy. At the same time, currently approved anticoagulant drugs undergo varying degrees of hepatic metabolism and elimination in such patients, thereby increasing the complexity of anticoagulant therapy. This article summarizes the clinical studies on the risks and benefits of anticoagulant therapy in order to provide a reference for patients with cirrhosis combined with atrial fibrillation. abstract_id: PUBMED:15383475 New anticoagulant drugs: the Seventh ACCP Conference on Antithrombotic and Thrombolytic Therapy. This article about new anticoagulant drugs is part of the Seventh American College of Chest Physicians Conference on Antithrombotic and Thrombolytic Therapy: Evidence-Based Guidelines. The limitations of existing oral and parenteral anticoagulant agents have prompted a search for novel agents. Focusing on new anticoagulant drugs for the prevention and treatment of arterial and venous thrombosis, this article (1) reviews arterial and venous thrombogenesis, (2) discusses the regulation of coagulation, (3) describes the pathways for testing new anticoagulant agents, (4) describes new anticoagulant strategies focusing primarily on agents in phase II or III clinical testing, and (5) provides clinical perspective as to which of these new strategies is most likely to succeed. abstract_id: PUBMED:29707358 Recent progress and market analysis of anticoagulant drugs. This review describes the epidemiology of thromboembolic disease in China and abroad, evaluates trends in the development of anticoagulant drugs, and analyzes the market situation based on large amounts of accumulated data. Specifically, we describe advances in clinical application of anticoagulants and analyze the most commonly used anticoagulants in the market systematically.
abstract_id: PUBMED:29662414 Anticoagulant therapy and pregnancy. Low dose aspirin therapy is one of the anticoagulant treatments used during pregnancy. Anticoagulant agents may be useful for several disorders, such as recurrent miscarriage, pre-eclampsia, fetal growth restriction and infertility. However, it is unclear whether anticoagulant therapy can increase the live birth rate in all of these cases. Recent data suggest that a low-dose aspirin and heparin combination therapy is effective in the prevention of recurrent pregnancy loss in women with antiphospholipid syndrome. Thrombogenic diseases, for example, protein C deficiency, protein S deficiency, factor XII deficiency and hyperhomocysteinemia, may cause pregnancy loss. The etiology of recurrent miscarriage is often unclear and may be multifactorial, with much controversy regarding diagnosis and treatment. Although 70% of recurrent pregnancy losses are unexplained, anticoagulant therapy is effective in maintaining pregnancy without antiphospholipid antibody syndrome. We conclude that a low-dose aspirin and heparin combination therapy can be useful for unexplained cases of recurrent pregnancy loss without antiphospholipid antibody syndrome. (Reprod Med Biol 2008; 7: 1-10). abstract_id: PUBMED:27245406 Bilateral renal infarction after discontinuation of anticoagulant therapy. Acute renal infarction is an uncommon and often underdiagnosed condition, mostly because of misleading symptoms. Accurate data regarding clinical presentation, laboratory tests, diagnostics and treatment are lacking. Detection is often delayed or missed because of the non-specific clinical presentation. The mechanisms of acute renal infarction are various, mainly embolic or thrombotic. Abdominal CT scan remains the most valuable exam to confirm the diagnosis. Therapeutic guidelines for the treatment of renal embolism have not been well established. The standard treatment strategy includes anticoagulation with or without thrombolysis. Despite the uncertainty regarding management, the renal outcome remains favorable. Some patients do develop some degree of renal insufficiency during the acute episode. We report here the case of a 73-year-old woman with bilateral acute renal infarction after discontinuation of anticoagulant therapy. abstract_id: PUBMED:35462339 Use of anticoagulant drugs in patients with atrial fibrillation. Does adherence to therapy have a prognostic impact? Anticoagulant therapy represents a pivotal element that strongly influences the thromboembolic risk of non-valvular atrial fibrillation (NVAF) subjects. The main purpose of this review was to identify issues and suggest strategies to improve adherence to treatment with oral anticoagulants (OACs), which is the most important predictor of NVAF outcome. The advantages, efficacy, and impact of these drugs on patients' prognosis were revealed in important clinical trials on large cohorts of patients, and these drugs are often prescribed nowadays. A real-life data registry, the Global Anticoagulant Registry in the Field-Atrial Fibrillation (GARFIELD-AF), analyzed the profile and outcome of patients diagnosed with NVAF receiving oral antithrombotic treatment. The observations gathered in the registry were crucial for identifying relevant elements that clinicians must improve, such as adherence strategies and predisposing factors that correlated with stroke. Adherence to OACs in AF patients is essential from the viewpoint of clinical efficacy and safety.
Major adverse events and negative outcomes are correlated with weak anticoagulation control caused by an ineffective treatment adherence strategy. Solving the issue of oral anticoagulation adherence is possible using new technologies, but future directions should be explored. Mobile phone applications centered on patients' needs, and telemedicine programs that evaluate patients' evolution, detect adverse reactions or events, and encourage adequate management of the event without interruption of OACs, represent perspectives with a major impact on treatment adherence. abstract_id: PUBMED:36809516 Ureteroscopic and flexible ureteroscopic lithotripsy: continuation or discontinuation of anticoagulant or antiplatelet drugs? A Chinese survey among urologists. Purpose: To evaluate the management decisions regarding antithrombotic drugs made by different urologists before ureteroscopic lithotripsy and flexible ureteroscopy in stone patients undergoing active anticoagulant or antiplatelet therapy. Methods: A survey was distributed to 613 urologists in China, which included personal work information and views on the management of anticoagulant (AC) or antiplatelet (AP) drugs during the perioperative period of ureteroscopic lithotripsy (URL) and flexible ureteroscopy (fURS). Results: 20.5% of urologists believed that AP drugs could be continued and 14.7% believed that AC drugs could be continued. 26.1% of the urologists who participated in more than 100 ureteroscopic lithotripsy or flexible ureteroscopy surgeries each year believed that AP drugs could be continued and 19.1% believed that AC drugs could be continued, compared with 13.6% (P < 0.01) and 9.2% (P < 0.01) of the urologists who performed less than 100 surgeries. Among the urologists with more than 20 cases undergoing active AC or AP therapy per year, 25.9% thought that AP drugs could be continued and 19.7% thought that AC drugs could be continued, compared with 17.1% (P = 0.008) and 11.5% (P = 0.005) of the urologists with less than 20 cases. Conclusion: The decision on the continuation of AC or AP drugs before ureteroscopic and flexible ureteroscopic lithotripsy should be individualized. Experience with URL and fURS surgeries and with managing patients under AC or AP therapy is the influencing factor. abstract_id: PUBMED:33486650 Anticoagulant therapy management of venous thromboembolism recurrence occurring during anticoagulant therapy: a descriptive study. Limited evidence exists regarding management of recurrent venous thromboembolism (VTE) that occurs during anticoagulant therapy. We aimed to describe patient characteristics, drug therapy management, and outcomes of patients with VTE recurrence during anticoagulant therapy. We identified 30 relevant episodes of VTE recurrence. Mean age was 48.9 (15.9) years, 56.7% were male, and 93.3% were White. Common VTE risk factors included cancer (46.6%), recent surgery (33.3%), and prolonged immobility (30.0%). At the time of recurrent VTE, 40.0% were receiving enoxaparin, 30.0% warfarin, and 23.3% direct oral anticoagulants. Potential causes for VTE recurrence included indwelling venous catheters (40.0%), cancer (33.3%), subtherapeutic anticoagulation (26.7%), and nonadherence (23.3%). Recurrent VTE management strategies included switching anticoagulants (26.7%), increasing anticoagulant dose (20.0%), temporarily adding enoxaparin or unfractionated heparin to oral anticoagulation therapy (13.3%), or no change in anticoagulation therapy (43.3%).
Only four adverse 90-day outcomes occurred among 17 patients who received anticoagulant therapy changes in response to VTE recurrence, whereas eight adverse outcomes occurred in the 13 patients who received no change in anticoagulation therapy in response to a recurrent VTE episode (P value 0.04). Regardless of the potential etiology of recurrent VTE during anticoagulant therapy, switching anticoagulants, temporarily adding injectable anticoagulants, or increasing anticoagulant intensity appears preferable to continuing current anticoagulant therapy unchanged. Answer: The provided abstracts do not directly address the use of anticoagulant drugs specifically for noncompaction cardiomyopathy. Noncompaction cardiomyopathy, also known as left ventricular noncompaction (LVNC), is a rare form of cardiomyopathy in which the left ventricle of the heart has trabeculations and deep intertrabecular recesses due to the failure of myocardial compaction during embryogenesis. Patients with LVNC are at an increased risk of thromboembolic events, which may necessitate the use of anticoagulant therapy. However, the abstracts do discuss the use of anticoagulant drugs in various other contexts, such as atrial fibrillation, venous thromboembolism, and during pregnancy (PUBMED:37365035, PUBMED:35462339, PUBMED:33486650, PUBMED:29662414). For instance, anticoagulant therapy is crucial in patients with atrial fibrillation to reduce the risk of ischemic stroke (PUBMED:37365035, PUBMED:35462339). The management of anticoagulant therapy is complex in patients with cirrhosis combined with atrial fibrillation due to an elevated risk of bleeding and embolism (PUBMED:37365035). The abstracts also highlight the potential for traditional Chinese medicine to have anticoagulant effects (PUBMED:37267694, PUBMED:31453781), and the importance of managing anticoagulant therapy carefully due to the risk of bleeding (PUBMED:37267694). Additionally, the management of anticoagulant or antiplatelet drugs during surgical procedures is discussed, emphasizing the need for individualized decision-making (PUBMED:36809516). In conclusion, while the abstracts do not provide specific information on anticoagulant therapy for noncompaction cardiomyopathy, they do underscore the importance of anticoagulant drugs in preventing thromboembolic events in various clinical scenarios. Given the thromboembolic risk associated with LVNC, anticoagulant therapy may be considered necessary in certain cases, but this would typically be determined on an individual basis by a healthcare provider, taking into account the patient's overall clinical picture and risk factors.
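Although none of the abstracts describes a specific risk score, decisions about anticoagulation in atrial fibrillation (one of the contexts discussed above) are conventionally guided by the CHA2DS2-VASc score. The sketch below is an illustrative implementation of that widely used score, not something drawn from the abstracts, and the example patient is hypothetical.

def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia, vascular_disease, female):
    # CHA2DS2-VASc stroke-risk score for non-valvular atrial fibrillation.
    score = 0
    score += 1 if chf else 0                 # C: congestive heart failure / LV dysfunction
    score += 1 if hypertension else 0        # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2/A: age bands
    score += 1 if diabetes else 0            # D: diabetes mellitus
    score += 2 if stroke_tia else 0          # S2: prior stroke / TIA / thromboembolism
    score += 1 if vascular_disease else 0    # V: MI, peripheral artery disease, aortic plaque
    score += 1 if female else 0              # Sc: sex category (female)
    return score

# Hypothetical 68-year-old woman with hypertension and diabetes -> score of 4
print(cha2ds2_vasc(chf=False, hypertension=True, age=68, diabetes=True,
                   stroke_tia=False, vascular_disease=False, female=True))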
Instruction: Impaired coronary flow and left ventricular dysfunction after mechanical recanalization in acute myocardial infarction: role of neurohumoral activation? Abstracts: abstract_id: PUBMED:12200640 Impaired coronary flow and left ventricular dysfunction after mechanical recanalization in acute myocardial infarction: role of neurohumoral activation? Background: Reopening of the infarct-related coronary artery is the treatment of choice in the clinical setting of acute myocardial infarction. Nevertheless, the removal of the total occlusion obtained either by thrombolysis or by primary angioplasty is followed by the ischemia/reperfusion sequelae. One of many proposed mechanisms playing a role in ischemia/reperfusion damage is a persistent increase in vasoconstrictor tone, which reduces cardiac function and impairs myocardial blood flow during primary percutaneous coronary intervention in acute myocardial infarction (PAMI). Methods: To investigate early neurohumoral changes during PAMI we enrolled 18 patients, who were compared with 13 patients with stable angina undergoing elective PTCA. To evaluate angiotensin II (AngII), endothelin-1 (ET-1), vasopressin (AVP), norepinephrine (NE), troponin T (TNT), creatine phosphokinase (CPK) and its isoenzyme MB (CPK-MB), we collected blood from the pulmonary artery before and immediately after the infarct-related artery (IRA; TIMI 0 → 2-3) or culprit lesion revascularization. Hemodynamic and angiographic LV-function parameters were compared to biochemical data. Corrected TIMI-frame count (CTFC) was used as an index of coronary blood flow and correlated to the biochemical measurements. Results: CTFC in the IRA correlated inversely (p = 0.03; r = -0.51) with left ventricular ejection fraction measured after 10 days, and positively (p = 0.03; r = 0.54) with the maximal amount of LDH released after onset of AMI. There was an abrupt and long-lasting rise in ET-1 (+65%; p < 0.001) and an instant short-lasting increase in AVP (+37%; p < 0.05), whereas NE concentrations were elevated prior to PAMI and remained elevated during reperfusion. Correlations with CTFC were found for ET-1 (p = 0.01; r = 0.61) and NE (p = 0.01; r = 0.58) during reperfusion. The extent of left ventricular dysfunction correlated with the concentrations of AVP and NE during reperfusion. Conclusions: There is evidence for a distinct pattern of neurohumoral activation during early reperfusion in acute myocardial infarction. In particular, we documented substantial increases in AVP and ET-1. Left ventricular wall-stress appears to be involved in the release of AVP. Elevated levels of ET-1 and NE are associated with impaired angiographic reperfusion and increased myocardial damage after mechanical recanalization. abstract_id: PUBMED:25739451 Left Ventricular Twist Mechanics Are Impaired in Patients with Coronary Slow Flow. Purpose: Coronary slow flow (CSF) is a rare condition described as the delayed angiographic passage of a contrast agent in the absence of stenosis in epicardial coronary arteries. Left ventricular (LV) systolic and diastolic dysfunctions have been described in the presence of CSF. However, the effect of CSF on LV twist functions has not been assessed. We aimed to evaluate the effects of CSF on LV twist mechanics using speckle tracking echocardiography (STE). Methods: Twenty CSF patients (24-60 years) were enrolled according to the exclusion criteria. Twenty subjects with similar demographic characteristics and normal coronary arteries were the controls.
Participants were evaluated with conventional echocardiography and STE. Results: The general characteristics of the two groups were similar. LV twist, LV torsion, and apical rotation were impaired in the CSF group (P = 0.015, P = 0.012, and P < 0.001, respectively). Time to peak twist (TPT) and time to peak untwisting (TPU) were prolonged in CSF patients (P < 0.001 and P < 0.001, respectively). In the CSF group, rotation-deformation delay was shortened (P < 0.001) and global longitudinal strain (GLS) was lower (P < 0.001). The thrombolysis in myocardial infarction (TIMI) frame count was negatively correlated with LV twist, LV torsion and apical rotation (P = 0.002, r = -0.624; P = 0.002, r = -0.624; and P = 0.002, r = -0.632, respectively). Conclusions: We demonstrated that LV twist mechanics are impaired in CSF patients. Worse LV twist parameters were associated with greater TIMI frame count. abstract_id: PUBMED:8800019 Differential characterization of human coronary collateral blood flow velocity. The functional importance and protective nature of the coronary collateral circulation have been well established. There are few data, however, regarding the phasic nature and absolute velocities of collateral flow in patients. The aim of this study was to characterize and quantify ipsilateral coronary collateral blood flow velocity in patients during coronary angioplasty. Coronary collateral flow velocity was measured in 49 patients during coronary angioplasty. Angiographic collateral filling was categorized by the Rentrop grading scale (0 to 3) and by anatomic pathway (epicardial, intramyocardial, or unknown [acutely recruited]). Collateral blood flow velocity was measured with a Doppler-tipped guide wire placed distal to the balloon occlusion in the collateralized vessel. Collateral flow velocity was characterized as predominantly systolic or diastolic, and phasic flow patterns were defined as biphasic (both systolic and diastolic), monophasic (only systolic or diastolic), or bidirectional (antegrade and retrograde velocity). Twenty-three (47%) patients had biphasic flow; 17 (35%) patients had monophasic flow; and 9 (18%) patients had bidirectional flow. Thirty-six (73%) of 49 patients had predominantly systolic flow signals. Epicardial collateral pathways had the highest total flow velocity integral, at 15.0 ± 7.0 (vs intramyocardial [8.4 ± 5.7] and acutely recruitable [5.4 ± 2.1]; p < 0.05). There were no differences in flow velocity integrals among the Rentrop angiographic grades of collateral filling. These data establish three patterns of coronary collateral blood flow and demonstrate that the majority of collateral flow in the ipsilateral receiving vessel occurs during systole. The measurement of coronary collateral flow velocity provides a unique means to study the effects of pharmacologic or mechanical interventions on human collateral blood flow.
Although the treatment of heart failure has traditionally focused on drugs to improve ventricular function, there is increasing evidence that therapeutic modulation of neurohumoral activation is a key to successful treatment of heart failure. For example, there is mounting evidence that angiotensin converting enzyme inhibitors (the unquestioned cornerstone for treatment of heart failure), beta receptor blockers, digitalis, and endurance exercise training exert their benefit in heart failure in large part through neurohumoral modulation. This observation--discussed in this brief review--highlights the concept that compensatory neurohumoral activation to decreased cardiac function may itself contribute to the development of heart failure and its poor prognosis. abstract_id: PUBMED:1684082 Activation of neurohumoral systems following acute myocardial infarction. Previous studies have indicated that patients with an acute myocardial infarction have marked activation of all neurohumoral systems on admission to the hospital. This activation begins to subside within the first 72 hours so that by 7-10 days, all plasma neurohormones have returned to normal. The only documented exceptions were found to occur in patients with left ventricular dysfunction and overt heart failure, where both plasma renin activity and atrial natriuretic peptide were increased, and in patients with left ventricular dysfunction but no overt heart failure, where only atrial natriuretic peptide was increased. Although these studies suggest that neurohumoral activation rarely occurs at the time of hospital discharge, they were small and may have missed an important subgroup of patients with persistent neurohumoral activation. In the Survival and Ventricular Enlargement (SAVE) study, 522 patients had plasma neurohumoral levels measured at a mean of 12 days postinfarction. All SAVE patients had left ventricular dysfunction (left ventricular ejection fraction less than or equal to 40%), but no overt heart failure. In this group of patients, all neurohumoral levels (plasma renin activity, norepinephrine, arginine vasopressin, and atrial natriuretic peptide) were found to be increased compared with age-matched control subjects. These results indicate that, in fact, a subgroup of patients without overt heart failure has persistent neurohumoral activation at the time of hospital discharge postinfarction, and that this activation involves several neurohumoral systems. Since patients with persistent neurohumoral activation postinfarction are likely those most at risk of developing complications and the ones most likely to benefit from pharmacologic interventions blunting the effects of neurohumoral activation, measurement of predischarge neurohumoral levels may be useful. abstract_id: PUBMED:8101532 Activation of neurohumoral systems in postinfarction left ventricular dysfunction. Objectives: This study was conducted to evaluate the degree of neurohumoral activation around the time of hospital discharge after myocardial infarction. Background: Because pharmacologic interventions that block the effects of neurohumoral activation improve the prognosis after infarction, we hypothesized that widespread neurohumoral activation persists in some patients until at least the time of hospital discharge and that the determinants of activation vary from one system to another. 
Methods: Five hundred nineteen patients in the Survival and Ventricular Enlargement Study (SAVE) had plasma neurohormones measured before randomization at a mean of 12 days after infarction. All patients had left ventricular dysfunction (left ventricular ejection fraction ≤ 40%) but no overt heart failure. Results: Although all neurohormones except epinephrine were increased compared with values in age-matched control subjects, plasma norepinephrine (301 ± 193 vs. 222 ± 87 pg/ml, p < 0.001), renin activity (3.0 ± 3.7 vs. 1.2 ± 1.2 ng/ml per h, p < 0.001), arginine vasopressin (1.9 ± 6.9 vs. 0.7 ± 0.3 pg/ml, p < 0.001) and atrial natriuretic peptide (75 ± 75 vs. 21 ± 9 pg/ml, p < 0.001) values ranged from normal to very high, indicating a wide spectrum of neurohumoral activation. Activation of one system did not correlate with activation of another. The clinical and laboratory variables most closely associated with neurohumoral activation were Killip class, left ventricular ejection fraction, age and use of diuretic drugs. The association between neurohumoral activation and clinical and laboratory variables varied from one neurohormone to another. Conclusions: Neurohumoral activation occurs in a significant proportion of patients at the time of hospital discharge after infarction. Which neurohormone is activated and which clinical and laboratory variables determine this activation vary from one neurohormone to another. abstract_id: PUBMED:25475386 Assessment of risk factors and left ventricular function in patients with slow coronary flow. Slow coronary flow (SCF) is characterized by delayed distal vessel opacification in the absence of significant epicardial coronary disease. Life-threatening arrhythmias and sudden cardiac death can occur; however, the pathological mechanism and influence on left ventricular function remain undetermined. We aimed to assess the risk factors and left ventricular (LV) function in SCF and evaluate the relationships between thrombolysis in myocardial infarction frame count (TFC) and the number of involved coronary arteries with LV function in patients with SCF. We included 124 patients who underwent coronary angiography because of symptoms of angina; 71 patients with angiographically proven SCF and 53 cases with normal coronary flow pattern. SCF was diagnosed as TFC > 27 in at least one coronary artery. Complete blood count and biochemical parameters were compared between the two groups. Conventional echocardiography and tissue Doppler imaging were used to assess LV systolic and diastolic function. Platelet aggregation rate induced by ADP was an independent predictor of SCF and positively correlated with coronary artery mean TFC (mTFC) (r = 0.514, P < 0.001) and the number of coronary arteries with SCF (r = 0.628, P < 0.001). Early diastolic mitral inflow velocity (E) (0.66 ± 0.15 vs. 0.74 ± 0.17, P = 0.008), ratio of early to late diastolic mitral inflow velocity (E/A) (0.95 ± 0.29 vs. 1.15 ± 0.35, P = 0.002), global myocardial peak early diastolic velocity (gVe) (4.41 ± 1.25 vs. 4.96 ± 1.45, P = 0.037), and ratio of global myocardial peak early to late diastolic velocity (gVe/gVa: 1.09 ± 0.45 vs. 1.36 ± 0.58, P = 0.006) were decreased in patients with SCF compared with controls. gVe (3 vs. 0 branches, 4.08 ± 1.14 vs. 4.97 ± 1.45, respectively, P = 0.008) deteriorated significantly in patients with SCF involving three coronary arteries.
mTFC negatively correlated with E and E/A (r = -0.22, P = 0.02; r = -0.20, P = 0.04, respectively). The number of coronary arteries with SCF negatively correlated with E, E/A, gVe and gVe/gVa (r = -0.23, P = 0.02; r = -0.25, P = 0.009; r = -0.25, P = 0.008; r = -0.21, P = 0.03, respectively). Platelet aggregation rate induced by ADP was an independent predictor of SCF and positively correlated with coronary artery TFC and the number of affected coronary arteries. Left ventricular global and regional diastolic function was impaired in SCF patients. Furthermore, the number of coronary arteries involved rather than coronary artery TFC determined the severity of left ventricular dysfunction in patients with SCF. abstract_id: PUBMED:18320249 Alterations in vasomotor control of coronary resistance vessels in remodelled myocardium of swine with a recent myocardial infarction. The mechanism underlying the progressive deterioration of left ventricular (LV) dysfunction after myocardial infarction (MI) towards overt heart failure remains incompletely understood, but may involve impairments in coronary blood flow regulation within remodelled myocardium leading to intermittent myocardial ischemia. Blood flow to the remodelled myocardium is hampered as the coronary vasculature does not grow commensurate with the increase in LV mass and because extravascular compression of the coronary vasculature is increased. In addition to these factors, an increase in coronary vasomotor tone, secondary to neurohumoral activation and endothelial dysfunction, could also contribute to the impaired myocardial oxygen supply. Consequently, we explored, in a series of studies, the alterations in regulation of coronary resistance vessel tone in remodelled myocardium of swine with a 2 to 3-week-old MI. These studies indicate that myocardial oxygen balance is perturbed in remodelled myocardium, thereby forcing the myocardium to increase its oxygen extraction. These perturbations do not appear to be the result of blunted beta-adrenergic or endothelial NO-mediated coronary vasodilator influences, and are opposed by an increased vasodilator influence through opening of K(ATP) channels. Unexpectedly, we observed that despite increased circulating levels of noradrenaline, angiotensin II and endothelin-1, alpha-adrenergic tone remained negligible, while the coronary vasoconstrictor influences of endogenous endothelin and angiotensin II were virtually abolished. We conclude that, early after MI, perturbations in myocardial oxygen balance are observed in remodelled myocardium. However, adaptive alterations in coronary resistance vessel control, consisting of increased vasodilator influences in conjunction with blunted vasoconstrictor influences, act to minimize the impairments of myocardial oxygen balance. abstract_id: PUBMED:11917193 The prevalence of impaired left ventricular diastolic filling is related to the extent of coronary atherosclerosis in patients with stable coronary artery disease. Aims: The relation between abnormal left ventricular (LV) diastolic filling and the extent of coronary atherosclerosis per se has not been described. We aimed to investigate the prevalence of impaired LV diastolic filling in patients with stable coronary artery disease (CAD) and its relationship to the number and location of coronary lesions visualized at coronary angiography. 
Methods And Results: In 170 consecutive patients with stable CAD and an abnormal coronary angiogram we assessed LV diastolic filling by Doppler evaluation of the transmitral early to atrial peak flow velocity (E/A) and the systolic to diastolic ratio of the pulmonary venous peak inflow to the left atrium (S/D). Abnormal diastolic filling was defined as E/A ≤0.75, or E/A >1.0 combined with S/D ≤1.0, and was present in 41% of the patients. In patients with one-, two- and three-vessel disease the prevalence of impaired diastolic filling was 27, 30 and 49%, respectively (P = 0.026). In multiple logistic regression analysis diastolic filling was independently correlated with the number of stenotic coronary vessel areas. Conclusion: In patients with stable angiographically verified CAD, the prevalence of impaired diastolic filling was 41%. The prevalence increased with an increasing number of stenotic coronary artery areas independent of other variables tested, including prior myocardial infarction, LV systolic function and mitral regurgitation.
abstract_id: PUBMED:27086730 Layer-specific analysis of left ventricular myocardial contractility in patients with coronary slow-flow phenomenon. Purpose: Coronary slow-flow phenomenon (CSFP) is an angiographic diagnosis characterized by delayed coronary opacification in the absence of significant epicardial coronary stenosis. We previously reported impaired left ventricular (LV) function in patients with CSFP. However, the LV wall comprises three myocardial layers whose contractility can be differently affected. Therefore, we evaluated layer-specific (endocardial, mid-myocardial, and epicardial) LV myocardial contractility and assessed its relationships with the number of affected coronary arteries and their flow. Methods: We studied 60 patients with CSFP and 46 controls. CSFP was diagnosed by thrombolysis in myocardial infarction frame count. Echocardiography was performed within 72 hours after the coronary angiography. LV layer-specific longitudinal strain (LS) was measured using two-dimensional speckle-tracking echocardiography. Results And Conclusions: The LS of all three myocardial layers was lower in patients with CSFP than in controls. In patients with CSFP, LS of all three myocardial layers and transmural LS gradient correlated negatively with the number of affected coronary arteries and the mean thrombolysis in myocardial infarction frame count. © 2016 Wiley Periodicals, Inc. J Clin Ultrasound 44:429-436, 2016.
Answer: The role of neurohumoral activation in impaired coronary flow and left ventricular dysfunction after mechanical recanalization in acute myocardial infarction (AMI) is supported by evidence of distinct patterns of neurohumoral activation during early reperfusion. In a study investigating early neurohumoral changes during primary percutaneous coronary intervention (PAMI), substantial increases in arginine vasopressin (AVP) and endothelin-1 (ET-1) were documented. Elevated levels of ET-1 and norepinephrine (NE) were associated with impaired angiographic reperfusion and increased myocardial damage after mechanical recanalization (PUBMED:12200640). The study found that the corrected TIMI-frame count (CTFC) in the infarct-related artery correlated inversely with left ventricular ejection fraction measured after 10 days, and positively with the maximal amount of lactate dehydrogenase (LDH) released after the onset of AMI.
This suggests that neurohumoral activation, characterized by increased levels of AVP, ET-1, and NE, may contribute to the reduction of cardiac function and impairment of myocardial blood flow during reperfusion in AMI. The role of neurohumoral activation in heart failure is further supported by the observation that neurohumoral activation contributes to the transition from ventricular dysfunction to clinical heart failure and is an independent predictor of poor prognosis in heart failure. Therapeutic modulation of neurohumoral activation, such as with angiotensin-converting enzyme inhibitors, beta receptor blockers, digitalis, and endurance exercise training, is key to the successful treatment of heart failure (PUBMED:9550589). Moreover, persistent neurohumoral activation post-infarction, involving several neurohumoral systems, has been observed in patients with left ventricular dysfunction but no overt heart failure. This subgroup of patients is likely at higher risk of developing complications and may benefit from pharmacologic interventions that blunt the effects of neurohumoral activation (PUBMED:8101532). In conclusion, neurohumoral activation plays a significant role in impaired coronary flow and left ventricular dysfunction following mechanical recanalization in AMI. The activation of neurohumoral systems, including AVP, ET-1, and NE, is associated with reduced cardiac function and impaired myocardial blood flow, which may contribute to the ischemia/reperfusion sequelae observed after PAMI.
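Both the slow-flow abstracts above and this answer lean on the TIMI frame count: a TFC above 27 in any artery defined slow coronary flow, and the answer cites the corrected TIMI frame count (CTFC) in the infarct-related artery. The sketch below shows the arithmetic of that index; it is illustrative only, and the normalization to a 30 frames/s acquisition and the division of LAD counts by 1.7 are assumptions taken from the standard TIMI frame-count method rather than from these abstracts.

# Minimal sketch of a (corrected) TIMI frame count calculation.
# Assumptions not taken from the abstracts: counts are normalized to a
# 30 frames/s cine acquisition, and LAD counts are divided by 1.7 as in
# the standard TIMI frame-count method. Names are illustrative.

SCF_THRESHOLD = 27  # the abstracts define slow flow as TFC > 27 in any artery

def corrected_tfc(frame_count: float, artery: str, fps: float = 30.0) -> float:
    """Normalize a raw frame count to 30 fps, then apply the LAD correction."""
    normalized = frame_count * (30.0 / fps)
    if artery.upper() == "LAD":
        normalized /= 1.7  # the LAD is longer, so its raw count runs higher
    return normalized

def has_slow_flow(tfc_by_artery: dict) -> bool:
    """Slow coronary flow if any single artery exceeds the threshold."""
    return any(tfc > SCF_THRESHOLD for tfc in tfc_by_artery.values())

counts = {
    "LAD": corrected_tfc(52, "LAD"),  # 52 / 1.7 is roughly 30.6
    "LCX": corrected_tfc(24, "LCX"),
    "RCA": corrected_tfc(22, "RCA"),
}
print(counts, has_slow_flow(counts))  # flags slow flow via the LAD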
Instruction: Does epidural increase the incidence of cesarean delivery or instrumental labor in Saudi populations?
Abstracts:
abstract_id: PUBMED:18044297 Does epidural increase the incidence of cesarean delivery or instrumental labor in Saudi populations? Background: This is a retrospective review of the medical records of 861 patients admitted for vaginal delivery. Patients were randomized to either epidural analgesia or other analgesic methods of pain relief for labor pains. The primary purpose of this study was to evaluate the effect of epidural analgesia on the rate of cesarean section delivery. Methods: 861 patients were admitted to Saad Specialist Hospital for vaginal delivery between May 1 and September 30, 2006. Patients were divided into Nulliparous (334 patients) and Multiparous (527 patients) populations. Each population was then divided into two groups, epidural and non-epidural. Epidural analgesia was initiated by a bolus of bupivacaine 0.25% (6 to 10 ml) plus fentanyl (50 to 100 microg), followed by bupivacaine 0.125% plus fentanyl (1 to 2 microg/ml) at a rate of 6 to 12 ml/h. Non-epidural analgesia was initiated by one or a mixture of IM meperidine 50 to 100 mg, promethazine hydrochloride 25 mg, or Entonox inhalers. Results: In the Nulliparous population a total of 57 patients requested epidural (the epidural gp), while 277 patients received other analgesic methods (the non-epidural gp). There was no difference in the rate of cesarean section deliveries between the two analgesia groups (12 patients of 57 in the epidural group (21.1%), versus 61 patients of 277 (22%) in the non-epidural gp). In the Multiparous population, a total of 49 patients requested epidural analgesia (epidural gp), while 478 patients received other analgesic methods (the non-epidural gp). There was no difference in the rate of cesarean section deliveries between the two analgesia groups (5 of 49 patients (10.2%) in the epidural group compared to 39 of 478 patients (8.2%) in the non-epidural gp). Significantly more patients in the epidural group had forceps or vacuum assisted deliveries compared to the other analgesia group (the non-epidural gp). This was evident in both the Nulliparous population (7 of 57 patients in the epidural gp (12.3%) compared to 13 of 277 patients (4.7%) in the non-epidural gp, p<0.05), and in the Multiparous population (4 of 49 patients (8.2%) in the epidural gp versus only 6 of 478 patients (1.3%) in the non-epidural gp, p<0.005). Patients in the epidural gp had significantly lower pain scores compared to the other analgesia gp. Nulliparous reported lower pain scores during the first stage (epidural 2 ± 2 vs non-epidural 5 ± 4, p<0.0001) and second stage of labor (epidural 3 ± 3 vs non-epidural 6 ± 3, p<0.0001). Multiparous scores were (epidural 2 ± 2 vs non-epidural 4 ± 3, p<0.0001) in the first stage and (epidural 3 ± 3 vs non-epidural 5 ± 4, p = 0.0007) in the second stage. Higher satisfaction rates were reported in epidural groups versus the non-epidural groups. 80% of Nulliparous women who received epidural rated their satisfaction as excellent or good versus 45% in the non-epidural group (p<0.001). In the Multiparous population 77% in the epidural versus 43% in the non-epidural groups (p<0.001) reported excellent to good satisfaction rates.
Conclusion: Epidural analgesia is an effective method of pain relief during labor compared to the other analgesic methods of labor pain relief, and it does not increase the incidence of cesarean section deliveries.
abstract_id: PUBMED:1763342 Does epidural anesthesia affect the course of labor and delivery? The progress of labor ideally leads to a normal, spontaneous, vaginal delivery. Many factors (maternal, fetal, obstetrical and anesthetic) may collectively contribute to delaying or precluding an ideal outcome. Adverse outcomes of labor and delivery must be clearly defined in terms of maternal, fetal, or neonatal morbidity or mortality, and should be distinguished from abnormal data (such as fetal heart rate, duration of labor, and incidence of low forceps deliveries) that are only indirectly related to outcome. Epidural anesthesia is not a singular phenomenon. Variations and subtle nuances in the administration of epidural anesthesia create a vast array of potential anesthetic regimens. To discover the anesthetic methods that provide excellent analgesia with no adverse effects on the course of labor is an ideal goal that has not yet been achieved. To approach this goal, the answers to several questions are needed. Are study patient groups adequately matched for risk factors? Are obstetrical practices well-defined with respect to elective versus indicated use of instrumental or cesarean delivery methods? Do the risks of instrumental delivery outweigh the benefit of perineal analgesia associated with epidural anesthesia? Is it possible to provide optimal analgesia throughout stage two labor without tending to increase the risk of instrumental delivery? Can delayed pushing during stage two labor increase the probability of spontaneous vaginal delivery with continuous epidural anesthesia? Well-controlled clinical trials and refinements in both anesthetic and obstetric techniques are necessary to resolve such issues so that every parturient (and fetus) may undergo a safe and painless delivery.
abstract_id: PUBMED:25592169 Adjustment for compliance behavior in trials of epidural analgesia in labor using instrumental variable meta-analysis. Objectives: Intention-to-treat (ITT) analysis of randomized controlled trials (RCTs) may cause bias when compliance is poor. Noncompliance describes failure to comply with allocation in the intervention arm, and contamination describes uptake of the intervention in the control arm. Instrumental variable (IV) analysis can be applied in addition to the primary ITT analysis to estimate the causal effect adjusted for noncompliance and contamination, assuming that noncompliers would have had the same treatment benefit as compliers. We aimed to compare ITT and IV meta-analysis of the association between epidural analgesia in labor and cesarean section. Study Design And Setting: The study was restricted to 27 trials in a Cochrane Systematic Review. The association between epidural analgesia in labor and cesarean section was calculated using ITT and IV analyses. Pooled risk ratios (RRs) were calculated using fixed-effects meta-analysis. Results: In 18 trials with compliance data, noncompliance was 23% and contamination was 27%. In 10 trials with outcome data stratified by compliance, the pooled RR for cesarean section following epidural analgesia was 1.37 [95% confidence interval (CI): 1.00, 1.89; P = 0.049] using IV compared with 1.19 (95% CI: 0.93, 1.51; P = 0.16) using ITT.
Conclusion: ITT meta-analysis underestimates the effect of receiving epidural analgesia in labor on cesarean section compared with IV meta-analysis.
abstract_id: PUBMED:10725574 Adverse effects of epidural analgesia in labor. Objective: To examine the influence of epidural analgesia on labor and delivery in nulliparous and multiparous women. Design: Data were collected on 847 consecutive parturients with singleton pregnancy and vertex presentation (384 nulliparous and 463 multiparous). The obstetrical and labor characteristics, including maternal age, parity, gestational age, previous cesarean section, instrumental delivery, mode and timing of analgesia, mode of delivery, and indications for cesarean section or instrumental delivery, were analyzed comparing patients who received epidural analgesia with women who received systemic analgesia. Results: Epidural analgesia was administered in 233 nulliparous and 141 multiparous women. A stepwise logistic regression analysis revealed that epidural analgesia independently affected the rate of non-spontaneous delivery and the duration of the second stage of labor in nulliparous (P=0.0017 and P=0.0036, respectively) and multiparous (P=0.001 and P=0.0081, respectively) women. Epidural analgesia independently affected the duration of labor only in nulliparous women (P=0.0001). Conclusion: Women should be informed that prolongation of labor and increase in nonspontaneous deliveries should be expected when choosing epidural analgesia in labor.
abstract_id: PUBMED:9835854 Epidural analgesia during labor. Epidural analgesia is a commonly employed technique of providing pain relief during labor. The number of parturients given intrapartum epidural analgesia is reported to be over 50 percent at many institutions in the United States. The procedure has few contraindications, the primary ones being patient refusal, maternal hemorrhage and coagulopathy. Induction of epidural analgesia in early labor remains controversial. However, many physicians induce analgesia as soon as the diagnosis of active labor has been established and the patient has requested pain relief. The most common complications occurring with epidural analgesia are maternal hypotension and postdural puncture headache. Retrospective studies have demonstrated an association between epidural analgesia and increases in duration of labor, instrumental vaginal delivery and cesarean section for labor. However, several recent prospective studies have concluded that epidural analgesia does not adversely affect the progress of labor or increase the rate of cesarean section. These remain controversial issues among practicing physicians.
abstract_id: PUBMED:9661559 Epidural labor analgesia and the incidence of cesarean delivery for dystocia. Unlabelled: We performed this retrospective study to examine the changes in cesarean delivery rates associated with the establishment of a labor epidural service. In April 1993, St. Louis Regional Medical Center established an on-demand labor epidural service. We obtained demographic data for all patients and reviewed the operative records of all patients undergoing cesarean section who delivered 12 mo before and 16 mo after the start of the labor epidural service. We compared labor epidural rates and total and nulliparous dystocia cesarean delivery rates before and after the epidural service started and among patients who did and did not receive labor epidural analgesia when it was available.
Included were 3195 patients who delivered before and 3733 patients who delivered after epidural analgesia became available. Labor epidural rates were 1.2% vs 29.4% for the Before group versus the After group (P < 0.001). Total (9.1% vs 9.7%) and nulliparous dystocia (5.7% vs 6.4%) cesarean delivery rates did not significantly change with the availability of epidural analgesia. However, the total (11.6% vs 8.8%; P = 0.009) and dystocia (8.0% vs 1.0%; P = 0.001) cesarean delivery rates were higher among patients who received epidural analgesia when it was available. We conclude that epidural labor analgesia is associated with, but does not cause, cesarean delivery for dystocia. Implications: Increased epidural analgesia use did not change the overall dystocia cesarean delivery rate, although dystocia was more common among women who chose epidural analgesia. Consequently, limiting epidural availability will not affect cesarean delivery rates. The evidence does not support advising patients that epidural labor analgesia increases the risk of cesarean delivery.
abstract_id: PUBMED:11483916 Does epidural analgesia prolong labor and increase risk of cesarean delivery? A natural experiment. Objective: More than 50% of pregnant women in the United States are using epidural analgesia for labor pain. However, whether epidural analgesia prolongs labor and increases the risk of cesarean delivery remains controversial. Study Design: We examined this question in a community-based, tertiary military medical center where the rate of continuous epidural analgesia in labor increased from 1% to 84% in a 1-year period while other conditions remained unchanged: a natural experiment. We systematically selected 507 and 581 singleton, nulliparous, term pregnancies with spontaneous onset of labor and vertex presentation from the respective periods before and after epidural analgesia became available on request during labor. We compared duration of labor, rate of cesarean delivery, instrumental delivery, and oxytocin use between these two groups. Results: Despite a rapid and dramatic increase in epidural analgesia during labor (from 1% to 84% in 1 year), rates of cesarean delivery overall and for dystocia remained the same (for overall cesarean delivery: adjusted relative risk, 0.8; 95% confidence interval, 0.6-1.2; for dystocia: adjusted relative risk, 1.0; 95% confidence interval, 0.7-1.6). Overall instrumental delivery did not increase (adjusted relative risk, 1.0; 95% confidence interval, 0.8-1.4), nor did the duration of the first stage and the active phase of labor (multivariate analysis; P >.1). However, the second stage of labor was significantly longer by about 25 minutes (P <.001). Conclusion: Epidural analgesia during labor does not increase the risk of cesarean delivery, nor does it necessarily increase oxytocin use or instrumental delivery caused by dystocia. The duration of the active phase of labor appears unchanged, but the second stage of labor is likely prolonged. (Am J Obstet Gynecol 2001;185:128-34).
abstract_id: PUBMED:33615965 Fetal head malposition and epidural analgesia in labor: a case-control study. Background: Fetal head malposition in labor leads to prolonged labor, cesarean delivery and increased perinatal morbidity. Epidural analgesia has been associated with fetal head malposition, but it remains unknown if this relation is causal.
Objective: To compare the incidence of fetal malposition during labor and maternal/fetal outcomes between women who received epidural analgesia and those who did not use the analgesic method. Study Design: Case-control study including 500 women with a single fetus in vertex position who gave birth at term at the Policlinic Hospital of Modena between May 2019 and July 2019. Two hundred and fifty women belonged to the epidural analgesia (EA) group and 250 to the control group. Results: Posterior occiput positions occurred 4 times more frequently in the EA group than in the control group (8.8% vs 2.2%, p = .004). Cesarean sections were significantly higher in the EA group (11.6% vs 1.6%, p < .0000), as was the need for augmentation with oxytocin (20% vs 8%, p = .0001), compared to the control group, in which spontaneous delivery prevailed instead. Women with epidural had labors that lasted on average 7.0 h against the 3.30 h of controls (p < .0000). The length of the 2nd stage of labor was 55 vs 30 min (p = .009), respectively. There were no differences in blood loss and Apgar score between groups. Early breastfeeding was significantly higher among controls (82% vs 92.8%, p = .0004). Conclusions: Women receiving epidural analgesia in labor have a higher rate of fetal malposition, prolonged labors, and more cesarean sections than controls. However, further studies are required to confirm a causal association between EA and fetal head malposition.
abstract_id: PUBMED:15200182 Obstetric epidural analgesia: relationship between obstetric variables and the course of labor. Objectives: To analyze the relationship between epidural analgesia and diverse obstetric and fetal variables as well as the impact of epidural analgesia on the rates of instrumental and cesarean delivery. Patients And Methods: Observational study of women who gave birth at Fundación Hospital Alcorcón over a period of 3 years. All the women were offered obstetric epidural analgesia based on 0.0625% bupivacaine plus 2 microg/mL of fentanyl. The following data were recorded: age, nulliparity (yes/no), administration of epidural analgesia (yes/no), induction of labor (yes/no), stimulation of uterine activity with oxytocin (yes/no), type of delivery, fetal weight, duration of dilation, duration of expulsion, and cause of cesarean. Results: The records of 4364 women were gathered. The percentages of inductions, nulliparas, oxytocin stimulation, and fetal weight greater than 4 kg and less than 2.5 kg were higher among women taking epidural analgesia. The age of women who received epidurals was significantly lower. The durations of dilation and expulsion were longer among women receiving epidural analgesia, and epidural analgesia was associated with greater risk of instrumental and cesarean deliveries. The significant increase in administration of epidural blocks over the 3-year period of the study was not accompanied by an increased rate of instrumentally assisted deliveries or cesareans. Conclusions: It is difficult to evaluate the real influence of epidural analgesia on certain aspects of labor and its evolution. The strength of the association between epidural analgesia and greater risk of increased rates of instrumental and cesarean deliveries may be influenced by factors not considered in the present study.
abstract_id: PUBMED:9127155 Effect of epidural analgesia on duration and outcome of induced labor.
Objective: To evaluate the effect of epidural block on the duration of labor and maternal and fetal outcome in induced-labor patients. Method: 210 women admitted for induction of labor at a level III university hospital, during the year 1991, were studied. Of these, 112 were given an epidural block and 98 served as controls. Outcome measures studied were: induction time and time in labor, c-section and instrumental delivery rates, intra-partum and post-partum complications, and Apgar scores at 1 and 5 min. Result: Multiple regression analysis revealed that epidural analgesia, before and after adjustment for confounding factors, significantly prolonged labor time (P < 0.0001) and was associated with an increase in instrumental delivery rates (P < 0.04). A significant reduction in intra-partum complication rate (FHR changes, meconium) was observed (OR = 0.32; P < 0.03), while c-section rate and Apgar scores were not found to be influenced by epidural analgesia. Conclusion: Epidural anesthesia significantly prolongs labor time in induced patients. While instrumental delivery was more prevalent in these parturients, the c-section rate was not increased and intra-partum complications were significantly reduced in these patients.
Answer: The retrospective review of medical records of 861 patients admitted for vaginal delivery at Saad Specialist Hospital in Saudi Arabia found that epidural analgesia did not increase the incidence of cesarean section deliveries in either nulliparous or multiparous populations. In the nulliparous population, 21.1% of patients in the epidural group had cesarean sections compared to 22% in the non-epidural group. Similarly, in the multiparous population, 10.2% of patients in the epidural group had cesarean sections compared to 8.2% in the non-epidural group. However, the study did find that significantly more patients in the epidural group had forceps or vacuum-assisted deliveries compared to the non-epidural group, which was evident in both nulliparous and multiparous populations. Despite this, patients in the epidural group reported significantly lower pain scores and higher satisfaction rates compared to the non-epidural group (PUBMED:18044297). Therefore, based on the findings from this study, epidural analgesia is an effective method of pain relief during labor and does not increase the incidence of cesarean section deliveries in the Saudi population, although it may be associated with a higher rate of instrumental labor.
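The headline comparisons in the Saudi study are unadjusted proportions, and the odds ratios they imply can be recomputed directly from the reported counts. Below is a minimal sketch using the standard log odds ratio with a normal approximation for the confidence interval; the function and variable names are illustrative and not from the paper.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI for a 2x2 table.

    a/b = events/non-events in the exposed group,
    c/d = events/non-events in the unexposed group.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Nulliparous cesarean sections: 12/57 epidural vs 61/277 non-epidural
print(odds_ratio_ci(12, 57 - 12, 61, 277 - 61))
# Nulliparous instrumental deliveries: 7/57 epidural vs 13/277 non-epidural
print(odds_ratio_ci(7, 57 - 7, 13, 277 - 13))

Run as written, the cesarean OR is roughly 0.94 with a confidence interval spanning 1 (no detectable effect), while the instrumental-delivery OR is roughly 2.8 with an interval excluding 1, consistent with the p-values the abstract reports.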
Instruction: Can mammography screening explain the race difference in stage at diagnosis of breast cancer?
Abstracts:
abstract_id: PUBMED:7697601 Can mammography screening explain the race difference in stage at diagnosis of breast cancer? Background: A race difference in the stage at diagnosis of breast cancer is well established: African American women are less likely than white women to be diagnosed at a localized stage. The purpose of this study was to determine the extent to which the observed race (black/white) difference in stage at diagnosis of breast cancer could be accounted for by race differences in the mammography screening history. Methods: This was a population-based, retrospective study of 145 African American and 177 white women with newly diagnosed breast cancer in Connecticut between January 1987 and March 1989. Cases were ascertained through active surveillance of 22 Connecticut hospitals. Results: Black women were diagnosed more commonly with later stage cancer (TNM stage ≥II) (age-adjusted odds ratio [OR] = 2.01, 95% confidence interval [CI] 1.24-3.24) than were white women. Blacks were also more likely than whites to report that they had not received a mammogram in the 3 years before development of symptoms or diagnosis (OR = 2.05, 95% CI 1.26-3.35); this association was not altered substantially with adjustment for socioeconomic status. In race-specific analyses, mammography was protective against later stage diagnosis in white women, but not in black women. With adjustment for mammography screening, the OR for the race-stage association was reduced only minimally, and race remained a significant predictor of stage at diagnosis. Conclusions: In these population-based data, history of mammography screening was not an important explanatory variable in the race-stage association. Specifically, history of mammographic screening accounted for less than 10% of the observed black/white difference in stage at diagnosis of breast cancer.
abstract_id: PUBMED:21246529 Comorbidities and mammography use interact to explain racial/ethnic disparities in breast cancer stage at diagnosis. Background: Interactions with comorbidity burden and comorbidity-related care have not been examined as potential explanations for racial/ethnic disparities in advanced-stage breast cancer at diagnosis. Methods: The authors used linked Surveillance, Epidemiology, and End Results-Medicare data to determine whether comorbidity burden and comorbidity-related care are associated with stage at diagnosis, whether these associations are mediated by mammography use, and whether they explain racial/ethnic disparities. Stage at diagnosis and mammography use were analyzed in multivariate regression models, adjusting for comorbidity burden and comorbidity-race interactions among 118,742 women diagnosed with breast cancer during 1993 to 2005. Results: Mammography utilization was higher among women with ≥3 stable comorbidities than among those without comorbidities. Advanced stage at diagnosis was associated with black race (odds ratio [OR], 1.8; 95% confidence interval [CI], 1.6-1.8), Hispanic ethnicity (OR, 1.3; 95% CI, 1.2-1.5), unstable comorbidity, and age ≥80 years.
Mammography was protective in all racial/ethnic groups, but neither mammography use (OR, 0.3; 95% CI, 0.3-0.3 and OR, 0.2; 95% CI, 0.2-0.2 for women with 1 and ≥2 prior mammograms, respectively) nor overall physician service use (OR, 0.7; 95% CI, 0.7-0.8 for women with ≥16 visits) explained the association between race/ethnicity and stage at diagnosis. The black/white OR fell to 1.2 (95% CI, 0.9-1.5) among women with multiple stable comorbidities who received ≥2 screening mammograms, and 1.0 (95% CI, 0.8-1.3) among mammography users with unstable comorbidities. Conclusions: Comorbidity burden was associated with regular mammography and earlier stage at diagnosis. Racial/ethnic disparities in late stage disease were reduced among women who received both regular mammograms and comorbidity-related care.
abstract_id: PUBMED:19524392 Mammography capacity impact on screening rates and breast cancer stage at diagnosis. Background: Mammography capacity in the U.S. reportedly is adequate, but has not been examined in nonmetropolitan areas. This study examined the relationships between in-county mammography facilities and rates of mammography screening and late-stage diagnosis of breast cancers. Methods: The association between a mammography facility in the county of residence (2002-2004) and the odds of screening within 2 years was examined (in 2007) among Texas women aged >40 years who responded to the 2004 Behavioral Risk Factor Surveillance System survey, using multivariate logistic regression to control for age, race, ethnicity, education, income, self-reported health, insurance, and usual source of care. Similarly, the association between an in-county mammography facility and the odds of diagnosis with locally advanced or disseminated disease was examined among Texas women aged >40 years who developed breast cancer in 2004. Results: Half of the 254 counties in Texas had no mammography facility. After controlling for confounding factors, an in-county facility was associated with significantly higher odds of screening (OR=3.27; p=0.03) and lower odds of late-stage breast cancer at diagnosis (OR=0.36; 95% CI=0.26-0.51; p<0.001). The risks of late-stage diagnosis were higher for African-American women (OR=1.52; 95% CI=1.22-1.89; p<0.001) and Hispanic women (OR=1.23; 95% CI=0.99-1.53; p=0.06) than for white women. Conclusions: Although mammography capacity in the U.S. may be adequate on average, the unequal distribution of facilities results in large rural areas without facilities. Screening rates in these areas are suboptimal and are associated with late-stage diagnosis of breast cancer.
abstract_id: PUBMED:32156467 Effect of a national population-based breast cancer screening policy on participation in mammography and stage at breast cancer diagnosis in Taiwan. In Taiwan, a Cancer Screening Quality Improvement Program (CAQIP), implemented in 2010, provides financial support to qualified hospitals to improve accessibility. This study examined whether the CAQIP program increased participation in mammography and achieved more early stage diagnosis of breast cancer. We utilised a natural experiment to compare outcomes of interest in women aged 50-69 years with their first mammography date in two different phases, 2005-2009 and 2010-2014. Propensity score matching was used to match comparable cohorts in each phase. In total, 468,259 matched participants in phases 1 and 2 were analyzed. Patient-level logistic regressions were used and adjusted for patient risk factors.
Compared with phase 1, our findings indicated women in phase 2 were more likely to have repeat mammography participation (OR, 1.33; 95% CI, 1.32-1.34) and to be diagnosed with early stage breast cancer (OR, 1.15; 95% CI, 1.05-1.26). Women living in rural areas were less likely to repeatedly participate in mammography (OR, 0.86; 95% CI, 0.85-0.86) and to experience early diagnosis (OR, 0.90; 95% CI, 0.81-0.99). Women at low socioeconomic status were less likely to experience early diagnosis, and those with reproductive and hormonal risk factors were less likely to repeatedly participate in mammography. Our findings provide evidence of the potential benefits of health policy interventions that improve accessibility on participation in mammography and early stage diagnosis of breast cancer.
abstract_id: PUBMED:33792894 Racial differences in no-show rates for screening mammography. Background: Differences in utilization of screening mammography partly explain the increased breast cancer mortality observed in African American (AA) women compared with non-Hispanic White women. However, the contribution of noncompliance from women who do not come for their scheduled screening mammography appointment (ie, no-shows) is unknown. The purpose of this study was to investigate racial differences in no-show rates for screening mammography. Methods: Women scheduled for routine screening mammograms between January 2018 and March 2018 were identified from the Joanne Knight Breast Health Center at Siteman Cancer Center in St. Louis, Missouri. Using a case-control design, this study retrospectively identified patients who no-showed for their mammograms (cases) and randomly sampled an equal number of patients who completed their mammograms (controls). These participants were compared by race. The main outcome measure was whether AA race was associated with no-shows for screening mammography. Results: During the study period, 5060 women were scheduled for screening mammography, and 316 (6.2%) did not keep their appointment (ie, they no-showed). Women who no-showed were more likely to be AA than women who kept their appointment (odds ratio, 2.64; 95% confidence interval, 1.90-3.67). Even after adjustments for marital status, insurance type, and place of residence, AA race was still significantly associated with no-shows for screening mammography. Conclusions: This study identified a no-show rate of 6.2% for screening mammography at the authors' institution. Women who no-showed were more likely to be AA than women who completed their mammogram even after adjustments for multiple factors. These data can be leveraged for future studies aimed at improving mammography attendance rates among AA women.
abstract_id: PUBMED:23535547 Mammography screening. Breast cancer screening using mammography is one of the few screening interventions that have been assessed in several large randomized controlled studies (RCTs). Results from eight RCTs are available and have been summarized in several systematic reviews and meta-analyses, most of which conclude that mammography screening achieves a relative reduction in breast cancer mortality of 20%. In the Swiss context this translates to one prevented breast cancer death over 10 years among 1000 women who start biannual mammography screening at age 50. However, mammography screening leads to the detection of cancers that would never have been found were it not for the screening (overdiagnosis).
Screening thus increases the incidence of breast cancer by about 20%, which in turn leads to more therapies (surgery, radiation and chemotherapy). Over ten years after starting mammography at age 50, four additional women out of 1000 will be diagnosed with breast cancer. Without screening, these women would not have had a diagnosis. Over the same ten years (five screening rounds), about one in five women will have at least one mammography result which requires further diagnostic workup. Furthermore, even when mammography screening is implemented, 20 to 30% of breast cancer patients will not be diagnosed by the screening but between screening rounds. The information and knowledge of the population about the benefits and harms of mammography screening is inadequate in that the benefits are overestimated and the harms underestimated. The female population needs to be better informed on both these aspects of mammography screening. Women have a right to receive adequate evidence-based information in order to make an informed decision whether to attend mammography screening.
abstract_id: PUBMED:23555140 Mammography stage of adoption among Iranian women. Background: The Stage of Change and Health Belief Models are two of the most common models that have been applied to mammography screening behaviors. The purpose of this study was to identify women in different stages of mammography adoption and to examine the association between women's beliefs and stage of change among Iranian women. Methods: In this population-based study, mammography screening behavior was determined using the Rakowski stage of change scale, and women's beliefs were determined by the Farsi version of Champion's Health Belief Model scale (CHBMS). The obtained data were analyzed by SPSS (version 16.0) using analysis of variance and Chi-square tests to determine the relation between socio-demographic variables and HBM scales and stage of mammography. In all of the tests, a significance level of α = 0.05 was used. Results: According to the stages of change distribution for mammography screening behavior, 36.2% were in pre-contemplation, 21.1% in contemplation, 4.2% in action, 6.2% in maintenance, and 32.3% in the relapse stage. There were significant differences in mammography stage by age, education, marital status, family monthly income, husband's job, information about breast cancer, and history of breast problems. ANOVA showed significant differences by stage for all HBM scales except perceived severity. Conclusion: The findings indicated that women in the pre-contemplation, relapse, and contemplation stages are most likely to need educational intervention emphasizing the benefits and barriers of mammography, health motivation, and self-efficacy. Furthermore, examining individuals' beliefs about breast cancer and mammography screening by stage of mammography adoption is especially important for developing effective interventions that are targeted to stage of change and promote mammography screening behavior.
abstract_id: PUBMED:25399230 Association between persistence with mammography screening and stage at diagnosis among elderly women diagnosed with breast cancer. Previous studies on the association between mammography screening and stage at breast cancer (BC) diagnosis have limitations because they did not analyze persistence with mammography screening and did not distinguish screening from diagnostic mammograms.
The objective of this study is to determine the association between persistence with mammography screening and stage at BC diagnosis among elderly women. A retrospective observational study of 39,006 women age ≥70 diagnosed with incident BC from 2005 to 2009 from the Surveillance, Epidemiology, and End Results (SEER)-Medicare dataset was conducted. A validated algorithm with high sensitivity and specificity was used to distinguish between screening and diagnostic mammograms. Persistence with mammography screening was measured as having at least three screening mammograms in the five years before BC diagnosis. Multinomial logistic regressions were performed to analyze the association between persistence with mammography screening and stage at diagnosis in a multivariate framework. Overall, 46% of elderly women were persistent with mammography screening, 26% were not persistent, and 28% did not have any screening mammogram in the five years before BC diagnosis. As compared to women who were not persistent with mammography screening, women who were persistent were significantly more likely to be diagnosed at earlier stages of BC. The adjusted odds ratios were 3.28, 2.37, and 1.60 for in situ, local, and regional stages, respectively. A lower proportion of elderly women was persistent with mammography, and persistence was highly associated with earlier stages of BC diagnosis. Interventions designed to promote persistent mammography screening among elderly women are warranted.
abstract_id: PUBMED:20438282 Breast cancer stage at diagnosis and geographic access to mammography screening (New Hampshire, 1998-2004). Introduction: Early detection of breast cancer by screening mammography aims to increase treatment options and decrease mortality. Recent studies have shown inconsistent results in their investigations of the possible association between travel distance to mammography and stage of breast cancer at diagnosis. Objective: The purpose of the study was to investigate whether geographic access to mammography screening is associated with the stage at breast cancer diagnosis. Methods: Using the state's population-based cancer registry, all female residents of New Hampshire aged ≥40 years who were diagnosed with breast cancer during 1998-2004 were identified. The factors associated with early stage (stages 0 to 2) or later stage (stages 3 and 4) diagnosis of breast cancer were compared, with emphasis on the distance a woman lived from the closest mammography screening facility and residence in rural or urban locations. Results: A total of 5966 New Hampshire women were diagnosed with breast cancer during 1998-2004. Their mean driving distance to the nearest mammography facility was 8.85 km (range 0-44.26; 5.5 miles, range 0-27.5), with a mean estimated travel time of 8.9 min (range 0.0-42.2). The distribution of travel distance (and travel time) was substantially skewed to the right: 56% of patients lived within 8 km (5 miles) of a mammography facility, and 65% had a travel time of less than 10 min. There was no significant association between later stage of breast cancer and travel time to the nearest mammography facility. Using 3 categories of rural/urban residence based on the Rural Urban Commuting Area classification, no significant association between rural residence and stage of diagnosis was found.
New Hampshire women were more likely to be diagnosed with breast cancer at later stages if they lacked private health insurance (p<0.001), were not married (p<0.001), or were older (p<0.001), and there was a borderline association with diagnosis during non-winter months (p=0.074). Conclusions: Most women living in New Hampshire have good geographical access to mammography, and no indication was found that travel time or travel distance to mammography significantly affected stage at breast cancer diagnosis. Health insurance, age and marital status were the major factors associated with later stage breast cancer. The study contributes to an ongoing debate over geographic access to screening mammography in different states, which have given contradictory results. These inconsistencies in the rural health literature highlight a need to understand the complexity of defining rural and urban residence; to characterize more precisely the issues that contribute to good preventive care in different rural communities; and to appreciate the efforts already made in some rural states to provide good geographic access to preventive care. In New Hampshire, specific subgroups such as the uninsured and the elderly remain at greatest risk of being diagnosed with later stage breast cancer and may benefit from targeted interventions to improve early detection.
abstract_id: PUBMED:16132791 Inadequate follow-up of abnormal screening mammograms: findings from the race differences in screening mammography process study (United States). Objective: Despite relatively high mammography screening rates, there are reports of inadequate follow-up of abnormal results. Our objective was to identify factors associated with inadequate follow-up and, specifically, to determine if this outcome differed by race/ethnicity. Methods: We studied 176 subjects with abnormal or inconclusive mammograms identified from a prospective cohort study of African-American (n = 635) and White (n = 816) women who underwent screening in five hospital-based facilities in Connecticut, October 1996 through January 1998. Using multivariate logistic regression, we identified independent predictors of inadequate follow-up of an abnormal mammogram. Results: Over 28% of women requiring immediate or short-term follow-up did not receive this care within three months of the recommended return date. African-American race/ethnicity, pain during the mammogram, and lack of a usual provider were significant independent predictors of inadequate follow-up. Although many factors were examined, the observed race difference was unexplained. Conclusions: While inadequate follow-up of abnormal exams undermines the potential benefits of mammography screening for all women, the observed race difference in this study may have implications for the persistent race difference in breast cancer stage at diagnosis and survival. More research is needed to identify factors that contribute to poor follow-up among African-American women.
Answer: Mammography screening does not fully explain the race difference in the stage at diagnosis of breast cancer. Studies have shown that African American women are less likely than white women to be diagnosed at a localized stage, and this disparity is not entirely accounted for by differences in mammography screening history. For instance, a study found that history of mammography screening accounted for less than 10% of the observed black/white difference in stage at diagnosis of breast cancer (PUBMED:7697601).
Additionally, comorbidity burden and comorbidity-related care have been associated with regular mammography and earlier stage at diagnosis, but racial/ethnic disparities in late-stage disease persisted even among women who received both regular mammograms and comorbidity-related care (PUBMED:21246529). Furthermore, mammography capacity, particularly in nonmetropolitan areas, has been linked to higher screening rates and lower odds of late-stage breast cancer diagnosis, indicating that access to facilities can impact diagnosis stages. African-American and Hispanic women were found to have higher risks of late-stage diagnosis compared to white women (PUBMED:19524392). In Taiwan, a national population-based breast cancer screening policy increased participation in mammography and led to more early-stage diagnoses, although women living in rural areas and those with low socioeconomic status were less likely to experience early diagnosis (PUBMED:32156467). Racial differences in no-show rates for screening mammography have also been identified, with African American women more likely to miss their scheduled mammogram appointments, which could contribute to later stage diagnoses (PUBMED:33792894). Overall, while mammography screening is protective in all racial/ethnic groups, it does not fully explain the racial disparities observed in breast cancer stage at diagnosis. Factors such as comorbidities, access to mammography facilities, socioeconomic status, and compliance with screening appointments all interact to influence the stage at which breast cancer is diagnosed across different racial and ethnic groups.
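A recurring analytic move in these abstracts is estimating the race-stage association before and after adjusting for screening (PUBMED:7697601, PUBMED:21246529, PUBMED:19524392). The toy sketch below illustrates that kind of adjustment with simulated data; the data, effect sizes, and variable names are invented for illustration and carry no clinical meaning.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)  # hypothetical group with lower screening uptake
screened = rng.random(n) < np.where(group == 1, 0.45, 0.65)
# simulated late-stage risk depends on both group membership and screening
logit = -1.0 + 0.5 * group - 0.8 * screened
late_stage = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = sm.add_constant(np.column_stack([group, screened.astype(int)]))
fit = sm.Logit(late_stage.astype(int), X).fit(disp=0)
print(np.exp(fit.params))  # intercept odds plus adjusted ORs for group and screening

If the group coefficient stays well above 1 after conditioning on screening, as the Connecticut study observed for race, then differences in screening history alone cannot account for the stage gap.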
Instruction: Does the relative density of periarticular bone influence the failure pattern of intra-articular fractures?
Abstracts:
abstract_id: PUBMED:27282689 Does the relative density of periarticular bone influence the failure pattern of intra-articular fractures? Introduction: The architecture of joints almost certainly influences the nature of intra-articular fractures, and the concavity is much more likely to fail than the associated convexity. However, local differences in periarticular bone density potentially also play a critical role. The purpose of this study was to investigate if there was any difference in periarticular bone density in intra-articular fractures between the two opposing joint surfaces, comparing the convexity to the concavity. Materials And Methods: We retrospectively identified a series of 1003 intra-articular fractures of the hip, knee, and ankle; 129 of these patients had previously undergone CT scanning during their routine clinical assessment. Periarticular bone density was assessed using Hounsfield units (HU) as a measure of the composite density of the adjacent bone. Bone density was compared between the opposite sides of each joint to determine if a relationship exists between local bone density and the risk of articular surface fracture. Results: There was a statistically significant difference in density between the two opposing surfaces, with the convexity 19% more dense than the concavity (p=0.0001). The knee exhibited the largest difference (55%), followed by the hip (18%); in the ankle, an inverse relationship was observed, and the concave surface was paradoxically denser (5%). There was no significant difference between those cases where the concavity failed in isolation compared to those where the convexity also failed (p=0.28). Conclusion: When the results were pooled for all three joints, there was a statistically significantly higher local bone density demonstrated on the convex side of an intra-articular fracture. However, while this relationship was clearly exhibited in the knee, it was less evident in the other two joints; in the ankle the reverse was true, and the local bone adjacent to the concavity was found to have greater density. This suggests local bone density plays only a minor role in determining the nature of intra-articular fractures.
abstract_id: PUBMED:27671082 Stabilisation of periarticular fractures and osteotomies with a notched head locking T-plate. Objective: To report the clinical outcomes and complications of small animals that had articular or periarticular fractures or osteotomies stabilised with a notched head locking T-plate. Methods: Medical records were searched retrospectively to identify animals that had a notched head locking T-plate used to stabilise a small articular or periarticular bone fragment. Results: Nine dogs and two cats had an articular or periarticular bone fragment stabilised with a 2.0- or 2.4-mm notched head locking T-plate (12 procedures). The median body weight was 4.7 kg. The plate was modified by removing holes in 10/12 procedures, and a combination of locking and non-locking screws was used in 7/12 procedures. All fractures or osteotomies progressed to clinical union.
There were two intraoperative complications (intra-articular screw placement and an overlong screw) and two postoperative complications (skin necrosis and stress protection). Conclusions: This study reports the successful use of a 2.0- or 2.4-mm notched head locking T-plate for articular or periarticular fractures or osteotomies in a variety of small-breed dogs and cats. Care must be taken to prevent inadvertent penetration of the articular surface, particularly in regions such as the proximal tibia. The ability to modify the plate dimensions intraoperatively proved beneficial in most cases.
abstract_id: PUBMED:32474717 Biomechanical model of distal articular humeral fractures: influence of bone density on the fracture threshold. Introduction: Working hypothesis: distal humeral bone density influences the supracondylar fracture threshold. The aim of this study was first to develop a reproducible model of intra-articular distal humeral fractures and second to establish a relationship between bone mineral density (BMD) and the fracture threshold of the humerus. Materials And Methods: An original model of the fracture was developed using ten sawbones. After obtaining a reproducible and clinically relevant fracture model, we tested 21 cadaveric distal humeri, for which the BMD was known, using a stainless-steel custom-made proximal ulna jig. Fractures were created using a servo-hydraulic testing machine in axial compression to simulate a fall onto an outstretched hand. Fracture lines, load to failure, and rigidity of the bone were recorded based on the stress-strain curves. Results: The fracture generation was reliable, reproducible, and clinically relevant (type B2). A significant correlation between the BMD and the fracture threshold was found. The mean threshold was 901.86 N/m². The mean distal humerus BMD was 0.9097 g/cm² (r = 0.7321). Conclusions: We developed a reproducible articular fracture model of the distal humerus and found a correlation between the fracture threshold and bone mineral density.
abstract_id: PUBMED:31234746 Management of severe stifle trauma: 2. Periarticular fractures. Practical Relevance: Periarticular stifle fractures are uncommon but challenging injuries to treat. There may be concurrent damage to the ligaments and stabilising structures of the stifle joint itself, which should also be evaluated (see Part 1 of this article series). Improved knowledge of the fixation options and biomechanical challenges of these injuries will aid clinical decision-making and effective repair. Clinical Challenges: Due to their nature, periarticular fractures may have small juxta-articular fragments, which require additional thought to address during stabilisation. In juvenile animals, these fractures typically involve the physis and often can be managed with cross pin fixation. In adult cats, locking fixation, specialised plates or external skeletal fixators can be utilised to address the juxta-articular fragments. Readers should also be aware of patellar fracture and dental anomaly syndrome (PADS), where cats develop insufficiency fractures affecting the patella and proximal tibia. A careful oral examination should be performed in affected patients, as these cats may have persistent deciduous teeth. Close attention should be paid to preoperative radiographs for evidence of chronicity of the injury, including sclerosis of the patella, remodelling and blunting of the fracture lines.
Aims: The aims of the article are to review the current literature surrounding periarticular fractures of the stifle joint and to summarise the diagnosis, treatment, outcome and complications of each fracture type. As well as discussing general fracture fixation and biomechanical principles applicable to both dogs and cats, information is provided on specific issues facing cats, such as PADS. Evidence Base: Published data are limited, with reports including only a small number of cats. The information and recommendations in this article have therefore been drawn from a combination of the available literature and the authors' clinical experience.
abstract_id: PUBMED:26772943 Arthrofibrosis After Periarticular Fracture Fixation. Arthrofibrosis after periarticular fractures can create clinically significant impairments in both the upper and lower extremities. The shoulder, elbow, and knee are particularly susceptible to the condition. Many risk factors for the development of arthrofibrosis cannot be controlled by the patient or surgeon. Early postoperative motion should be promoted whenever possible. Manipulations under anesthesia are effective for a period of time in certain fracture patterns, and open or arthroscopic surgical debridements should be reserved for the patient for whom nonoperative modalities fail and who has a clinically significant deficit.
abstract_id: PUBMED:32409183 Bone density of the calcaneus correlates with radiologic and clinical outcomes after calcaneal fracture fixation. Background: This study aimed to determine whether bone density correlates with radiologic and clinical outcomes after screw fixation of displaced intra-articular calcaneal fractures (DIACF) with the sinus tarsi approach. Materials And Methods: We retrospectively evaluated 43 consecutive cases of unilateral DIACF between March 2015 and December 2017. Radiologic evaluations were performed using preoperative, postoperative, and last follow-up calcaneal lateral and axial radiographs. In all patients, preoperative CT scanning of both the injured and uninjured calcaneus was performed in a single session, and dual x-ray absorptiometry (DXA) scans were obtained. Hounsfield unit (HU) measurement values were determined by placing an elliptical region of interest confined to the cancellous region of the uninjured calcaneus. Clinical outcomes were assessed at a minimum of 12 months postoperatively using Foot and Ankle Outcome Scores (FAOS). Results: Mean HU values of the uninjured calcaneus significantly correlated with bone mineral density scores obtained from DXA scans of the lumbar spine and femur (Spearman ρ = 0.656-0.748; p < 0.001 for both). Decreased HU values of the uninjured calcaneus significantly correlated with decreased Böhler's angle and widening of calcaneal width from postoperative to last follow-up (Pearson r = 0.348, p = 0.022; Pearson r = -0.582, p < 0.001, respectively). Increased HU values of the uninjured calcaneus significantly correlated with improved clinical outcomes in three of five FAOS domains, including activities of daily living, sports, and quality of life (beta = 0.283-0.322; p < 0.05 for all). Conclusion: Decreased preoperative bone density significantly correlated with decreased Böhler's angle, widening of calcaneal width, and inferior short-term clinical outcomes after screw fixation of DIACF.
By quantifying bone density using HU values in areas where DXA cannot be performed, such as the foot bones, it may be possible to determine whether different fixation methods or systemic treatments can be tailored to bone density, which could help in optimizing clinical outcomes. Level Of Evidence: Level III, Retrospective case series. abstract_id: PUBMED:22999846 Regional variations in radial head bone volume and density: implications for fracture patterns and fixation. Background: Fractures of the radial head are common, with most partial articular fractures resulting in an anterolateral fragment. The exact mechanism of radial head fracture is unknown; however, forces transmitted and variations in local bone density are believed important. This study quantifies the regional variations in bone density and volume of the radial head to better understand the pathomechanics of fracture patterns. Methods: Computed tomography scan data of 18 cadaver elbows were imported into imaging analysis software. The radial head was divided into quadrants based on neutral forearm rotation. Bone density and volume were calculated and compared between quadrants. Results: The regional densities of bone expressed in Hounsfield units (HU) were posteromedial quadrant (PM) 496 ± 87 HU, anteromedial quadrant (AM) 443 ± 72 HU, anterolateral quadrant (AL) 409 ± 60 HU, and posterolateral quadrant (PL) 406 ± 57 HU. The volume of bone in descending order was PM 1138 ± 179 mm3, PL 1013 ± 213 mm3, AM 1010 ± 210 mm3, and AL 938 ± 175 mm3. The PM quadrant was significantly denser than the AM, AL, and PL quadrants (P = .001), and the AM quadrant was significantly denser than the AL and PL quadrants (P = .006 and .009). The PM quadrant had significantly more bone volume when compared to the AM, AL, and PL (P = .001). The AM and PL quadrants had significantly greater bone volume compared to the AL quadrant (P = .023 and .018, respectively). Conclusion: Radial head bone volume and density are highest in the posteromedial quadrant and lowest in the anterolateral quadrant, where fractures occur more frequently. abstract_id: PUBMED:22460347 Use of bioabsorbable pins in surgical fixation of comminuted periarticular fractures. Objectives: To determine if bioabsorbable pins can be used for stabilization of comminuted articular fragments in periarticular fractures with adequate quality of fixation, while eliminating the potential complications related to use of traditional implants. Design: Multicenter retrospective review. Setting: Two Level 1 trauma centers. Patients/participants: Institutional billing records identified all cases (83) in which bioabsorbable pins were implanted. All charts were reviewed, and all periarticular fracture cases (80 fractures in 78 patients) were included. Intervention: Open reduction and internal fixation of highly comminuted periarticular fractures, using bioabsorbable poly-p-dioxanone and poly-l-lactic acid pins to stabilize the fragments of the articular surface. Main Outcome Measurements: Outcomes were determined by maintenance of articular reduction assessed at 6 weeks and 3 months; development of posttraumatic arthritis assessed radiographically and by clinical examination at 1 year postoperatively; and rates of local complications including infection, pin migration, and pin-related soft tissue complications evaluated by complete medical record review. Results: No patients showed loss of articular reduction at 6 weeks or 3 months.
There were no pin-related local complications or pin migration and no instances of delayed union or nonunion. Radiographs showed 19% arthritic changes at 12 months, with 16% loss to follow-up. The infection rate was 6%. Conclusions: In highly comminuted periarticular fractures, bioabsorbable pins are an intriguing alternative to traditional fixation methods. They afford similar effectiveness in maintaining stability without evidence of pin migration or other concerns of buried metallic implants. abstract_id: PUBMED:24720293 Geriatric trauma: the role of immediate arthroplasty. Periarticular fractures in elderly patients are challenging to manage because the fractures are typically comminuted and the bone is osteopenic, which often results in failure of internal fixation. Patients who sustain these fractures demand immediate mobilization or they often do not recover their preinjury level of function. In geriatric patients, immediate arthroplasty provides an alternative to internal fixation for many periarticular fractures of the shoulder, elbow, and hip. abstract_id: PUBMED:18349779 Mechanical comparison in cadaver specimens of three different 90-degree double-plate osteosyntheses for simulated C2-type distal humerus fractures with varying bone densities. Objectives: To investigate the bone-implant anchorage of 90-degree double-plate osteosynthesis in simulated complete intra-articular distal humerus fractures using conventional reconstruction plates (CRP), locking compression plates (LCP), and distal humerus plates (DHP), depending on the bone mineral density (BMD) of the cadaver specimens. Methods: Groups (CRP, LCP, DHP, n=8; LCP, DHP, n=13) in distal humerus cadaver bones were created based on BMD. The fracture model was an unstable intra-articular distal humerus fracture with a transverse osteotomy gap representing metaphyseal comminution (AO type 13-C2.3). Flexion and extension stiffness as well as cycles until failure due to screw pullout under cyclic loading were evaluated. Estimates of BMD values, below which failure was likely to occur, were determined. Results: Stiffness values were not significantly different between groups (extension: P=0.881, flexion: P=0.547). Under cyclic loading, consistent screw pullout failure occurred at BMD values below about 400 mg/cm for CRP and below about 300 mg/cm for LCP constructs. Comparing BMD-matched groups of 8 and 13 specimens respectively, the failure rate was significantly lower for the DHP (0/8) than for the CRP (5/8; P=0.026) and tended to be lower for the DHP (0/13) as compared to the LCP (4/13; P=0.096). Conclusion: Bone-implant anchorage differed between locking and nonlocking plate constructs and depended on BMD. While in good bone quality implant choice was not critical, both locking plates provided superior resistance against screw loosening as compared to the CRP at low BMD values (<420 mg/cm). Based on our laboratory results, we conclude that locking plates such as the LCP and DHP are constructs designed to maintain anatomical reduction in the presence of comminution and poor bone quality in a low intra-articular fracture of the distal humerus. Answer: The relative density of periarticular bone does appear to influence the failure pattern of intra-articular fractures, although the extent of this influence may vary depending on the specific joint and bone involved.
A study investigating intra-articular fractures of the hip, knee, and ankle found a statistically significant difference in bone density between the convex and concave surfaces of the joints, with the convexity being more dense than the concavity. This was particularly evident in the knee, where the convexity was 55% more dense than the concavity. However, in the ankle, the concave surface was paradoxically denser. The study concluded that while local bone density plays a role in determining the nature of intra-articular fractures, it may only be a minor factor (PUBMED:27282689). In the context of distal articular humeral fractures, a biomechanical model demonstrated a significant correlation between bone mineral density (BMD) and the fracture threshold, suggesting that lower BMD could lead to a higher risk of fracture under stress (PUBMED:32474717). Similarly, a study on calcaneal fractures found that decreased preoperative bone density correlated with worse radiologic and clinical outcomes after screw fixation, indicating that bone density can affect both the immediate and long-term results of fracture treatment (PUBMED:32409183). For radial head fractures, regional variations in bone density and volume were found to be significant, with the posteromedial quadrant being denser and having more bone volume compared to the anterolateral quadrant, where fractures occur more frequently (PUBMED:22999846). This suggests that local variations in bone density may predispose certain areas to fracture. In the case of geriatric trauma, where bone is often osteopenic, immediate arthroplasty has been suggested as an alternative to internal fixation for periarticular fractures, as elderly patients require immediate mobilization to recover their preinjury level of function (PUBMED:24720293). This indicates that in cases of low bone density, alternative treatment strategies may be more appropriate to prevent failure of fixation. Overall, the evidence suggests that the relative density of periarticular bone does influence the failure pattern of intra-articular fractures, with lower bone density being associated with a higher risk of fracture and potentially worse outcomes after fixation.
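The kind of analysis behind the r = 0.7321 correlation cited above (PUBMED:32474717) can be sketched in a few lines. The following Python snippet computes a sample Pearson correlation between BMD and load to failure; the paired values below are hypothetical placeholders for illustration, not data from any cited study.

```python
# Minimal sketch: Pearson correlation between bone mineral density (BMD)
# and load to failure, as in the distal humerus model (PUBMED:32474717).
# The paired values are hypothetical, for illustration only.
import math

bmd = [0.71, 0.82, 0.88, 0.95, 1.01, 1.08]                   # g/cm^2 (hypothetical)
failure_load = [640.0, 780.0, 850.0, 930.0, 990.0, 1080.0]   # N (hypothetical)

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A high r indicates that BMD tracks the fracture threshold.
print(f"r = {pearson_r(bmd, failure_load):.4f}")
```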
Instruction: Formal pathology rotation during obstetrics and gynecology residency: is it beneficial? Abstracts: abstract_id: PUBMED:9521360 Formal pathology rotation during obstetrics and gynecology residency: is it beneficial? Background: Our objective was to assess the educational benefits of a formal pathology rotation during an obstetrics and gynecology residency program and to determine the utility of this information in clinical practice. Methods: In this descriptive study, the benefits of a 2-month rotation in pathology for obstetrics and gynecology residents were analyzed. A computerized listing of surgical cases processed by each resident was sent to the obstetrics and gynecology program director. Results: Our resident accessioned 5.4% of the total pathology cases processed each month. Reports from previous residents (over a 17-year period) and from program directors at the annual educational retreat indicate that such information was not relevant to our graduates in their clinical practice. Conclusions: A formal pathology rotation for obstetric residents can improve the knowledge base, but the usefulness of this knowledge in clinical practice is dubious. abstract_id: PUBMED:8238186 Effects of rotation discipline on medical student grades in obstetrics and gynecology throughout the academic year. Objectives: Our purpose was to determine whether the sequence of rotation disciplines taken can affect medical student examination scores on the National Board of Medical Examiners Subject Exam score for obstetrics and gynecology. Study Design: A retrospective study was conducted of 439 student files for the academic years 1987 through 1991. The final clerkship grades and subject examination scores for internal medicine, pediatrics, psychiatry, surgery, and obstetrics and gynecology were reviewed. Results: It was determined that completion of internal medicine or surgery before obstetrics and gynecology yielded higher scores in obstetrics and gynecology (p < 0.00001), with the largest improvement (from 533 to 586) observed when obstetrics and gynecology followed the internal medicine rotation. Conclusion: The sequence of rotation can affect scores on the National Board of Medical Examiners subject examination in obstetrics and gynecology. The largest improvement in student scores was observed when obstetrics and gynecology followed the internal medicine or surgery clerkships. abstract_id: PUBMED:37034057 Perceived Impact of a Core Rotation on Hawai'i Island on Future Career Intentions among Obstetrics and Gynecology Residents. Hawai'i Island faces a growing shortage of obstetric and gynecology providers. Increased exposure of obstetrics and gynecology residents to rural rotations during their core training may increase their interest in rural practice after graduation. In an effort to address the health care needs of women on Hawai'i Island, the University of Hawai'i Obstetrics and Gynecology Residency Program established a 4-week required gynecology rotation in the town of Hilo. Between July 2019 and June 2020, third- and fourth-year obstetrics and gynecology residents provided outpatient gynecologic care and participated in gynecologic surgeries at Hilo-based medical facilities. A total of 9 residents participated in this Hilo-based rotation. This retrospective study extracted data from post-rotation evaluations. Eight out of 9 participating residents (89%) felt that the rotation slightly or far exceeded their expectations.
After the rotation, 7 residents (78%) reported an interest in practicing in a rural community, while only 3 residents (33%) reported having this interest prior to the rotation. Underserved rural areas seeking to recruit and retain obstetrics and gynecology physicians may benefit from partnering with residency training programs. abstract_id: PUBMED:15932826 Obstetrics and gynecology resident satisfaction with an integrated, comprehensive abortion rotation. Objective: To evaluate obstetrics and gynecology resident satisfaction with a comprehensive, integrated abortion rotation. Methods: The University of California, San Francisco obstetrics and gynecology residency program includes a 6-week PGY-3 family planning rotation at an in-hospital clinic where abortions are provided up to 23 weeks of gestation. Residents annually evaluate the educational value of all clinical rotations on a 5-point Likert scale, with 5 indicating "maximum value," and 1 "no value." Using data from 1998-2003, we compared ratings of the family planning rotation with all other PGY-3 rotations. We also surveyed residents 1 to 3 years after graduation to assess the rotation qualitatively and quantitatively. Results: Forty residents completed the abortion training, none opted out of training, and all completed the evaluations. Of all rotations in the third year, the family planning rotation was the highest rated (4.70), was similar in value to a high-volume surgical rotation (4.51, P > .10) and the elective rotation (4.45, P > .05), and surpassed the average score for all inpatient rotations (4.00, P < .001), continuity clinic (4.10, P < .001), and outpatient clinical experiences (4.06, P < .01). According to residency graduates, the family planning rotation was rated 4.8 (where 5 indicates "far greater value" than other rotations), and 85% of respondents rated it as having "maximum learning value". Conclusion: Obstetrics and gynecology residents place high value on the University of California, San Francisco PGY-3 family planning rotation during their training and in their first years of practice. abstract_id: PUBMED:2296408 Obstetrics and gynecology in the USSR. Knowledge of Soviet obstetrics and gynecology has been mostly inaccessible to Western physicians. We share our experience regarding medical education and practice in the USSR. Medical education lasts 6 years and is followed by 2 years of postdoctorate training. Residency is limited to clinical obstetrics and gynecology. No formal subspecialty training exists. Research projects are funded and guided by the Health Ministry and Academy of Medical Science. Most articles are published in Russian and are therefore unknown to physicians in the West. Modern medical technology is not available in the practice of obstetrics and gynecology. Abortion remains the leading method of contraception. Perinatal and infant mortality in the USSR is higher than that in comparable Western countries. abstract_id: PUBMED:8333438 The fourth-year medical school curriculum: recommendations of the Association of Professors of Gynecology and Obstetrics and the Council on Resident Education in Obstetrics and Gynecology. Objectives: The Association of Professors of Gynecology and Obstetrics and the Council on Resident Education in Obstetrics and Gynecology have proposed a fourth-year medical school curriculum for a student interested in pursuing a residency in obstetrics and gynecology.
Study Design: Faculty members and residents in North Carolina, Illinois, and Michigan were surveyed as to the ideal curriculum that they would recommend for fourth-year students. The committee members representing the Council on Resident Education in Obstetrics and Gynecology and the Association of Professors of Gynecology and Obstetrics then reviewed these surveys and proposed a final curriculum. Results: A core curriculum of general medicine as an acting internship, an intensive care unit rotation, neonatology, and emergency medicine was recommended. Additional courses strongly considered were ambulatory obstetrics-gynecology, acting internship in obstetrics-gynecology, endocrinology, and general surgery. Conclusion: The committee recommends a curriculum that is broad and balanced in general medical education. abstract_id: PUBMED:7210670 Obstetrics and gynecology: considerations in career selection. Current training programs in obstetrics and gynecology are not producing an excess of specialists in view of future manpower needs. In addition to being specialists and consultants, obstetrician-gynecologists also function as providers of primary care for women. During the last decade, three formal sub-specialties of obstetrics and gynecology have evolved: gynecologic oncology, maternal-fetal medicine and reproductive endocrinology. These have improved patient care and have altered the structure of resident education. With more American medical school graduates entering this specialty, the quality of resident applicants has improved, creating intense competition for desirable training positions. Those inclined toward a career in obstetrics and gynecology can be assured that it will provide an increasingly favorable and challenging environment for professional activity in the future. abstract_id: PUBMED:10432150 A 5-year residency curriculum in obstetrics, gynecology, and family medicine. Objective: To develop a curriculum for residents in obstetrics and gynecology that also provides training in family medicine. Methods: We designed a 5-year curriculum with 36 months of obstetrics and gynecology, 12 of which are as chief resident, with a 4-month rotation through family medicine to meet the primary care requirements, and rotations of 1 month each in geriatrics and emergency medicine. The curriculum includes the 30 months of required rotations mandated by family medicine (three of which are in obstetrics and gynecology), with the 6 months' available elective time allocated to obstetrics and gynecology. Results: The Residency Review Committee for Obstetrics and Gynecology accredited the curriculum, which meets the Accreditation Council for Graduate Medical Education Special Requirements for Family Medicine, in August 1996. Conclusion: This 5-year residency curriculum educates physicians in both obstetrics and gynecology, and family medicine, and graduates are eligible to pursue board certification in both specialties. abstract_id: PUBMED:23566793 Residency training in pediatric and adolescent gynecology across obstetrics and gynecology residency programs: a cross-sectional study. Study Objective: To estimate the prevalence of formal Pediatric and Adolescent Gynecology training in United States Obstetrics and Gynecology residency programs. Design: Prospective, anonymous, cross-sectional study. Participants: United States program directors of Obstetrics and Gynecology residency programs, N = 242; respondents: 104 (43%). Results: 104 residency programs responded to our survey.
Among the 104 residency programs, 63% (n = 65) have no formal, dedicated Pediatric and Adolescent Gynecology clinic, while 83% (n = 87) have no outpatient Pediatric and Adolescent Gynecology rotation. There is no significant difference in the amount of time spent on a Pediatric and Adolescent Gynecology rotation among residents from institutions with a Pediatric and Adolescent Gynecology fellowship (P = .359); however, the number of surgeries performed is significantly higher than in those without a Pediatric and Adolescent Gynecology fellowship (P = .0020). When investigating resident competency in Pediatric and Adolescent Gynecology, program directors reported that residents who were taught in a program with a fellowship-trained Pediatric and Adolescent Gynecology faculty were significantly more likely to be able to interpret results of selected tests used to evaluate precocious puberty than those without (P = .03). Conclusions: Residency programs without fellowship-trained Pediatric and Adolescent Gynecology faculty or an established Pediatric and Adolescent Gynecology fellowship program may lack formal training and clinical exposure to Pediatric and Adolescent Gynecology. This information enables residency directors to identify deficiencies in their own residency programs and to seek improvement in resident clinical experience in Pediatric and Adolescent Gynecology training. abstract_id: PUBMED:17485801 Awareness of infectious diseases in obstetrics and gynecology among residents and residency directors. Awareness of the subspecialty of infectious diseases in obstetrics and gynecology is low among United States residents and residency directors. Objective. Given the burden of infectious diseases on women's health, we sought to assess current awareness, interest, and perceived value of the subspecialty of infectious diseases in obstetrics and gynecology among current United States obstetrics and gynecology residents and residency directors. Methods. Two separate surveys addressing awareness, perceived value and interest in the subspecialty were sent to (1) a random 20% sample of obstetrics and gynecology residents and (2) all obstetrics and gynecology residency directors. Results. Seventy percent of the residency directors were familiar with the subspecialty and 67.0% placed value on infectious disease specialists in an academic department. Thirty percent of the residents reported awareness of the subspecialty. Thirty-six percent of residency directors reported that medical infectious disease specialists deliver formal education to the obstetrics and gynecology residents. Conclusion. United States obstetrics and gynecology residents and residency directors have a low awareness of the subspecialty. An open niche exists for formal education of residents in infectious diseases in obstetrics and gynecology by department specialists. These findings can be incorporated into ongoing recruitment efforts for the subspecialty of infectious diseases in obstetrics and gynecology. Answer: The educational benefits of a formal pathology rotation during an obstetrics and gynecology residency program have been assessed, and the results suggest that while such a rotation can improve the knowledge base of obstetric residents, the utility of this knowledge in clinical practice is questionable. A descriptive study analyzed the benefits of a 2-month rotation in pathology for obstetrics and gynecology residents and found that residents accessioned 5.4% of the total pathology cases processed each month.
However, feedback from previous residents over a 17-year period and from program directors at an annual educational retreat indicated that the information gained from the pathology rotation was not relevant to the graduates in their clinical practice (PUBMED:9521360). Therefore, while the rotation may enhance the residents' understanding of pathology, its direct relevance and application to their clinical work in obstetrics and gynecology appear to be limited.
Instruction: Does the degree of smoking affect the severity of tardive dyskinesia? Abstracts: abstract_id: PUBMED:18774276 Does the degree of smoking effect the severity of tardive dyskinesia? A longitudinal clinical trial. Background: Tardive dyskinesia (TD) is a movement disorder observed after chronic neuroleptic treatment. Smoking is presumed to increase the prevalence of TD. The question of a cause-effect relationship between smoking and TD, however, remains to be answered. The purpose of this study was to examine the correlation between the degree of smoking and the severity of TD with respect to differences caused by medication. Method: We examined 60 patients suffering from schizophrenia and TD. We compared a clozapine-treated group with a group treated with typical neuroleptics. Movement disorders were assessed using the Abnormal Involuntary Movement Scale and digital image processing, a technical method providing rater-independent information on perioral movements. Results: We found a strong correlation (.80 < r < .90, always p < .0001) between the degree of smoking and severity of TD. Repeated measurements revealed a positive correlation between changes in cigarette consumption and changes of the severity of TD (p < .0001). Analyses of covariance indicated a significant group effect with a lower severity of TD in the clozapine group compared to the typical-neuroleptics group (p=.010). Interaction analyses indicated a higher impact of smoking on the severity of TD in the typical-neuroleptics group compared to the clozapine group (p=.033). Conclusion: Concerning a possible cause-effect relationship between smoking and TD, smoking is more of a general health hazard than neuroleptic exposure in terms of TD. abstract_id: PUBMED:15706049 Parkinson's disease protects against smoking? Our aim was to estimate the pooled risk of current and former smoking for Parkinson's disease (PD). We have reviewed all observational studies that evaluated the association between PD risk and smoking habit. Twenty-six studies were identified: 21 case-control, 4 cohort and 1 cross-sectional. The cross-sectional study did not compare former with never smokers. These studies were carried out between 1968 and 2000. There was an obvious protective effect of current smoking in the pooled estimate [risk estimate 0.37 (95% confidence interval 0.33 to 0.41)]. Former versus never smokers had a pooled risk estimate of 0.84 (95% confidence interval 0.76 to 0.92). Current and former smoking do not, therefore, exert the same protective effect against PD, so it is unnecessary to postulate a biological mechanism through which smoking protects against PD. The results show that the reverse direction of causation is a more probable explanation, i.e. movement disorders of PD protect against smoking. Another explanation is that failure to develop strong smoking habits in early adult life might be a prodromal symptom of the disease and could perhaps be its first clinical manifestation. abstract_id: PUBMED:21723906 Smoking and tardive dyskinesia in male patients with chronic schizophrenia. Interactions between smoking and movement disorders include the contrasting associations of more cigarette smoking with reductions in Parkinson's disease and increases in tardive dyskinesia (TD) symptoms. Here we examine the relationship between smoking and TD in a large sample of inpatients with schizophrenia.
We used cross-sectional naturalistic methods to analyze the prevalence and severity of neuroleptic-induced TD in relation to cigarette smoking among 764 male chronic and medicated inpatients meeting DSM-IV criteria for schizophrenia. We administered a detailed questionnaire including general information, medical and psychological conditions, and smoking behaviors. We evaluated TD severity using the abnormal involuntary movement scale (AIMS) and psychopathology using the Positive and Negative Syndrome Scale (PANSS). The main statistical analyses used cross-tabulations for the prevalence of TD by smoking and multivariate regression analyses for continuous measures (AIMS and PANSS). We found that the prevalence of TD did not significantly differ between smokers (41% = 237/578) and non-smokers (37% = 69/186). Secondary outcomes showed a significant association between the AIMS total score and age, duration of illness and hospitalization times. Thus, smoking was not associated with TD in male Chinese schizophrenics, but consistent with previous reports, older patients with a longer duration of illness and more hospitalizations showed greater severity of TD. abstract_id: PUBMED:23731926 Smoking and alcohol consumption patterns among elderly Canadians with mobility disabilities. Background: Mobility disability is a major adverse health outcome associated with aging and an impediment to older adults' well-being and behaviors in social and leisure activities. It has been shown that lifestyle factors, including smoking and alcohol consumption, have been used as coping strategies to deal with the negative impact of disability. The aim of this study was to determine the prevalence of smoking and alcohol consumption among older Canadians with different levels of mobility disabilities and to examine factors associated with these two lifestyle patterns among those with disabilities. Methods: Secondary data analysis was performed using individuals (n = 6,038) aged 65 years and older from both the 2001 Participation and Activity Limitation Survey and the 2003 Canadian Community Health Survey. Multivariate logistic regressions examined the relationship between disability severity and smoking as well as alcohol consumption while controlling for potential confounding socioeconomic factors. Results: The proportion of current smokers among seniors with less-severe and more-severe mobility disabilities and those in the general population was comparable with 12.55%, 11.57% and 11.93%, respectively. Forty-eight percent of seniors in the general population consumed alcohol regularly, compared to only 12.85% with more-severe mobility disabilities. No significant association was shown between the severity level of mobility disabilities and smoking (odds ratio = 0.90, 95% confidence interval: 0.75, 1.08). However, seniors having more-severe disability were less likely to consume alcohol regularly (odds ratio = 0.76, 95% confidence interval: 0.65, 0.89). Other variables including age, gender, income, living status, and social participation also impacted these lifestyle patterns among the study population. Conclusions: Smoking and alcohol patterns present different associations with the severity level of mobility disabilities. Compared with the general population, elderly Canadians with mobility disabilities had similar smoking prevalence but differed significantly in terms of alcohol consumption.
Results from this research will be relevant to decision makers involved in program planning, health education, and policy development as it pertains to the prevention and management of age-related disability. abstract_id: PUBMED:12790158 Smoking and tardive dyskinesia: lack of involvement of the CYP1A2 gene. Objective: To establish if there is an association between cigarette smoking and tardive dyskinesia (TD) in patients with schizophrenia and to evaluate the role of the CYP1A2 polymorphism in TD in patients of Chinese descent. Method: Two hundred and ninety-one patients diagnosed with schizophrenia according to DSM-IV criteria were included in the study. Dyskinesia was assessed by the Abnormal Involuntary Movement Scale and TD by the criteria of Schooler and Kane. Demographic and clinical data and information on smoking habits were collected, and patients of Chinese descent with a well-established smoking history were subsequently genotyped for CYP1A2. Results: Forty-three (41.3%) of the 104 patients with a history of smoking and 52 (27.8%) of the 187 non-smokers were diagnosed with TD. The prevalence of TD was significantly higher among smokers than non-smokers (chi2 = 5.57, p = 0.018). Logistic regression using TD as the dependent variable revealed smokers to be at a significantly higher risk for TD (p < 0.005). Genotyping of smokers of Chinese descent for CYP1A2 polymorphism revealed no significant differences in the genotypic or allelic distribution between those with and without TD. Conclusions: Consistent with other studies, the prevalence of TD was significantly higher among smokers than non-smokers; however, we did not find an association between the C --> A genetic polymorphism of CYP1A2 and TD. abstract_id: PUBMED:16150537 The effect of depression on motor function and disease severity of Parkinson's disease. Objectives: Approximately 40% of patients with Parkinson's disease (PD) experience symptoms of depression. Our aim was to evaluate the effect of depression on disease severity, motor function and other phenotypic characteristics of PD. Patients And Methods: We studied 32 PD patients with major depression (PD-D) according to the DSM-IV criteria and 32 PD patients with no depression (PD-C) matched for gender, age of onset and duration. Results: Major depression in PD patients was associated with increased disease severity, poorer motor function and worse performance in the activities of daily living as measured by UPDRS scores. Furthermore, there was an association of depression with the severity of bradykinesia and axial rigidity. Conclusions: Depression in PD can have a profound negative impact on a patient's sense of wellbeing and motor functioning. Therefore, PD patients should be routinely and carefully screened for the presence of depression and appropriate management should be considered. Larger studies on the subject are warranted. abstract_id: PUBMED:34713931 Characterization of Lifestyle in Spinocerebellar Ataxia Type 3 and Association with Disease Severity. Background: Lifestyle could influence the course of hereditary ataxias, but representative data are missing. Objective: The objective of this study was to characterize lifestyle in spinocerebellar ataxia type 3 (SCA3) and investigate possible associations with disease parameters.
Methods: In a prospective cohort study, data on smoking, alcohol consumption, physical activity, physiotherapy, and body mass index (BMI) were collected from 243 patients with SCA3 and 119 controls and tested for associations with age of onset, disease severity, and progression. Results: Compared with controls, patients with SCA3 were less active and consumed less alcohol. Less physical activity and alcohol abstinence were associated with more severe disease, but not with progression rates or age of onset. Smoking, BMI, or physiotherapy did not correlate with disease parameters. Conclusion: Differences in lifestyle factors of patients with SCA3 and controls as well as associations of lifestyle factors with disease severity are likely driven by the influence of symptoms on behavior. No association between lifestyle and disease progression was detected. abstract_id: PUBMED:1680470 Smoking and movement disorders in psychiatric patients. Previous studies have suggested that tardive dyskinesia may occur more frequently in patients who smoke. Further evidence of an interaction between smoking and movement disorders includes the low lifetime exposure to cigarettes found in Parkinson's disease patients. In this study 126 patients with chronic psychiatric illnesses were blindly evaluated for tardive dyskinesia, neuroleptic-induced parkinsonism, and akathisia. Patients who smoked received significantly higher doses of neuroleptics but did not have significantly more frequent or more severe tardive dyskinesia or parkinsonism. Female smokers did have significantly more akathisia. These results are discussed with regard to interactions between smoking, central dopaminergic tone, and the psychopathology of extrapyramidal syndromes. The effect of smoking on neuroleptic blood levels as well as clinical symptomatology is also discussed. abstract_id: PUBMED:15331239 Smoking, nicotine and Parkinson's disease. Epidemiological studies show that smoking is associated with a lower incidence of Parkinson's disease (PD). This finding is important because it could provide clues about therapeutic strategies for protection against this debilitating movement disorder. Smoke contains numerous chemicals that could be responsible for the apparent protective effect. Here, a role for nicotine is considered, because this chemical stimulates brain dopaminergic systems and provides some symptomatic benefit in PD. Nicotine also has a neuroprotective action. Putative factors and signaling pathways involved in the actions of nicotine are discussed. An understanding of the molecular basis for the reduced occurrence of PD in tobacco users is crucial for the development of intervention strategies to reduce or halt disease progression. abstract_id: PUBMED:16681289 Influence of prior cognitive impairment on the severity of delirium symptoms among older patients. Delirium is common among hospitalized elderly patients with prior cognitive impairment. Detecting delirium superimposed on dementia is a challenge for nurses and doctors. As a result, delirium among demented elderly patients is of increasing interest to healthcare professionals. So far, studies have failed to describe how symptoms of delirium are altered by severity of dementia. This would be valuable information to improve the rate of detection by nurses of delirium among demented patients.
However, until now no research has examined the effect of severity of prior cognitive impairment on the severity of delirium symptoms among institutionalized older patients. This study describes the effect of severity of prior cognitive impairment on the severity of delirium symptoms among institutionalized older patients with delirium at the time of their admission to an acute care hospital. One hundred four institutionalized elderly people were included in this study and screened for delirium using the confusion assessment method. Patients with delirium (n = 71) were evaluated with the delirium index to determine the severity of the symptoms of delirium. The results showed that the severity of prior cognitive impairment influences the severity of most of the symptoms of delirium, particularly disordered attention, orientation, thought organization, and memory. Thus, taking into account the severity of prior cognitive impairment could help nurses to detect delirium among older patients. Answer: Yes, the degree of smoking appears to affect the severity of tardive dyskinesia (TD). A study examining the correlation between the degree of smoking and the severity of TD in patients suffering from schizophrenia found a strong correlation between the two, with higher degrees of smoking associated with more severe TD symptoms. This correlation persisted even when changes in cigarette consumption were measured against changes in the severity of TD. The study also noted that the impact of smoking on TD severity was greater in patients treated with typical neuroleptics compared to those treated with clozapine (PUBMED:18774276). Another study, which looked at the prevalence and severity of TD in relation to cigarette smoking among male chronic and medicated inpatients with schizophrenia, did not find a significant difference in the prevalence of TD between smokers and non-smokers. However, it did find that older patients with a longer duration of illness and more hospitalizations showed greater severity of TD (PUBMED:21723906). Additionally, a study that investigated the association between cigarette smoking and TD in patients with schizophrenia and the role of the CYP1A2 polymorphism found that the prevalence of TD was significantly higher among smokers than non-smokers. Smokers were at a significantly higher risk for TD, although no association was found between the CYP1A2 polymorphism and TD (PUBMED:12790158). These findings suggest that there is a relationship between smoking and the severity of TD, although the exact nature of this relationship and the underlying mechanisms remain to be fully elucidated.
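As a check on the figures above, the chi-square statistic reported in PUBMED:12790158 (chi2 = 5.57 for TD prevalence in smokers vs non-smokers) can be reproduced directly from the published counts. A minimal Python sketch, assuming the standard 2x2 chi-square without continuity correction:

```python
# 2x2 chi-square test of TD prevalence in smokers vs non-smokers, using
# the counts reported in PUBMED:12790158 (43/104 smokers with TD,
# 52/187 non-smokers with TD); should reproduce chi2 ~ 5.57.
a, b = 43, 104 - 43    # smokers: TD, no TD
c, d = 52, 187 - 52    # non-smokers: TD, no TD

n = a + b + c + d
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
print(f"TD prevalence: smokers {a/(a+b):.1%}, non-smokers {c/(c+d):.1%}")
print(f"chi-square (1 df, no continuity correction) = {chi2:.2f}")
```

With 1 degree of freedom, chi2 = 5.57 corresponds to p ≈ 0.018, matching the reported value.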
Instruction: First-trimester screening for trisomy 21 in twin pregnancy: does the addition of biochemistry make an improvement? Abstracts: abstract_id: PUBMED:16231401 First-trimester screening for trisomy 21 in twin pregnancy: does the addition of biochemistry make an improvement? Objective: To evaluate the effectiveness of the addition of first-trimester biochemistry to fetal nuchal translucency (NT) measurement in the Combined Test when screening for trisomy 21 in twin pregnancies. Methods: Maternal serum free beta-hCG and PAPP-A were determined at 8 to 12 weeks and fetal NT was measured at 11 to 14 weeks. The individual risks were estimated for each of the fetuses using both NT screening alone and the Combined Test. An invasive diagnostic procedure was offered when the risk was 1:250 or over in either one of the fetuses. In the first period, only the results of NT screening were clinically applied. After previous analysis, the Combined Test was introduced into clinical practice. Results: In the two-and-a-half-year study period, a complete follow-up was obtained in 100 twin pregnancies. Three fetuses (two pregnancies) with trisomy 21 were detected by both methods. The false-positive rate achieved by NT screening (14.3% of pregnancies and 8.6% of fetuses) was substantially reduced when first-trimester biochemistry was added in the Combined Test (5.1% of pregnancies and 3.6% of fetuses). Conclusion: The Combined Test appears to maintain the detection rate achieved by NT screening for trisomy 21 in twin pregnancies, but false-positive rates and invasive diagnostic procedures are reduced. abstract_id: PUBMED:10908764 First-trimester Down syndrome screening using dried blood biochemistry and nuchal translucency. Objective: To assess the effectiveness of free beta-hCG, pregnancy-associated plasma protein A, and nuchal translucency in a prospective first-trimester prenatal screening study for Down syndrome and trisomy 18. Methods: Risks were calculated for Down syndrome and trisomy 18 based on maternal age and biochemistry only (n = 10,251), nuchal translucency only (n = 5809), and the combination of nuchal translucency and biochemistry (n = 5809). Results: The study population included 50 Down syndrome and 20 trisomy 18 cases. Nuchal translucency measurement was done on 33 Down syndrome and 13 trisomy 18 cases. Down syndrome screening using combined biochemistry and ultrasound resulted in a false-positive rate of 4.5% (95% confidence interval [CI] 3.9%, 5.2%) and detection rate of 87.5% (95% CI 47%, 100%) in patients under age 35 years. In older patients, the false-positive rate was 14.3% (95% CI 12.7%, 15. 8%) and detection rate was 92% (95% CI 74%, 99%). For trisomy 18 screening, the false-positive rate was 0.4% (95% CI 0.24%, 0.69%) and detection rate was 100% (95% CI 40%, 100%) in younger patients, whereas in older patients the false-positive rate was 1.4% (95% CI 0. 9%, 2.0%) and detection rate was 100% (95% CI 66%, 100%). Using modeling, at a fixed 5% false-positive rate, the Down syndrome detection rate was 91%. Conversely, at a fixed 70% Down syndrome detection rate, the false-positive rate was 1.4%. Conclusion: First-trimester screening for Down syndrome and trisomy 18 is effective and offers substantial benefits to clinicians and patients. abstract_id: PUBMED:25582972 Information and knowledge about Down syndrome among women and partners after first trimester combined testing. 
We assessed reasons among women and partners for choosing combined ultrasound-biochemistry testing, information and knowledge about Down syndrome and decisions concerning invasive procedures and termination of pregnancy in a prospective cohort study in Uppsala County. In all, 105 pregnant women and 104 partners coming for a combined ultrasound-biochemistry test answered a questionnaire. The most common reason for a combined ultrasound-biochemistry test was "to perform all tests possible to make sure the baby is healthy". Internet and midwives were the most common sources of information. Seventy-two percent had not received information on what it means to live with a child with Down syndrome. Many expectant parents perceived information as insufficient. Both women and partners had varying or low levels of knowledge about medical, cognitive and social consequences of Down syndrome. Twenty-five percent had not decided on an invasive test if indicated and only 42% would consider termination of pregnancy with a Down syndrome diagnosis. abstract_id: PUBMED:9476320 First-trimester screening for fetal aneuploidy: biochemistry and nuchal translucency. Maternal dried whole-blood specimens were collected prospectively from 2010 singleton pregnancies between 9 + 0 and 13 + 4 weeks that included 18 chromosomally abnormal pregnancies (11 Down's syndrome, four trisomy 18, two trisomy 13 and one triploidy). A subset of 744 pregnancies underwent ultrasound nuchal translucency measurement and included seven Down's syndrome, four trisomy 18, two trisomy 13 and one triploidy. Patients were evaluated for risk of Down's syndrome and trisomy 18 based on biochemistry (free beta-human chorionic gonadotropin and pregnancy-associated plasma protein A), nuchal translucency and the combination of both. In prospective biochemical screening, false-positive rates for Down's syndrome and trisomy 18 were 5.1% (66/1297) and 1.9% (25/1297) in women < 35 years of age and 14.2% (99/695) and 1.6% (11/695) in women ≥ 35 years of age, respectively. The detection efficiency of aneuploidy was 6/6 (100%) in women < 35 years and 11/12 (92%) in women ≥ 35 years. Nuchal translucency measurement alone detected 57% (8/14) of cases of aneuploidy at a 5.8% (42/730) false-positive rate. Modelling with the age distribution of live births, a 5% false-positive rate resulted in Down's syndrome detection efficiency of 61% by biochemistry, 73% by nuchal translucency and 87% by combining both methods. The data in this study demonstrate that combined biochemical and ultrasound evaluation for Down's syndrome and other chromosomal abnormalities in the first trimester of pregnancy yields a detection capability that may exceed that of current second-trimester prenatal screening protocols. The potential for enhanced detection coupled to an earlier alert of fetal complications could represent a substantial advantage to both clinician and patient. abstract_id: PUBMED:15191443 Prospective evaluation of a first trimester screening program for Down syndrome and other chromosomal abnormalities using maternal age, nuchal translucency and biochemistry in an Australian population. Background: A combination of maternal age and ultrasound assessment of the nuchal translucency (NT) has been used in the first trimester to screen for chromosomal abnormality. In the United Kingdom, the addition of NT screening was shown to be beneficial.
Aims: To report the sensitivity of combined first trimester biochemistry and ultrasound screening for Down syndrome in an Australian private practice specialising in obstetric ultrasound. Methods: A prospective study in a private obstetric ultrasound practice. Over 22 months, 2121 patients were screened and data were analysed for sensitivity (detection) and false positive rates for all chromosome abnormalities. Results: There were 17 chromosomal abnormalities, five of which were Down syndrome. Using maternal age alone or age and biochemistry, four of the Down syndrome cases were detected, for 29% and 19% false positive rates, respectively. Using age and NT or age, NT and biochemistry, all the Down syndrome cases were detected, for false positive rates of 5.7% and 7.2%, respectively. The difference in detection rates for Down syndrome or other chromosomal abnormalities, using the four screening methods, did not reach statistical significance. However, the false positive rates in screening methods without ultrasound to assess the NT were significantly higher (P < 0.01). Conclusions: A combination of maternal age, NT and maternal serum biochemistry gives a high detection rate for both trisomy 21 and other chromosomal abnormalities. Down syndrome screening using either maternal age alone or age in combination with first trimester biochemistry conferred screen positive rates significantly higher than when combined with NT. abstract_id: PUBMED:15300745 First-trimester screening for trisomy 21 combining biochemistry and ultrasound at individually optimal gestational ages. An interventional study. Objective: To assess the effectiveness of the Combined Test in the prenatal detection of trisomy 21 in the general pregnant population using a new timing for the screening approach. Methods: First-trimester maternal serum biochemical markers (pregnancy-associated plasma protein-A and free beta-hCG) were determined in maternal serum at 7 to 12 weeks. Fetal nuchal translucency and gestational age were assessed at the 10- to 14-week ultrasound scan. A combined risk was estimated and delivered to the women the same day. When the risk was 1:250 or above, chorionic villus sampling was offered. Results: Mean gestational age at biochemistry was 9.4 weeks, and 12.3 weeks at ultrasound. In the 2780 studied pregnancies with a complete follow-up, observed detection rates were 88% (7/8) for trisomy 21 and 75% (3/4) for trisomy 18, with a 3.3% (92/2765) false-positive rate. Conclusion: The Combined Test, assessing biochemistry and ultrasound at individually optimal ages in the first trimester, showed an 88% detection rate for trisomy 21 with a remarkably reduced false-positive rate (3.3%). abstract_id: PUBMED:27403933 First trimester combined screening - focus on early biochemistry. First trimester combined screening (cFTS) for foetal trisomy 21 has become an established method in many countries. The screening is based on a combination of maternal-age-related risk, ultrasound (nuchal translucency) and two maternal serum biochemical markers, free beta human chorionic gonadotropin (FbhCG) and pregnancy associated plasma protein A (PAPP-A). The concentrations of these biochemical markers are affected by several maternal and pregnancy factors, which are discussed herein. Improvements in the algorithm have extended the screening to include trisomy 21 in mono- and dichorionic twin pregnancies, trisomy 18, trisomy 13 and triploidy.
The results from large databases have shown that the screening algorithms are efficient for a range of rare autosomal trisomies and marker chromosomes and for a broad range of other chromosomal aberrations. Recent data show that the strength of the individual markers is highly dependent on the gestational age of sampling and indicate a general increase in the performance of the screening for trisomy 21 when using blood samples from early in the first trimester at gestational age 8-10 weeks. abstract_id: PUBMED:29057435 Influence of first trimester biochemistry methodology on detection rate in screening for trisomy 21. Objectives: The purpose of the study was to compare detection rates (DR) of FMF-certified and non-certified biochemical tests (BC) in trisomy 21 screening at 11-13+6 weeks. Material And Methods: In 2267 singleton pregnancies, FMF-certified doctors measured crown-rump length (CRL) and nuchal translucency (NT). Serum samples were tested for free β-hCG and PAPP-A using 2 analysers (Delfia, Perkin Elmer; Immulite 2000, DPC); the results were expressed in MoM values and used for computer calculation of the risk for trisomy 21. The cut-off value for the high trisomy 21 risk was 1:300. Results: Comparison of free β-hCG MoMs by DPC and Delfia demonstrated statistically significant differences in normal and trisomy 21 fetuses, respectively. Similarly, statistically significant differences were noted for PAPP-A MoMs. The above differences in MoMs resulted in altered sensitivity in screening for aneuploidy. The application of the FMF-certified method ensures a markedly higher DR (74%) compared to non-certified tests (64%), both at a 5% FPR. The ROC analysis was performed in order to assess the efficacy of both tests. Results of trisomy 21 BC + NT risk scales using the Delfia and DPC methods are highly significant (p < 0.0001), which means that their discrimination ability is > 90%. The difference between results obtained using the Delfia and DPC methods is AUC = 0.0150 and is statistically significant (Z = 2.4728, p = 0.0134). Conclusions: The use of FMF-certified first trimester biochemistry analysers improves DR for trisomy 21. The use of non-certified analysers causes a reduction in DR and an increase in the invasive procedure rate. abstract_id: PUBMED:15057947 The impact of correcting for smoking status when screening for chromosomal anomalies using maternal serum biochemistry and fetal nuchal translucency thickness in the first trimester of pregnancy. Objectives: To evaluate the influence of cigarette smoking status on maternal serum free beta-hCG, PAPP-A and fetal nuchal translucency (NT) thickness at 11 to 14 weeks of gestation in a large cohort of women screened prospectively for chromosomal anomalies. Methods: Information on maternal cigarette smoking status, maternal age, maternal serum biochemical marker levels and fetal NT were collected from the prenatal screening computer records in two OSCAR screening centres. Data were available from 32,730 unaffected pregnancies and from 124 with Down syndrome. Statistical analysis of the marker levels in the smoking and non-smoking groups was carried out. The impact on false-positive rate of correcting for smoking status was assessed from a modelling exercise. Results: Prevalence of smoking was significantly affected by maternal age with an overall incidence of 11.5%, which varied from 35% in women under 20 to 7% in women over 35.
In the unaffected population, the median free beta-hCG MoM was significantly lower in the smoking group (0.97 vs 1.00), as was that for PAPP-A (0.84 vs 1.02). The standard deviation of the log(10) MoM free beta-hCG was lower in the smoking group and that for PAPP-A was higher in the smoking group. The difference in median marker levels did not seem to be related to the number of cigarettes smoked per day. In the group with Down syndrome, the median MoM free beta-hCG was not significantly different in the smokers (1.69 vs 1.86), nor was that for PAPP-A (0.53 vs 0.57). Fetal delta NT was not significantly different in the unaffected smokers (0.11 vs 0.0 mm) or in those with Down syndrome (1.96 vs 2.25 mm). In the smoking group, when screening using maternal serum biochemistry and age alone, the false-positive rate was 6.17%, compared to 4.67% in an age-matched group of non-smokers. Correcting for smoking status by dividing the measured MoM by the median found in the smoking group resulted in the false-positive rate falling to 4.40%. When screening using NT, maternal serum biochemistry and age, the false-positive rate in smokers was 4.48%, which reduced to 3.46% after correction, in line with the 3.76% in the non-smoking group. The impact on detection rate was too small to be accurately measured. Conclusions: The impact of smoking on first-trimester biochemical marker levels does not seem to be dose related. Whilst correcting first-trimester biochemical markers for maternal smoking status has little impact at the population level for detection rates, a considerable reduction in false-positive rate can be achieved, reducing the level to that seen in non-smokers. However, the effect on the individual patient-specific risk can be substantial and could certainly make a difference to the patient's decision on whether to have an invasive test. abstract_id: PUBMED:19690509 Technical standards and guidelines: prenatal screening for Down syndrome that includes first-trimester biochemistry and/or ultrasound measurements. This statement is intended to augment the current general ACMG Standards and Guidelines for Clinical Genetics Laboratories and to address guidelines specific to first-trimester screening for Down syndrome. The aim is to provide the laboratory the necessary information to ensure accurate and reliable Down syndrome screening results given a screening protocol (e.g., combined first trimester and integrated testing). Information about various test combinations and their expected performance is provided, but other issues such as availability of reagents, patient interest in early test results, access to open neural tube defect screening, and availability of chorionic villus sampling are all contextual factors in deciding which screening protocol(s) will be selected by individual health care providers. Individual laboratories are responsible for meeting the quality assurance standards described by the Clinical Laboratory Improvement Act, the College of American Pathologists, and other regulatory agencies, with respect to appropriate sample documentation, assay validation, general proficiency, and quality control measures. These guidelines address first-trimester screening that includes ultrasound measurement and interpretation of nuchal translucency thickness and protocols that combine markers from both the first and second trimesters. Laboratories can use their professional judgment to make modifications or additions.
Answer: Yes, the addition of first-trimester biochemistry to fetal nuchal translucency (NT) measurement in the Combined Test does make an improvement when screening for trisomy 21 in twin pregnancies. A study evaluating the effectiveness of this approach found that while both NT screening alone and the Combined Test detected three fetuses with trisomy 21, the false-positive rate was substantially reduced from 14.3% of pregnancies and 8.6% of fetuses with NT screening alone to 5.1% of pregnancies and 3.6% of fetuses with the addition of first-trimester biochemistry in the Combined Test. This indicates that the Combined Test maintains the detection rate achieved by NT screening but reduces the false-positive rates and the number of invasive diagnostic procedures (PUBMED:16231401).
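As background to the MoM-based screening arithmetic described above (PUBMED:29057435, PUBMED:15057947): a serum marker is expressed as a multiple of the median (MoM) for the gestational age, and the smoking correction amounts to dividing the measured MoM by the median MoM observed in smokers. The sketch below is illustrative only: the raw measured values and gestational-age medians are made up, while the smoker medians (0.97 for free beta-hCG, 0.84 for PAPP-A) are the ones quoted in PUBMED:15057947.

# Illustrative only: raw values and gestational-age medians are hypothetical.
def to_mom(measured, ga_median):
    """Express a serum marker as a multiple of the gestational-age median."""
    return measured / ga_median

free_bhcg_mom = to_mom(measured=45.0, ga_median=41.0)  # -> ~1.10 MoM
pappa_mom = to_mom(measured=2.1, ga_median=2.5)        # -> 0.84 MoM

# Smoking correction as described in PUBMED:15057947: divide the measured
# MoM by the median MoM seen in the smoking group.
SMOKER_MEDIANS = {"free_bhcg": 0.97, "pappa": 0.84}
pappa_mom_corrected = pappa_mom / SMOKER_MEDIANS["pappa"]  # -> 1.00 MoM

print(free_bhcg_mom, pappa_mom, pappa_mom_corrected)

The corrected MoM values then feed into the risk algorithm (with a cut-off such as the 1:300 mentioned above) exactly as uncorrected values would.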
Instruction: Does probability guided hysteroscopy reduce costs in women investigated for postmenopausal bleeding? Abstracts: abstract_id: PUBMED:25785283 Does probability guided hysteroscopy reduce costs in women investigated for postmenopausal bleeding? Objective: To evaluate whether a model to predict a failed endometrial biopsy in women with postmenopausal bleeding (PMB) and a thickened endometrium can reduce costs without compromising diagnostic accuracy. Design, Setting, And Population: Model-based cost-minimization analysis. Methods: A decision analytic model was designed to compare two diagnostic strategies for women with PMB: (I) attempting office endometrial biopsy and performing outpatient hysteroscopy after failed biopsy, and (II) using the predicted probability of a failed endometrial biopsy, based on patient characteristics, to guide the decision for endometrial biopsy or immediate hysteroscopy. Robustness of assumptions regarding costs was evaluated in sensitivity analyses. Main Outcome Measures: Costs for the different strategies. Results: At different cut-offs for the predicted probability of failure of an endometrial biopsy, strategy I was generally less expensive than strategy II. The costs for strategy I were always € 460; the costs for strategy II varied between € 457 and € 475. At a 65% cut-off, a possible saving of € 3 per woman could be achieved. Conclusions: Individualizing the decision to perform an endometrial biopsy or immediate hysteroscopy in women presenting with postmenopausal bleeding based on patient characteristics does not increase the efficiency of the diagnostic work-up. abstract_id: PUBMED:22442519 Hysteroscopy in postmenopausal bleeding. Objectives: 1) To study the etiology of postmenopausal bleeding (PMB). 2) To study the significance of hysteroscopy in evaluation of the etiopathogenic factors. 3) To correlate the diagnoses obtained by transvaginal sonography (TVS) and hysteroscopy with the histopathologic diagnosis. 4) To assess the feasibility of conservative management with hysteroscopy in PMB. Design: Prospective study from January 2008 to June 2009. Setting: Department of Obstetrics and Gynecology of Ruby Hall Clinic, Pune. Patients: Sixty postmenopausal women with complaint of bleeding per vaginum. Interventions: Clinical and sonographic evaluation, followed by diagnostic and/or therapeutic hysteroscopy and guided biopsy. Hysteroscopic images were analyzed and compared with histopathologic results. Measurements And Main Results: On hysteroscopy, the endometrium was classified as suggestive of normal, atrophic, endometrial hyperplasia or endometrial carcinoma. Histopathologic diagnosis was taken as the gold standard to determine the efficacy of hysteroscopy in diagnosing endometrial pathologies. The sensitivity and specificity of hysteroscopy in diagnosing endometrial pathologies were assessed. Conclusions: In women with PMB, hysteroscopy is the basic tool that allows precise diagnosis of various endouterine pathologies. The average sensitivity of hysteroscopy was 97% in our study and the specificity was 98.66%. Hence, we can conclude that it is highly accurate for evaluating endometrial pathologies. For obvious benign lesions, it also provides treatment in the same sitting, therefore avoiding an extensive, morbid, and expensive procedure like hysterectomy. abstract_id: PUBMED:32218372 Clinical and Sonographic Evaluation of Postmenopausal Bleeding (PMB) Followed by Diagnostic and/or Therapeutic Hysteroscopy and Guided Biopsy in Jordanian Hospitals.
Background and Objectives: The goal of this study was to evaluate the clinical and sonographic assessment of postmenopausal bleeding (PMB) followed by diagnostic and/or therapeutic hysteroscopy and guided biopsy in Jordanian hospitals. Materials and Methods: A retrospective multi-centric study was performed in hospitals in Al-Karak and Amman from 2014 to 2016. The study recruited 189 cases to evaluate the aetiology of postmenopausal bleeding. Atrophic endometrium was observed as a major cause of postmenopausal bleeding according to histopathology. The cases were also stratified according to parity, among whom nulliparous patients were observed. Results: Hysteroscopy was observed to be effective for the diagnosis of postmenopausal bleeding. Conclusion: There is a need to assess more approaches for the diagnosis of postmenopausal bleeding among women. abstract_id: PUBMED:30193870 Office hysteroscopy for postmenopausal women: Feasibility and correlation with transvaginal ultrasound. Objective: To evaluate the feasibility of office hysteroscopy in postmenopausal women and the correlation between transvaginal ultrasound (TV-US) and hysteroscopy. Methods: Postmenopausal women referred for office hysteroscopy for postmenopausal bleeding or increased endometrial thickness were included. A 3.5-mm rigid hysteroscope was used without cervical dilatation or local anaesthesia. A speculum was never used. Data concerning women's symptoms and TV-US findings were collected. Results: Four-hundred-seventy postmenopausal women referred for office hysteroscopy were included in this descriptive study. Three-hundred-fifty women (74.5%) experienced abnormal uterine bleeding. The success rate of office hysteroscopy was 76.4% and was significantly higher in cases of postmenopausal bleeding (80.9%) than in women without postmenopausal bleeding (63.3%) (p=0.01). Three-hundred-sixteen women had both a successful hysteroscopy and TV-US. The correlation between hysteroscopy and TV-US was 68.5% for the diagnosis of increased endometrial thickness, polyps and submucosal myoma (Kappa=0.28). The rate of endometrial cancer for women with postmenopausal bleeding was 12.6% (n=44) while it was 1.7% (n=2) for asymptomatic women. Two (4.3%) out of these 46 women with endometrial cancer had normal hysteroscopy, while 7 (15.2%) had a normal TV-US (including endometrial thickness below 5 mm). Among the 54 women without bleeding and with a thickened endometrium, one (1.8%) had endometrial cancer. Conclusion: Office hysteroscopy is successful without anaesthesia for 76.4% of postmenopausal women. The correlation between TV-US and hysteroscopy is low, and we recommend performing both sonography and hysteroscopy in women with postmenopausal bleeding. abstract_id: PUBMED:27935919 Hysteroscopy in Abnormal Uterine Bleeding vs Ultrasonography and Histopathology Report in Perimenopausal and Postmenopausal Women. Introduction: Abnormal uterine bleeding is one of the most common morbidities leading women to seek medical attention in the gynecology outpatient department. This study aims to evaluate the hysteroscopy findings in the diagnosis of AUB and their correlation with ultrasonography findings and histopathology reports.
Methods: This was a prospective comparative study in which ultrasonography was performed in fifty patients with abnormal uterine bleeding attending the OPD, followed by hysteroscopy. After removing the hysteroscope, dilation and curettage was done and the sample was sent for histopathology. Finally, the hysteroscopy findings were compared with the sonographic and pathological results. Results: In the study, women were aged from 45 to 64 years, with symptom durations of 15 days to 6 months. Postmenopausal women sought medical attention earlier than the menstruating women. Common symptoms were menorrhagia, metrorrhagia and postmenopausal bleeding. The most common abnormality was menorrhagia (32%), followed by postmenopausal bleeding (28%). Ultrasonography showed abnormal findings in 46%, comprising 17 (47%) menstruating women and 6 (43%) postmenopausal women. Among the 50 women, 17 (34%) had negative findings and 33 (66%) had abnormal findings, the most common finding being hyperplastic endometrium. Hysteroscopy correctly diagnosed atrophic endometrium, polyps and endometrial carcinoma, which were also confirmed by the histopathology findings. Conclusions: Hysteroscopy is a reliable method for evaluating cases of AUB and can be used as a first-line diagnostic method for benign lesions. Hysteroscopy-guided biopsy is the most accurate in diagnosing pathology. abstract_id: PUBMED:17785039 Hysteroscopy in the diagnosis of postmenopausal bleeding. Postmenopausal bleeding (PMB) accounts for 5% of gynaecology referrals. Investigations should exclude malignancy and pre-malignancy, and diagnose the benign conditions that need treatment. The three modalities that are most commonly used are transvaginal ultrasound scan (TVS), endometrial biopsy (EB) and hysteroscopy. Most authors agree that the first-line investigation should be TVS, followed by endometrial assessment (EB or hysteroscopy) if the endometrial thickness is >4 mm. When scanning demonstrates the possibility of pathology, outpatient hysteroscopy and biopsy are the gold standard for investigating the endometrial cavity. Focal pathology can be removed during the hysteroscopy, thereby reducing hospital admissions and costs. abstract_id: PUBMED:14616256 The value of outpatient hysteroscopy in diagnosing endometrial pathology in postmenopausal women with and without hormone replacement therapy. Background: This study examined the endometrial pathology in postmenopausal women who had experienced abnormal uterine bleeding, both with and without hormone replacement therapy (HRT), and who subsequently underwent outpatient hysteroscopy. Methods: This is a retrospective observational and comparative study. Patient data in the period from 1998 to 2001 inclusive were examined. The influence of various factors on the incidence of pathology was determined, for example HRT use, the type of HRT, age, etc. In addition, the reliability of hysteroscopy in identifying pathology was calculated. Results: A total of 503 patients presented with postmenopausal bleeding, 299 postmenopausal women not on HRT and 204 women on HRT. The incidence of endometrial carcinoma was significantly higher in the postmenopausal women not on HRT (RR > 10), and occurred in women who were significantly older (p = 0.003). Other forms of pathology, such as complex hyperplasia, benign endometrial polyps and fibroids, did not significantly differ between the groups. Outpatient hysteroscopy correctly identified endometrial carcinoma in 98.2% of cases; however, it was less reliable in diagnosing other forms of pathology.
Conclusions: The relative risk of a postmenopausal woman presenting to a hysteroscopy clinic with abnormal bleeding caused by endometrial cancer is more than 10 times the risk of an HRT counterpart. Furthermore, both hysteroscopy and pipelle sampling are required to accurately identify all cases of endometrial carcinoma. abstract_id: PUBMED:17516956 Diagnostic hysteroscopy in abnormal uterine bleeding: a systematic review and meta-analysis. Background: This study was conducted to assess the accuracy and feasibility of diagnostic hysteroscopy in the evaluation of intrauterine abnormalities in women with abnormal uterine bleeding. Search Strategy: Electronic databases were searched from 1 January 1965 to 1 January 2006 without language selection. The medical subject heading (MeSH) and textwords for the following terms were used: hysteroscopy, diagnosis, histology, histopathology, hysterectomy, biopsy, sensitivity and specificity. Setting: University Hospital. Selection Criteria: The inclusion criterion was a report on the accuracy of diagnostic hysteroscopy in women with abnormal uterine bleeding compared to histology collected with guided biopsy during hysteroscopy, operative hysteroscopy or hysterectomy. Data Collection And Analysis: Electronic databases were searched for relevant studies and references were cross-checked. Validity was assessed and data were extracted independently by two authors. Heterogeneity was calculated and data were pooled. Subgroup analysis was performed according to validity criteria, study quality, menopausal state, time, setting and performance of the procedure. The main outcome measures were the pooled sensitivity, specificity, likelihood ratios, post-test probabilities and feasibility of diagnostic hysteroscopy in the prediction of uterine cavity abnormalities. Post-test probabilities were derived from the likelihood ratios and prevalence of intrauterine abnormalities among included studies. Feasibility included technical success rate and complication rate. Main Results: One population of homogeneous data could be identified, consisting of patients with postmenopausal bleeding. In this subgroup the positive and negative likelihood ratios were 7.9 (95% CI 4.79-13.10) and 0.04 (95% CI 0.02-0.09), raising the pre-test probability from 0.61 to a post-test probability of 0.93 (95% CI 0.88-0.95) for positive results and reducing it to 0.06 (95% CI 0.03-0.13) for negative results. The pooled likelihood ratios of all studies included, calculated with the random effects model, were 6.5 (95% CI 4.1-10.4) and 0.08 (95% CI 0.07-0.10), changing the pre-test probability of 0.46 to post-test probabilities of 0.85 (95% CI 0.78-0.90) and 0.07 (0.06-0.08) for positive and negative results respectively. Subgroup analyses gave similar results. The overall success rate of diagnostic hysteroscopy was estimated at 96.9% (SD 5.2%, range 83-100%). Conclusions: This systematic review and meta-analysis shows that diagnostic hysteroscopy is both accurate and feasible in the diagnosis of intrauterine abnormalities. abstract_id: PUBMED:23292675 Ultrasound-guided reoperative hysteroscopy: managing endometrial ablation failures. Endometrial ablation and hysteroscopic myomectomy and polypectomy are having an increasing impact on the care of women with abnormal uterine bleeding (AUB). The complications of these procedures include the late onset of recurrent vaginal bleeding, cyclic lower abdominal pain, hematometra and the inability to adequately sample the endometrium in women with postmenopausal bleeding.
According to the 2007 ACOG Practice Bulletin, approximately 24% of women treated with endometrial ablation will undergo hysterectomy within 4 years. By employing careful cervical dilation, a wide variety of gynecologic resectoscopes, and continuous sonographic guidance, it is possible to explore the entire uterine cavity in order to locate areas of sequestered endometrium, adenomyosis, and occult hematometra. Sonographically guided reoperative hysteroscopy offers a minimally invasive technique to avoid hysterectomy in 60% to 88% of women who experience endometrial ablation failures. The procedure is adaptable to an office-based setting and offers a very low incidence of operative complications and morbidity. In addition, the technique provides a histologic specimen, which is essential in adequately evaluating the endometrium in postmenopausal women or women at high risk for the development of adenocarcinoma of the endometrium. abstract_id: PUBMED:32368939 Office hysteroscopy in the management of women with postmenopausal bleeding. Postmenopausal bleeding (PMB) is a relevant aspect for health-care providers in clinical practice: the first objective is to rule out potential gynecological cancer. The purpose of this narrative review is to evaluate the role of office hysteroscopy in the management of PMB. Office hysteroscopy is a minimally invasive procedure allowing direct visualization of uterine pathology without the need for general anesthesia and the use of an operating room, generating cost savings and greater compliance among patients. Here, we focus on major intrauterine diseases (polyps, submucosal myomas, endometrial hyperplasia, and cancer) as causes of PMB. Office hysteroscopy appears to be safe and feasible, and could allow accurate diagnosis of intrauterine pathologies, especially those with a focal growth pattern, otherwise misdiagnosed with blinded procedures. However, studies focusing exclusively on postmenopausal women are still few, so further research, especially randomized controlled trials, is needed. Answer: Based on the provided abstracts, a model to predict a failed endometrial biopsy in women with postmenopausal bleeding (PMB) and a thickened endometrium was evaluated to see if it could reduce costs without compromising diagnostic accuracy. The study compared two diagnostic strategies: (I) attempting office endometrial biopsy and performing outpatient hysteroscopy after failed biopsy and (II) using predicted probability of a failed endometrial biopsy based on patient characteristics to guide the decision for endometrial biopsy or immediate hysteroscopy. The results indicated that strategy I was generally less expensive than strategy II, with the costs for strategy I always being € 460, while the costs for strategy II varied between € 457 and € 475. At a 65% cut-off, a possible saving of € 3 per woman could be achieved. The conclusion was that individualizing the decision to perform an endometrial biopsy or immediate hysteroscopy based on patient characteristics does not increase the efficiency of the diagnostic work-up (PUBMED:25785283). Therefore, according to this study, probability-guided hysteroscopy does not significantly reduce costs in women investigated for postmenopausal bleeding.
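As an arithmetic aside, the post-test probabilities quoted in the meta-analysis above (PUBMED:17516956) can be checked with the standard odds form of Bayes' theorem:

\[
\text{pre-test odds} = \frac{0.61}{1 - 0.61} \approx 1.56, \qquad
\text{post-test odds} = 1.56 \times \mathrm{LR}^{+} = 1.56 \times 7.9 \approx 12.4, \qquad
p = \frac{12.4}{1 + 12.4} \approx 0.93.
\]

Applying the negative likelihood ratio instead gives \(1.56 \times 0.04 \approx 0.063\), i.e. \(p \approx 0.06\), matching the reported values.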
Instruction: Can training improve laypersons' helping behaviour in first aid? Abstracts: abstract_id: PUBMED:22562070 Can training improve laypersons' helping behaviour in first aid? A randomised controlled deception trial. Background: There is limited evidence indicating that laypersons trained in first aid provide better help, but do not help more often than untrained laypersons. This study investigated the effect of conventional first aid training versus conventional training plus supplementary training aimed at decreasing barriers to helping. Methods: The authors conducted a randomised controlled trial. After 24 h of conventional first aid training, the participants either attended an experimental lesson to reduce barriers to helping or followed a control lesson. The authors used a deception test to measure the time between the start of the unannounced simulated emergency and help-seeking behaviour, and the number of particular helping actions. Results: The authors randomised 72 participants across the two groups. 22 participants were included in the analysis for the experimental group and 36 in the control group. The authors found no statistically or clinically significant differences for any of the outcome measures. The time until seeking help (geometric mean and 95% CI) was 55.5 s (42.9 to 72.0) in the experimental group and 56.5 s (43.0 to 74.3) in the control group. 57% of the participants asked a bystander to seek help, 40% left the victim to seek help themselves and 3% did not seek any help. Conclusion: Supplementary training on dealing with barriers to helping did not alter the helping behaviour. The timing and appropriateness of the aid provided can be improved. Trial Registration: The authors registered this trial at ClinicalTrials.gov as NCT00954161. abstract_id: PUBMED:19157654 Effectiveness of nonresuscitative first aid training in laypersons: a systematic review. Study Objective: This study reviewed evidence on the effects of nonresuscitative first aid training on competence and helping behavior in laypersons. Methods: We identified randomized and nonrandomized controlled trials and interrupted time series on nonresuscitative first aid training for laypersons by using 12 databases (including MEDLINE, EMBASE, and PsycINFO), hand searching, reference checking, and author communication. Two reviewers independently evaluated selected studies with the Cochrane Effective Practice and Organisation of Care Review Group quality criteria. One reviewer extracted data with a standard form and another checked them. In anticipation of substantial heterogeneity across studies, we elected a descriptive summary of the included studies. Results: We included 4 studies, 3 of which were randomized trials. We excluded 11 studies on quality issues. Two studies revealed that participants trained in first aid demonstrated higher written test scores than controls (poisoning first aid: relative risk 2.11, 95% confidence interval [CI] 1.64 to 2.72; various first aid cases: mean difference 4.75, 95% CI 3.02 to 6.48). Two studies evaluated helping responses during unannounced simulations. First aid training improved the quality of help for a bleeding emergency (relative risk 25.94; 95% CI 3.60 to 186.93), not the rate of helping (relative risk 1.13; 95% CI 0.88 to 1.45). Training in first aid and helping behavior increased the helping rates in a chest pain emergency compared with training in first aid only (relative risk 2.80; 95% CI 1.05 to 7.50) or controls (relative risk 3.81; 95% CI 0.98 to 14.89).
Participants trained in first aid only did not help more than controls (relative risk 1.36; 95% CI 0.28 to 6.61). Conclusion: First aid programs that also train participants to overcome inhibitors of emergency helping behavior could lead to better help and higher helping rates. abstract_id: PUBMED:28228110 A nationwide survey of first aid training and encounters in Norway. Background: Bystander first aid can improve survival following out-of-hospital cardiac arrest or trauma. Thus, providing first aid education to laypersons may lead to better outcomes. In this study, we aimed to establish the prevalence and distribution of first aid training in the populace, how often first aid skills are needed, and self-reported helping behaviour. Methods: We conducted a telephone survey of 1000 respondents who were representative of the Norwegian population. Respondents were asked where and when they had first aid training and if they had ever encountered situations where first aid was necessary, and the responses were stratified by occupation. First aid included cardio-pulmonary resuscitation (CPR) and basic life support (BLS). To test theoretical first aid knowledge, respondents were subjected to two hypothetical first aid scenarios. Results: Among the respondents, 90% had received first aid training, and 54% had undergone first aid training within the last 5 years. The workplace was the most common source of first aid training. Of the 43% who had been in a situation requiring first aid, 89% had provided first aid in that situation. There were considerable variations among different occupations in first aid training and exposure to situations requiring first aid. Theoretical first aid knowledge was not as good as expected in light of the high share who had first aid training. In the presented scenarios, 42% of respondents would initiate CPR in an unconscious patient not breathing normally, and 46% would provide an open airway to an unconscious road traffic victim. First aid training was correlated with better theoretical knowledge, but time since first aid training was not. Conclusions: A high proportion of the Norwegian population had first aid training, and interviewees reported high willingness to provide first aid. Theoretical first aid knowledge was worse than expected. While first aid is part of the national school curriculum, few listed school as the source of their first aid training. abstract_id: PUBMED:31524986 Mapping the public first-aid training landscape: a scoping review. While the public can play a vital role in saving lives during emergencies, intervention is only effective if people have the skills, confidence, and willingness to help. This review employs a five-stage framework to systematically analyse first aid and emergency helping literature from 22 countries (predominantly in Asia, Australia, Europe, and the United States). The review covers 54 articles that investigate public first-aid knowledge and uptake of first-aid training (40); public confidence in first-aid skills and willingness to help during an emergency (21); and barriers to or enablers of learning first aid and delivering first aid in an emergency (25). The findings identify high levels of perceived knowledge, confidence, and willingness to help, supporting the contention that the public can play a vital role during an emergency.
However, the findings also point to low uptake levels, low tested skill-specific knowledge, and barriers to learning first aid and helping, indicating that the first-aid training landscape is in need of improvement. abstract_id: PUBMED:26093230 Evidence-based educational pathway for the integration of first aid training in school curricula. Background: "Calling for help, performing first aid and providing Cardiopulmonary Resuscitation (CPR)" is part of the educational goals in secondary schools in Belgium (Flanders). However, for teachers it is not always clear at what age children can be taught which aspects of first aid. In addition, it is not clear what constitutes "performing first aid", and we strongly advocate that the first aid curriculum be broader than CPR training alone. Objectives: To develop an evidence-based educational pathway to enable the integration of first aid into the school curriculum by defining the goals to be achieved for knowledge, skills and attitudes, for different age groups. Methods: Studies were identified through electronic database searches (The Cochrane Library, MEDLINE, Embase). We included studies on first aid education for children and adolescents up to 18 years old. A multidisciplinary expert panel contributed their practice experience and expert opinion and discussed the available evidence. Results: We identified 5822 references and finally retained 30 studies (13 experimental and 17 observational studies), including studies concerning the emergency call (7 studies), cardiopulmonary resuscitation (18 studies), AED (Automated External Defibrillator) use (6 studies), recovery position (5 studies), choking (2 studies), injuries (5 studies), and poisoning (2 studies). Recommendations (educational goals) were derived after carefully discussing the currently available evidence in the literature and balancing the skills and attitudes of children of different ages. Conclusions: An evidence-based educational pathway with educational goals concerning learning first aid for each age group was developed. This educational pathway can be used for the integration of first aid training in school curricula. abstract_id: PUBMED:36834439 Evaluation of the Conversations about Non-Suicidal Self-Injury Mental Health First Aid Course: Effects on Knowledge, Stigmatising Attitudes, Confidence and Helping Behaviour. Background: Non-suicidal self-injury (NSSI) is a common mental health problem, with a 19% lifetime prevalence in Australian adolescents and 12% in adults. Though rates of professional help-seeking for NSSI are low, disclosure to family and friends is more common, providing opportunities for them to encourage professional support. Mental Health First Aid® Australia's Conversations about Non-Suicidal Self-Injury course provides evidence-based training for the general public to support a person engaging in NSSI. Methods: This uncontrolled trial evaluated the effects of the Conversations about Non-Suicidal Self-Injury course on participants' knowledge, confidence, stigmatising attitudes, and intended and actual helping behaviours. Surveys were administered pre- and post-course, and at a six-month follow-up. A linear mixed-model analysis determined mean change over time, and effect sizes were estimated using Cohen's d. Course satisfaction was assessed using descriptive statistics and summative content analysis of qualitative data.
Results: The pre-course survey was completed by 147 Australian participants (77.5% female, mean age 45.8 years); 137 (93.2%) completed the post-course survey and 72 (49%) the follow-up survey. Knowledge, confidence, quality of intended helping behaviours, and quality of actual helping behaviours increased significantly at both time points. Social distance decreased significantly at all time points and stigma decreased significantly at post-course. The course was perceived to be highly acceptable by participants. Conclusions: There is initial evidence that the Conversations about Non-Suicidal Self-Injury course is effective and acceptable for members of the public who may support a person engaging in NSSI. abstract_id: PUBMED:29851974 Systematic review and meta-analysis of Mental Health First Aid training: Effects on knowledge, stigma, and helping behaviour. Objective: To provide an up-to-date assessment of the effectiveness of the Mental Health First Aid (MHFA) training program on mental health knowledge, stigma and helping behaviour. Design: Systematic review and meta-analysis. Methods: A systematic search of electronic databases was conducted in October 2017 to identify randomised controlled trials or controlled trials of the MHFA program. Eligible trials were in adults, used any comparison condition, and assessed one or more of the following outcomes: mental health first aid knowledge; recognition of mental disorders; treatment knowledge; stigma and social distance; confidence in or intentions to provide mental health first aid; provision of mental health first aid; mental health of trainees or recipients of mental health first aid. Risk of bias was assessed and effect sizes (Cohen's d) were pooled using a random effects model. Separate meta-analyses examined effects at post-training, up to 6 months post-training, and greater than 6 months post-training. Results: A total of 18 trials (5936 participants) were included. Overall, effects were generally small-to-moderate post-training and up to 6 months later, with effects up to 12 months later unclear. MHFA training led to improved mental health first aid knowledge (ds 0.31-0.72), recognition of mental disorders (ds 0.22-0.52) and beliefs about effective treatments (ds 0.19-0.45). There were also small reductions in stigma (ds 0.08-0.14). Improvements were also observed in confidence in helping a person with a mental health problem (ds 0.21-0.58) and intentions to provide first aid (ds 0.26-0.75). There were small improvements in the amount of help provided to a person with a mental health problem at follow-up (d = 0.23) but changes in the quality of behaviours offered were unclear. Conclusion: This review supports the effectiveness of MHFA training in improving mental health literacy and appropriate support for those with mental health problems up to 6 months after training. Trial Registration: PROSPERO (CRD42017060596). abstract_id: PUBMED:27579055 Suicide first aid guidelines for Sri Lanka: a Delphi consensus study. Background: Sri Lanka has one of the highest suicide rates in the world. Gatekeeper programs aimed at specific target groups could be a promising suicide prevention strategy in the country. The aim of this study was to develop guidelines that help members of the public to provide first aid to persons in Sri Lanka who are at risk of suicide. Methods: The Delphi method was used to elicit consensus on potential helping statements to include in the guidelines.
These statements describe information members of the public should have and actions they can take to help a person who is experiencing suicidal thoughts. An expert panel, composed of mental health and suicide experts in Sri Lanka, rated each statement. The panellists were encouraged to suggest any additional action that was not included in the original questionnaire and, in particular, to include items that were culturally appropriate or gender specific. Responses to open-ended questions were used to generate new items. These items were included in the subsequent Delphi rounds. Three Delphi rounds were carried out. Statements were accepted for inclusion in the guidelines if they were endorsed (rated as essential or important) by at least 80% of the panel. Statements endorsed by 70-79% of the panel were re-rated in the following round. Statements with less than 70% endorsement, or re-rated items that did not receive 80% or higher endorsement, were rejected. Results: The output from the Delphi process was a set of endorsed statements. In the first round questionnaire 473 statements were presented to the panel and 58 new items were generated from responses to the open-ended questions. Of the total 531 statements presented, 304 were endorsed. These statements were used to develop the suicide first aid guidelines for Sri Lanka. Conclusion: By engaging Sri Lankans who are experts in the field of mental health or suicide, this research developed culturally appropriate guidelines for providing mental health first aid to a person at risk of suicide in Sri Lanka. The guidelines may serve as a basis for developing training for members of the public to provide mental health first aid to persons at risk of suicide as part of Sri Lanka's suicide prevention strategy. abstract_id: PUBMED:35331327 Evaluation of the Conversations About Gambling Mental Health First Aid course: effects on knowledge, stigmatising attitudes, confidence and helping behaviour. Background: The effects of problem gambling are wide-ranging, affecting many aspects of health and negatively impacting the person who gambles, their family and friends, and their community. People experiencing problem gambling have low rates of help-seeking and perceive many barriers to treatment, although evidence suggests that encouragement and support from friends and family can increase rates of help-seeking. Mental Health First Aid Australia's Conversations About Gambling course aims to teach members of the public evidence-based strategies for recognising and responding to signs of problem gambling in a person they know. Methods: This research evaluated the effects of the Conversations About Gambling course on participants' knowledge, confidence, stigmatising attitudes, intended helping behaviour and actual helping behaviour towards a person experiencing problem gambling. Participants from Australia completed surveys before the course, immediately after the course and six months later. Changes over time (pre-course to post-course, and pre-course to 6-month follow-up) were assessed with linear mixed models. Descriptive statistics and content analyses of open-ended questions pertaining to participants' satisfaction with the course were also produced. Results: Between 2018 and 2020, 166 participants were recruited into this study. At 6-month follow-up, 87 participants (52.4%) provided data.
Participants' knowledge about gambling and gambling problems, confidence, desire for social distance and intentions to help a person experiencing problem gambling significantly improved from pre-course to post-course, and from pre-course to 6-month follow-up. The quality of some actions taken to support a person they knew who was experiencing problem gambling also improved from pre-course to 6-month follow-up, in line with the teachings of the course. Participants perceived the course to be highly acceptable. Conclusions: The results of this initial evaluation of Mental Health First Aid Australia's Conversations About Gambling course suggest that it is an effective and acceptable educational intervention for those who wish to support a person experiencing problem gambling. abstract_id: PUBMED:30012118 Re-development of mental health first aid guidelines for supporting Aboriginal and Torres Strait Islanders who are experiencing suicidal thoughts and behaviour. Background: Suicide is a leading cause of death among Indigenous Australians. Friends, family and frontline workers (for example, teachers, youth workers) are often best positioned to provide initial assistance if someone is suicidal. Culturally appropriate expert consensus guidelines on how to provide mental health first aid to Australian Aboriginal and Torres Strait Islander persons who are experiencing suicidal thoughts or behaviour were developed in 2009. This study describes the re-development of these guidelines to ensure they contain the most current recommended helping actions. Methods: The Delphi consensus method was used to elicit consensus on potential helping statements to be included in the guidelines. These statements describe helping actions that Indigenous community members and non-Indigenous frontline workers can take, and information they should have, to help someone who is experiencing suicidal thoughts or displaying suicidal behaviour. A panel was formed, comprising 27 Aboriginal and Torres Strait Islander people who have expertise in Indigenous suicide prevention. The panellists were presented with the helping statements via online questionnaires and were encouraged to suggest re-wording of statements and any additional helping statements that were not included in the original questionnaire. Statements were only accepted for inclusion in the guidelines if they were endorsed by ≥90% of panellists as essential or important. Results: From a total of 301 statements shown to the expert panel, 172 were endorsed as helping statements to be included in the re-developed guidelines. Conclusions: Aboriginal and Torres Strait Islander suicide prevention experts were able to reach consensus on appropriate strategies for providing mental health first aid to an Aboriginal or Torres Strait Islander person experiencing suicidal thoughts or behaviour. The re-development of the guidelines has resulted in more comprehensive guidance than the earlier version, for which the panel had rated 166 helping statements and had endorsed 52. These re-developed guidelines can be used to inform Indigenous suicide gatekeeper training courses. Answer: Yes, training can improve laypersons' helping behavior in first aid. Studies have shown that participants trained in first aid demonstrated higher written test scores and improved quality of help for certain emergencies, such as a bleeding emergency, compared to controls (PUBMED:19157654).
Additionally, a nationwide survey in Norway found that 90% of respondents had received first aid training, and 89% of those who had been in a situation requiring first aid had provided it, indicating a high willingness to provide first aid among trained individuals (PUBMED:28228110). Furthermore, a scoping review identified that while there is high perceived knowledge, confidence, and willingness to help among the public, actual uptake levels and tested skill-specific knowledge were low, suggesting that training can enhance these aspects (PUBMED:31524986). However, it is important to note that not all training may lead to increased helping behavior. A randomised controlled trial found that supplementary training aimed at decreasing barriers to helping did not alter the helping behavior of participants who had already received conventional first aid training (PUBMED:22562070). This suggests that while first aid training can improve helping behavior, the design and focus of the training are crucial for it to be effective. Moreover, evidence-based educational pathways have been developed to integrate first aid training into school curricula, which can define goals for knowledge, skills, and attitudes for different age groups, potentially improving laypersons' helping behavior from a young age (PUBMED:26093230). Training programs like Mental Health First Aid also show effectiveness in improving knowledge, reducing stigma, and increasing confidence and helping behavior towards those with mental health problems (PUBMED:29851974). In summary, training can improve laypersons' helping behavior in first aid, but the effectiveness of the training may depend on its content, delivery, and focus on overcoming barriers to helping.
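Several of the trials summarised above report helping outcomes as relative risks with 95% confidence intervals (e.g., the bleeding and chest pain emergencies in PUBMED:19157654). For readers unfamiliar with how such figures are computed, here is a minimal sketch using the standard log-normal approximation; the 2x2 counts are hypothetical and not taken from any of the cited studies.

import math

def relative_risk(a, b, c, d):
    """RR and 95% CI for a 2x2 table:
    a/b = helpers/non-helpers among trained participants,
    c/d = helpers/non-helpers among controls."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts: 24/40 trained participants helped vs 12/40 controls.
print(relative_risk(24, 16, 12, 28))  # -> (2.0, ~1.17, ~3.42)

A confidence interval whose lower bound stays above 1.0, as here, is what lets a trial claim a statistically significant increase in helping.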
Instruction: Does off-pump coronary surgery reduce morbidity and mortality? Abstracts: abstract_id: PUBMED:12324727 Does off-pump coronary surgery reduce morbidity and mortality? Objective: To compare hospital outcomes of on-pump and off-pump coronary artery bypass surgery. Methods: From 1997 to 2000, primary coronary artery bypass grafting was performed in 481 patients off pump and in 3231 patients on pump. Hospital outcomes were compared between propensity-matched pairs of 406 on-pump and 406 off-pump patients. The 2 groups were similar in age (P = .9), left ventricular function (P = .7), extent of coronary artery disease (P = .5), carotid artery disease (P = .4), and chronic obstructive pulmonary disease (P = .5). However, off-pump patients had more previous strokes (P = .05) and peripheral vascular disease (P = .02); on-pump patients had a higher preoperative New York Heart Association class (P = .01). Results: In the matched pairs the mean number of bypass grafts was 2.8 ± 1.0 in off-pump patients and 3.5 ± 1.1 in on-pump patients (P < .001). Fewer grafts were performed to the circumflex (P < .001) and right coronary (P = .006) artery systems in the off-pump patients. Postoperative mortality, stroke, myocardial infarction, and reoperation for bleeding were similar in the 2 groups. There was more encephalopathy (P = .02), sternal wound infection (P = .04), red blood cell use (P = .002), and renal failure requiring dialysis (P = .03) in the on-pump patients. Conclusions: Both off- and on-pump procedures produced excellent early clinical results with low mortality. An advantage of an off-pump operation was less postoperative morbidity; however, less complete revascularization introduced uncertainty about late results. A disadvantage of on-pump bypass was higher morbidity that seemed attributable to cardiopulmonary bypass. abstract_id: PUBMED:12142195 The effect of off-pump coronary artery bypass surgery on in-hospital mortality and morbidity. Objective: Off-pump coronary artery bypass (OPCAB) surgery is being increasingly reported to show better outcomes compared to conventional on-pump bypass grafting. We examined the effect of OPCAB on in-hospital mortality and morbidity, while adjusting for patient and disease characteristics, in four institutions in the North West of England. Methods: Between April 1997 and March 2001, 10,941 consecutive patients underwent isolated coronary artery bypass surgery at these four institutions. Of these, 7.7% were performed off-pump. We used logistic regression to examine the effect of OPCAB on in-hospital mortality and morbidity after adjusting for potentially confounding variables. Results: The crude odds ratio (OR) for death (off-pump versus on-pump coronary bypass grafting) was 0.48 (95% confidence interval, CI 0.26-0.92; P=0.023). After adjustment for all major risk factors, the OR for death was 0.59 (95% CI 0.31-1.12; P=0.105). Off-pump patients had a substantially reduced risk of post-operative stroke (0.6% versus 2.3%, respectively; adjusted OR 0.26, 95% CI 0.09-0.70; P=0.008) and a significant reduction in post-operative hospital stay. Other morbidity outcomes were similar in both groups. Conclusions: Off-pump coronary artery bypass incurs no increased risk of in-hospital mortality. In contrast, there is a significant reduction in morbidity in patients undergoing off-pump coronary bypass grafting when compared to that performed on cardiopulmonary bypass.
abstract_id: PUBMED:23520171 Does off-pump coronary revascularization reduce mortality in re-operative coronary artery surgery? A meta-analysis of observational studies. Background: Re-operative coronary artery bypass grafting (CABG) is a challenging operation that is often performed in a high-risk patient group. Avoiding cardiopulmonary bypass (CPB) in these patients is hypothesised to be advantageous due to the reduced invasiveness and physiological stress of off-pump coronary artery bypass grafting (OPCAB). The aim of this study was to assess whether OPCAB may improve outcomes in patients undergoing re-operative CABG. Methods: Twelve studies, incorporating 3471 patients, were identified by systematic literature review. These were meta-analysed using random-effects modelling. Primary endpoints were 30-day and mid-term mortality. Secondary endpoints were completeness of revascularization, mean number of grafts per patient and the effect of intra-operative conversion on mortality. Results: A significantly lower rate of 30-day mortality was observed with OPCAB (OR 0.51, 95% CI [0.35, 0.74]); however, no difference was demonstrated in mid-term mortality. Significantly less complete revascularization and fewer mean grafts per patient were observed in the OPCAB group. Meta-regression revealed no change in 30-day mortality when the effect of conversion from one technique to the other was assessed. Conclusions: Off-pump techniques may reduce early mortality in selected patients undergoing re-operative CABG; however, this does not persist into mid-term follow-up. OPCAB may also lead to intra-operative conversion and, although this did not affect outcomes in this study, these results are constrained by the limited data available. Furthermore, OPCAB may increase target vessel revascularization and, consequently, incomplete revascularization which, whilst not reflected in the short-term outcomes, requires longer-term follow-up in order to be fully assessed. abstract_id: PUBMED:23123992 Increasing surgical experience with off-pump coronary surgery does not mitigate the morbidity of emergency conversion to cardiopulmonary bypass. Objective: Emergency conversion during off-pump coronary artery bypass (OPCAB) confers significant morbidity. We sought to determine whether the outcomes in these patients have changed as our experience with off-pump techniques has increased. Methods: Between January 1999 and December 2010, 4763 patients underwent coronary artery surgery. An off-pump strategy was attempted in 4415 cases (92.7%). The results of the most recent 50 patients who required emergency conversion were compared with the preceding 50 conversions and with patients who underwent either OPCAB (n = 2737) or on-pump coronary surgery (coronary artery bypass grafting) (n = 268) during the same time frame. Results: The emergency conversion rate was 2.27% (n = 100), being 2.97% for the first 50 cases and 1.77% for the subsequent 50 patients. The two sequential groups of emergency conversions had similar indications and timing of conversion and comparable outcomes. When compared with patients who underwent OPCAB, the more recent 50 conversions had higher mortality (P = 0.002) and more frequent sternal wound infection (P = 0.036), hemorrhage requiring reoperation (P = 0.003), respiratory failure (P < 0.0001), and all-cause sepsis (P = 0.001). Compared with the on-pump group, the more recent conversions had higher mortality (P = 0.055) and a greater rate of postoperative sepsis (P = 0.002).
Conclusions: The incidence of emergency conversion during OPCAB has decreased with increasing surgical experience; however, the morbidity in these patients remains essentially unchanged. The outcomes in these patients remain worse than those in nonconverted patients. Safer bailout strategies during OPCAB are still warranted. abstract_id: PUBMED:15514591 Emergency conversion to cardiopulmonary bypass during attempted off-pump revascularization results in increased morbidity and mortality. Objective: We sought to evaluate outcomes and predictors of emergency conversion to cardiopulmonary bypass during attempted off-pump coronary bypass surgery. Methods: From January 1999 through July 2002, 1678 consecutive isolated coronary artery bypass operations were performed at Lenox Hill Hospital, with the intention to treat all patients with off-pump coronary bypass surgery. Fifty (2.97%) patients required urgent conversion to cardiopulmonary bypass. All the preoperative, intraoperative, and postoperative variables were collected and analyzed in accordance with the New York State Cardiac Surgery Reporting System. Multivariate regression analysis was performed to determine predictors for conversion. Results: In-hospital mortality and major morbidity were significantly lower in the nonconverted group compared with the converted patients (mortality: 1.47% [n = 24] vs 12% [n = 6], P = .001; stroke: 1.1% [n = 18] vs 6% [n = 3], P = .02; renal failure: 1.23% [n = 20] vs 6% [n = 3], P = .02; deep sternal wound infection: 1.54% [n = 25] vs 8% [n = 4], P = .009; respiratory failure: 3.75% [n = 61] vs 28% [n = 14], P < .0001; nonconverted vs converted patients, respectively). The annual incidence of conversion decreased during the study period. There was a significant reduction in the incidence of conversion after routine use of a cardiac positioning device to perform lateral and inferior wall grafts (4.2% [n = 27] vs 2.3% [n = 23], P = .04). None of the preoperative variables were independent predictors of conversion on multivariate regression analysis. Conclusions: Because emergency conversion to cardiopulmonary bypass during attempted off-pump coronary bypass surgery results in significantly higher morbidity and mortality, studies comparing off-pump coronary bypass surgery with conventional coronary artery surgery should include converted patients in the off-pump group. In our experience, emergency conversion is an unpredictable event. The incidence of conversion decreases with increasing experience of surgeons in performing off-pump coronary surgery and use of a cardiac positioning device. abstract_id: PUBMED:25328605 Early outcome of off-pump versus on-pump coronary revascularization. Introduction: Coronary artery bypass surgery (CABG) performed with cardiopulmonary bypass (CPB) or without CPB (off-pump) can be associated with different mortality and morbidity, and their outcomes remain uncertain. The goal of this study was to evaluate the early outcome of on-pump versus off-pump CABG. Methods: We conducted a retrospective database review of 13,866 patients (13,560 patients undergoing on-pump CABG and 306 patients undergoing off-pump CABG) at Tehran Heart Center between January 2002 and January 2007. We compared preoperative, operative, and postoperative characteristics between them. Results: In-hospital mortality in the on-pump group was 0.8% compared to 0.7% in the off-pump group (P=0.999) and in-hospital morbidity was 11.7% and 6.5%, respectively (OR: 1.533, 95% CI: 0.902-2.605, P=0.114).
Postoperative atrial fibrillation was more prevalent in on-pump versus off-pump surgery (6.0% vs 3.0%, P=0.028); however, there were no statistically significant differences in other postoperative complications with regard to cardiac arrest (P=0.733), prolonged ventilation (P=0.363), brain stroke (P=0.999), renal failure (P=0.525), and postoperative bleeding (P=0.999). The mean length of stay in hospital (P=0.156) and in ICU (P=0.498) was also similar between the two groups. Conclusion: The results from an Iranian population-based study showed similar early mortality and morbidity of off-pump CABG in comparison to on-pump surgery. abstract_id: PUBMED:38344305 High Preoperative Systemic Immune-Inflammation Index Values Significantly Predicted Poor Outcomes After on-Pump Coronary Artery Bypass Surgery. Purpose: On-pump coronary artery bypass grafting (CABG) is one of the most common revascularization surgical therapies for coronary artery disease (CAD). However, owing to the use of cardiopulmonary bypass ("on-pump"), the body develops systemic inflammatory response syndrome, which leads to the risk of morbidity and mortality. This study aimed to determine the perioperative outcomes of patients who underwent on-pump CABG surgery using the systemic immune-inflammation index (SII). Patients And Methods: This single-center retrospective study used secondary data from patients' electronic medical records and medical records archives at the National Cardiovascular Center Harapan Kita, who underwent on-pump CABG from January 2019 to December 2022. A total of 1056 on-pump CABG procedures were performed after exclusion. Lymphocyte, segmental neutrophil, and platelet counts from the preoperative data were used to calculate the SII values. An SII cutoff value of 528.715 × 10^3/mm^3 divided the patients into two groups. Results: The SII values of 1056 patients were calculated, among which 490 (46%) patients had a preoperative SII value of ≥528.715 × 10^3/mm^3. Multivariate analysis showed that a high SII significantly prolonged the duration of surgery (OR 1.005, 95% CI 1.003-1.006) and cardiopulmonary bypass (CPB) time (OR 1.007, 95% CI 1.005-1.009). High SII values significantly predicted prolonged mechanical ventilation (OR 6.154, 95% CI 3.938-9.617), intensive care unit (ICU) stay (OR 6.332, 95% CI 4.232-9.474), and hospital stay (OR 3.517, 95% CI 2.711-4.562). Regarding other perioperative outcomes, a high SII significantly predicted the risk of postoperative atrial fibrillation (AF), cardiac arrest, acute myocardial infarction, and mortality. Conclusion: A high preoperative SII value can predict morbidity and mortality in patients undergoing on-pump CABG surgery.
The number of off-pump CABG cases was 11,717 (9.9% of the total). The use of an off-pump procedure was associated with a decrease in risk-adjusted operative mortality from 2.9% with conventional CABG to 2.3% in the off-pump group (p < 0.001). The use of an off-pump procedure decreased the risk-adjusted major complication rate from 14.15% with conventional CABG to 10.62% in the off-pump group (p < 0.0001). Patients receiving off-pump procedures were less likely to die (adjusted odds ratio 0.81, 95% CI 0.70 to 0.91) and less likely to have major complications (adjusted odds ratio 0.77, 95% CI 0.72 to 0.82). Conclusions: Off-pump CABG is associated with decreased mortality and morbidity after coronary artery bypass grafting. Off-pump CABG may prove superior to conventional CABG in appropriately selected patients. abstract_id: PUBMED:16002958 Bypassing the pump: changing practices in coronary artery surgery. Improvements in techniques in coronary revascularization over the past decade have led to a revival of interest in off-pump coronary artery surgery. A fifth of coronary revascularization procedures are now performed off-pump. Randomized trials comparing off-pump surgery with conventional coronary artery bypass grafting using cardiopulmonary bypass (CPB) mainly included low-risk patients and were therefore underpowered to detect a difference in mortality. Current evidence, however, suggests a significant reduction in morbidity with off-pump surgery. The avoidance of CPB and the elimination of any aortic manipulation may significantly reduce the risk of stroke. Those benefits are likely to be most marked in older, sicker patients, who represent an increasing proportion of the surgical population. abstract_id: PUBMED:24200395 Morbidity but not mortality is decreased after off-pump coronary artery bypass surgery. Background: Off-pump coronary bypass surgery (CABG) has been advocated to avoid the physiologic perturbations related to cardiopulmonary bypass and improve outcomes compared with on-pump CABG. Previous reports have been inconsistent, and thus its benefits remain uncertain. This retrospective study compared outcomes between on-pump and off-pump CABG from a large multicenter cohort of propensity-matched patients. Methods: The study consisted of 21,640 patients (19,639 [90.8%] on-pump, 2,001 [9.2%] off-pump) who underwent isolated CABG between January 1, 2008, and June 30, 2011, and were entered into a statewide collaborative database. Univariate analysis compared 37 baseline characteristics between on-pump and off-pump procedures. Patients were matched 1:1 based on similarities in propensity scores derived from statistically significant baseline characteristics. Propensity scores and surgery type were used in conditional logistic regression models for predicting each of 14 postoperative outcomes using the sample of 3,898 matched procedures. Results: Patients undergoing off-pump CABG had significantly fewer complications overall, including decreased red blood cell transfusion, stroke, intensive care unit and ventilator time, reoperation for bleeding, and length of stay. There was no difference in renal failure, wound infection, discharge location, or 30-day readmission rate. Although off-pump patients received fewer bypass grafts per patient (2.5 ± 1.2 versus 3.0 ± 1.1; p < 0.001), operative mortality was similar for the two groups (1.8% on-pump versus 2.3% off-pump; p = 0.30).
Conclusions: Off-pump CABG was associated with less morbidity, shorter length of stay, and similar mortality compared with on-pump procedures, suggesting that it can be a safe and effective alternative to standard on-pump CABG. However, the limited use of off-pump CABG in this multicenter analysis may restrict the generalizability of these results, and realistically defines the limited degree of acceptance of this technique in a real-world environment. Answer: The evidence from the provided abstracts suggests that off-pump coronary artery bypass grafting (OPCAB) may reduce morbidity in certain patient populations, but the impact on mortality is less clear and may depend on specific circumstances. Several studies indicate that OPCAB is associated with a reduction in postoperative complications compared to on-pump coronary artery bypass grafting (CABG). For instance, one study found that OPCAB was associated with a lower incidence of encephalopathy, sternal wound infection, red blood cell use, and renal failure requiring dialysis (PUBMED:12324727). Another study reported a significant reduction in postoperative stroke and a shorter postoperative hospital stay for off-pump patients (PUBMED:12142195). Similarly, a meta-analysis of observational studies suggested that off-pump techniques might reduce early mortality in selected patients undergoing re-operative CABG, although this benefit did not persist into mid-term follow-up (PUBMED:23520171). However, the impact of OPCAB on mortality is less definitive. Some studies found no significant difference in mortality between off-pump and on-pump groups (PUBMED:25328605), while others reported a decrease in risk-adjusted operative mortality from conventional CABG to off-pump CABG (PUBMED:11603449). It is important to note that emergency conversion from off-pump to on-pump during surgery is associated with increased morbidity and mortality (PUBMED:15514591), and increasing surgical experience with off-pump techniques does not seem to mitigate the morbidity of emergency conversion (PUBMED:23123992). In summary, off-pump coronary surgery may reduce morbidity, particularly in terms of specific postoperative complications, but the evidence for a reduction in mortality is mixed. The benefits of OPCAB may be more pronounced in certain high-risk patient groups or in the context of re-operative surgery. It is also crucial to consider the surgical team's experience with off-pump techniques and the potential risks associated with emergency conversion to on-pump CABG (PUBMED:24200395).
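A note on the index used in PUBMED:38344305 above: the abstract reports the dichotomizing cutoff (528.715 × 10³/mm³) but never spells out how SII is computed. The sketch below assumes the standard SII definition, platelet count × neutrophil count / lymphocyte count, so the function names and the example blood counts are illustrative only, not taken from the study.

    # Minimal sketch of the SII grouping used in PUBMED:38344305.
    # Assumption: the standard SII formula (platelets * neutrophils / lymphocytes);
    # the abstract only reports the cutoff, not the formula itself.

    def systemic_immune_inflammation_index(platelets, neutrophils, lymphocytes):
        """All counts in 10^3 cells/mm^3; the SII comes out in the same units."""
        return platelets * neutrophils / lymphocytes

    def is_high_sii(platelets, neutrophils, lymphocytes, cutoff=528.715):
        """True if the patient would fall into the study's high-SII group."""
        return systemic_immune_inflammation_index(platelets, neutrophils, lymphocytes) >= cutoff

    # Hypothetical example: platelets 250, neutrophils 4.2, lymphocytes 1.5
    # gives SII = 250 * 4.2 / 1.5 = 700, i.e. the high-SII group.
    print(is_high_sii(250, 4.2, 1.5))  # True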
Instruction: Is increased choroidal thickness associated with primary angle closure? Abstracts: abstract_id: PUBMED:24697943 Is increased choroidal thickness association with primary angle closure? Purpose: To investigate whether increased choroidal thickness is a risk factor for primary angle closure. Methods: The study consisted of 162 patients with primary angle-closure (primary angle-closure suspect [PACS], 73 eyes; primary angle closure [PAC], 33 eyes; and primary angle-closure glaucoma [PACG], 56 eyes) and 87 healthy controls with no ophthalmic symptoms. EDI-OCT was used to measure and to compare the macular choroidal thickness between the primary angle-closure and normal eyes. The association between the choroidal thickness and the presence of primary angle closure was evaluated using logistic regression models to determine the odds ratio. Results: The primary angle-closure eyes had a thicker choroid than the control eyes at all macular locations (all p < 0.05). It remained significantly thicker after controlling for age, axial length (AL) and gender, except at 1 mm, 3 mm superior and 3 mm nasal from the fovea. Univariate analysis showed that the subfoveal choroidal thickness (SFCT) was significantly associated with primary angle closure. After adjustment for the anterior chamber depth (ACD), vitreous chamber depth (VCD) and lens thickness (LT), the SFCT was still significantly associated with primary angle closure. The odds ratio (OR) and 95% confidence interval (CI) were 1.008 (1.003-1.014). The choroidal thickness at most macular locations was positively associated with primary angle closure. The area under the curve (AUC) for detecting primary angle closure was 0.619 with SFCT, and the result was statistically significant (p = 0.002). Conclusions: Primary angle-closure eyes have a higher level of macular choroidal thickness than normal eyes. Increased choroidal thickness might be associated with primary angle closure. However, the SFCT showed only low specificity in screening for primary angle closure. abstract_id: PUBMED:23599327 Does acute primary angle-closure cause an increased choroidal thickness? Purpose: We compared the choroidal thickness of the eyes of patients with acute primary angle-closure (APAC) with fellow eyes in the same patients. Methods: The analysis included 21 participants with unilateral APAC-affected eyes and 21 fellow eyes with a diagnosis of primary angle-closure suspect (PACS). Enhanced depth imaging-optical coherence tomography (EDI-OCT) was used to measure the macular and peripapillary retinal and choroidal thickness in both eyes. The average choroidal thickness of the APAC eyes at each location or segment was compared to that of the fellow eyes. Results: At all macular locations, the choroidal thickness was greatest at the subfovea for both groups. Comparison of the choroidal thickness between the groups showed that the thickness in the APAC eyes was significantly greater than in the PACS eyes at all locations except at 1 mm, 3 mm superior, 3 mm inferior, and 3 mm temporal from the fovea (P < 0.005). The mean subfoveal choroidal thickness was 349.0 ± 78.1 μm in the APAC eyes and 308.1 ± 70.5 μm in the PACS eyes, with a statistically significant difference (P < 0.005). Multivariable linear regression analysis showed that the subfoveal choroidal thickness was significantly greater in association with the APAC diagnosis and diastolic blood pressure and thinner in association with older subjects.
Conclusions: APAC eyes have a higher level of macular choroidal thickness than PACS eyes when the IOP is reduced. However, the source of this difference is unclear and must be investigated further. abstract_id: PUBMED:38239961 Three-dimensional choroidal vascularity index and choroidal thickness in fellow eyes of acute and chronic primary angle-closure using swept-source optical coherence tomography. Aim: To compare the three-dimensional choroidal vascularity index (CVI) and choroidal thickness between fellow eyes of acute primary angle-closure (F-APAC) and chronic primary angle-closure glaucoma (F-CPACG) and the eyes of normal controls. Methods: This study included 37 patients with unilateral APAC, 37 with asymmetric CPACG without prior treatment, and 36 healthy participants. Using swept-source optical coherence tomography (SS-OCT), the macular and peripapillary choroidal thickness and three-dimensional CVI were measured and compared globally and sectorally. Pearson's correlation analysis and multivariate regression models were used to evaluate choroidal thickness or CVI with related factors. Results: The mean subfoveal CVIs were 0.35±0.10, 0.33±0.09, and 0.29±0.04, and the mean subfoveal choroidal thicknesses were 315.62±52.92, 306.22±59.29, and 262.69±45.55 µm in the F-APAC, F-CPACG, and normal groups, respectively. All macular sectors showed significantly higher CVIs and choroidal thickness in the F-APAC and F-CPACG eyes than in the normal eyes (P<0.05), while there were no significant differences between the F-APAC and F-CPACG eyes. In the peripapillary region, the mean overall CVIs were 0.21±0.08, 0.20±0.08, and 0.19±0.05, and the mean overall choroidal thicknesses were 180.45±54.18, 174.82±50.67, and 176.18±37.94 µm in the F-APAC, F-CPACG, and normal groups, respectively. There were no significant differences between any two groups in any peripapillary sector. Younger age, shorter axial length, and the F-APAC or F-CPACG diagnosis were significantly associated with higher subfoveal CVI and thicker subfoveal choroidal thickness (P<0.05). Conclusion: The fellow eyes of unilateral APAC or asymmetric CPACG have higher macular CVI and choroidal thickness than those of the normal controls. Neither CVI nor choroidal thickness can distinguish between eyes predisposed to APAC or CPACG. A thicker choroid with a higher vascular volume may play a role in the pathogenesis of primary angle-closure glaucoma. abstract_id: PUBMED:27588269 Association between choroidal thickness and anterior chamber segment in eyes with narrow or open-angle. Aim: To investigate the relationship between choroidal thickness and anterior chamber segment in subjects with narrow- or open-angle eyes. Methods: The subfoveal choroidal thickness was measured with enhanced depth-imaging optical coherence tomography and anterior chamber parameters were measured with ultrasound biomicroscopy in one eye of 23 subjects with open-angle eyes and 38 subjects with narrow-angle eyes. The mean age was 59.52±7.04 years for narrow-angle subjects and 60.76±7.23 years for open-angle subjects (P=0.514). Multivariate linear regression analysis was performed to assess the association between choroidal thickness and narrow-angle parameters. Results: There were no differences in subfoveal choroidal thickness between open- and narrow-angle subjects (P=0.231).
Anterior chamber parameters, including central anterior chamber depth, trabecular iris angle, iris thickness 500 µm from the scleral spur (IT500), and ciliary body thickness at 1 mm and 2 mm from the scleral spur (CBT1, CBT2) showed significant differences between the two groups (P<0.05). Subfoveal choroidal thickness showed a negative correlation (β=-0.496, P=0.016) only with anterior chamber depth in the open-angle group, and with age (β=-0.442, P=0.003) and IT500 (β=-0.399, P=0.008) in the narrow-angle group. However, subfoveal choroidal thickness was not correlated with trabecular iris angle, anterior chamber depth, ciliary body thickness, or central corneal thickness in the narrow-angle group. Conclusion: Choroidal thickness did not differ between the two groups and was not correlated with anterior chamber parameters in narrow-angle subjects, suggesting a lack of relationship between choroidal thickness and primary angle-closure glaucoma. abstract_id: PUBMED:34333686 Clinical observation of macular choroidal thickness in primary chronic angle-closure glaucoma. Purpose: To comparatively analyze differences in macular choroidal thickness and volume in primary chronic angle-closure glaucoma (PACG) eyes. Methods: Thirty-one PACG patients were sequentially selected for this case-control study. Thirty-one eyes with PACG were included in group A, 31 fellow eyes were included in group B, and group C included 67 normal eyes. Enhanced-depth imaging optical coherence tomography (EDI-OCT) was used to measure choroidal thickness and volume. Results: The choroidal thicknesses and volumes of the central subfield macula (CSM), nasal inner macula (NIM), temporal inner macula (TIM), inferior inner macula (IIM), temporal outer macula (TOM), inferior outer macula (IOM), and mean macula (MM) in group A were all higher than those in group C (P < 0.05). The choroidal thicknesses and volumes of the NIM, superior inner macula (SIM), IIM, nasal outer macula (NOM), and MM in group B were all higher than those in group C (P < 0.05). No statistically significant differences were found between groups A and B (P > 0.05). The choroidal thicknesses of different macular regions in group A were not correlated with the mean defect (MD). Conclusion: Increased macular choroidal thickness may be a common anatomical characteristic of PACD eyes. Macular choroidal thickness is not a good marker for assessing PACG severity. abstract_id: PUBMED:36675335 Changes in Anterior Chamber Angle and Choroidal Thickness in Patients with Primary Angle-Closure Glaucoma after Phaco-Goniosynechialysis. We aimed to observe changes in angle width and choroidal thickness (CT) before and after phacoemulsification intraocular lens implantation (PEI) combined with goniosynechialysis (GSL) in patients with primary angle-closure glaucoma (PACG) complicated by cataracts. This prospective cohort study included 60 patients with PACG complicated by cataracts from the Department of Ophthalmology of Shaoxing People's Hospital, China. Patients underwent PEI combined with GSL (PEI-GSL group) or laser peripheral iridotomy (LPI) followed by PEI (PEI group). Intraocular pressure (IOP) and endothelial counts were significantly decreased in both groups after surgery, while best-corrected visual acuity and central anterior chamber depth were significantly increased. However, there were no significant differences between the two groups. The opening degrees of the anterior chamber angle at 12, 3, 6, and 9 o'clock were recorded as AA12, AA3, AA6, and AA9, respectively.
Anterior chamber depth was significantly increased postoperatively compared to preoperatively in both groups, with no significant between-group differences (all ps > 0.05). At 1, 3, and 6 months postoperatively, the width at the AA12, AA3, and AA9 points was higher in the PEI-GSL group than in the PEI group (all ps < 0.05). Significant between-group differences at AA6 were observed preoperatively (p = 0.023) and at 1 (p = 0.027) and 3 (p = 0.033) months postoperatively but not at 6 months postoperatively (p = 0.055). CT was smaller postoperatively than preoperatively (all ps < 0.001). The present study suggests that patients with PACG who underwent PEI with or without GSL had reduced IOP and CT after surgery. abstract_id: PUBMED:31576492 Increased choroidal thickness in primary angle closure measured by swept-source optical coherence tomography in Caucasian population. Purpose: A role of the choroid has been suggested in the pathophysiology of angle closure. We assessed the choroidal thickness (CT) in Caucasian patients with primary angle closure (PAC) and in a subgroup of patients with plateau iris using swept-source optical coherence tomography (SS-OCT) compared to normal eyes. Methods: This prospective cohort study in a hospital-based population in a tertiary center compared consecutive patients with PAC to healthy controls. A subgroup analysis of patients with plateau iris was also performed. Choroidal thickness was measured by SS-OCT in the subfoveal area (SFCT) and at 1- and 3-mm eccentricity superiorly, inferiorly, nasally and temporally from the fovea. Results: Compared to the 25 eyes of 13 control patients [7 women, mean (SD) age, 56.6 (15.7) years], the 45 eyes of 25 patients with PAC [15 women, mean (SD) age, 55.7 (10.7) years] had a significantly increased SFCT. SFCT was 355.36 μm (SD 85.97) in PAC eyes versus 286.08 μm (SD 98.09) in control eyes (p = 0.009). The CT was also significantly increased compared to control eyes in other macular areas (p < 0.05), except at 3 mm temporal to the fovea. In the plateau iris subgroup, a trend toward increased CT that did not reach significance (except at 3 mm nasal to the fovea) was observed in all studied macular areas compared to control eyes. Conclusion: In eyes of Caucasian patients with PAC, the CT is increased compared to controls. Increased CT could contribute to the pathophysiology of PAC with a possible choroidal expansion and dysfunction of choroidal ganglion cells. abstract_id: PUBMED:26164737 Choroidal physiology and primary angle closure disease. Primary angle closure disease (PACD), prevalent in Asian countries, is generally associated with a shallower anterior chamber, a shorter axial length, thicker lens, hyperopia, and female sex. Other physiologic factors, however, may be important, especially with regard to triggering acute primary angle closure. Thickening of the choroid has been demonstrated in untreated and treated, acute and chronic PACD eyes. Recently, there has been growing interest in studying the role of the choroid in the pathophysiology of PACD. The emergence of new imaging technology such as the enhanced depth imaging mode of spectral-domain optical coherence tomography and swept-source optical coherence tomography has contributed to understanding PACD pathologies. We summarize the functions of the choroid and choroidal changes in the pathogenesis of PACD, and discuss potential future developments.
abstract_id: PUBMED:34937227 Correlation between choroidal thickness and intraocular pressure control in primary angle-closure glaucoma. Purpose: To study the correlation between choroidal thickness (CT) and IOP control in primary angle-closure glaucoma (PACG). Methods: In total, 61 patients (102 eyes) with PACG underwent subfoveal CT (SFCT) scanning using enhanced depth imaging-optical coherence tomography. The subjects with PACG were further grouped as controlled IOP (≤21 mm Hg on maximal medical therapy) and uncontrolled IOP (>21 mm Hg on maximal medical therapy). The average CT of the PACG eyes was calculated and compared between both groups. Correlation analyses related CT, and the intereye difference in CT, to the disease parameters. Results: The mean CT was 274.38 ± 42.10 μm in 102 PACG eyes. SFCT was significantly increased in the uncontrolled IOP group as compared with the controlled IOP group. The mean SFCT was 245.57 ± 62.10 μm in the controlled group and 294.46 ± 51.05 μm in the uncontrolled group (P < 0.01). Factors associated with a thicker choroid were younger age, high IOP, and higher optic nerve head cupping (P < 0.001). Neither the visual field-mean deviation (VF-MD) nor pattern standard deviation (PSD) was found to be associated with overall CT. The intereye asymmetry in CT was significantly associated with poor VF-MD and PSD. Conclusion: A thicker choroid in PACG eyes may be a risk factor for poor IOP control on medical anti-glaucoma therapy. A choroid thicker than that of the fellow eye is a poor prognostic sign, and these eyes should be monitored closely. abstract_id: PUBMED:25907997 Is increased choroidal thickness a risk factor for primary angle-closure glaucoma Primary angle-closure glaucoma (PACG) is the main form of glaucoma in China. The pathological mechanism of PACG is not yet well understood. Traditional anatomic disorders are characterized by biometric factors such as shallow anterior chamber depth, narrow anterior chamber angle, short axial length, and thick crystalline lens. Newly found potential risk factors include dynamic changes of the iris and choroid. With the development of enhanced depth imaging spectral-domain optical coherence tomography (EDI-OCT) and swept-source OCT (SS-OCT), it is possible to measure choroidal thickness accurately. Several studies have found the choroid of PACG eyes to be thicker than that of primary open-angle glaucoma (POAG) and normal subject eyes, which suggests that increased choroidal thickness might be another risk factor for PACG. Answer: Yes, increased choroidal thickness is associated with primary angle closure. Studies have shown that eyes with primary angle-closure have a higher level of macular choroidal thickness compared to normal eyes (PUBMED:24697943). This association remains significant even after adjusting for factors such as anterior chamber depth, vitreous chamber depth, and lens thickness (PUBMED:24697943). In acute primary angle-closure (APAC) eyes, the choroidal thickness is also significantly greater than in primary angle-closure suspect (PACS) eyes (PUBMED:23599327). Furthermore, the fellow eyes of unilateral APAC or asymmetric chronic primary angle-closure glaucoma (CPACG) have higher macular choroidal vascularity index (CVI) and choroidal thickness than those of normal controls (PUBMED:38239961). However, it is important to note that while there is an association, the specificity of subfoveal choroidal thickness (SFCT) for screening primary angle closure is low (PUBMED:24697943).
Additionally, in a study comparing choroidal thickness in narrow-angle and open-angle subjects, no significant differences were found, suggesting a lack of relationship between choroidal thickness and primary angle-closure glaucoma (PUBMED:27588269). Despite this, other studies have found increased macular choroidal thickness in primary chronic angle-closure glaucoma (PACG) eyes compared to normal eyes (PUBMED:34333686), and a significant increase in choroidal thickness in Caucasian patients with primary angle closure (PAC) compared to healthy controls (PUBMED:31576492). Overall, the evidence suggests that increased choroidal thickness may play a role in the pathogenesis of primary angle-closure glaucoma, although its utility as a screening or diagnostic marker may be limited due to variability and overlap with normal eyes (PUBMED:24697943, PUBMED:27588269, PUBMED:34333686, PUBMED:31576492).
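One detail of PUBMED:24697943 in this record is easy to misread: the reported odds ratio for SFCT, 1.008 (1.003-1.014), is a per-unit estimate from a logistic model, so its practical size depends on the thickness difference considered. The arithmetic below compounds it over larger differences; the assumption that the unit is 1 µm of SFCT is implied but not stated in the abstract, and the 50/100 µm deltas are illustrative.

    # Compounding a per-unit odds ratio (PUBMED:24697943: OR = 1.008 for SFCT).
    # Assumption: the OR is per 1 um of subfoveal choroidal thickness.
    or_per_um = 1.008
    for delta_um in (50, 100):
        # Odds ratios from a logistic model multiply on the odds scale.
        print(f"{delta_um} um thicker choroid -> OR ~ {or_per_um ** delta_um:.2f}")
    # 50 um -> OR ~ 1.49; 100 um -> OR ~ 2.22, which is why a per-micron OR
    # that looks negligible can still be meaningful across real anatomical differences.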
Instruction: Is echography an adequate method for assessing the thickness of intra-abdominal fat? Abstracts: abstract_id: PUBMED:9465239 Is echography an adequate method for assessing the thickness of intra-abdominal fat? A comparison with computed tomography Purpose: Methods to measure the intraabdominal fat amount and to distinguish visceral from subcutaneous fat are useful and needed because viscerally obese people are at risk of developing cardiovascular disorders. We investigated US capabilities in measuring intraabdominal fat thickness and distribution, distinguishing visceral from subcutaneous fat. The results were compared with those obtained with CT, the gold standard, and with the waist-hip ratio (W/H). Material And Methods: Thirty obese women admitted to the Internal Medicine I Department, Ospedale S. Raffaele (Milan, Italy) were examined. The patients, aged 18-60 years and with BMI ranging from 29.0 to 47.3, underwent consecutive double-blind measurements with US and CT. The following anthropometric values were compared for every patient: W/H, US visceral/subcutaneous thickness, CT visceral/subcutaneous thickness, CT visceral area, CT subcutaneous adipose area and CT visceral/subcutaneous adipose area. Results: The classification of visceral obesity by W/H (> .85) was confirmed by CT visceral/subcutaneous adipose area (> .491). The W/H correlated significantly with CT visceral/subcutaneous adipose thickness and CT visceral/subcutaneous adipose area (r = .52, p < .004; r = .51, p < .004), but not with US visceral/subcutaneous adipose thickness (r = .42, p < .06). Significant correlations were found between CT visceral/subcutaneous adipose area and both US and CT visceral/subcutaneous adipose thickness (r = .59, p < .006; r = .71, p < .0001). A high correlation was found between US visceral/subcutaneous adipose thickness and CT visceral/subcutaneous adipose thickness (r = .96, p < .0001). Conclusion: Analyzing the results of the different methods, we conclude that US can always be used to study abdominal fat amount and distribution in obese women because this method exhibits significant correlations with CT, the gold standard. The W/H is not sufficient to distinguish visceral from subcutaneous intraabdominal fat. abstract_id: PUBMED:29705051 Intra-abdominal fat measurement by ultrasonography: association with anthropometry and metabolic syndrome in adolescents. Objective: To associate intra-abdominal fat thickness measured by ultrasonography with the factors related to metabolic syndrome and to determine cutoff points of intra-abdominal fat measurement associated with a greater chance of metabolic syndrome in adolescents. Methods: This was a cross-sectional study, with 423 adolescents from public schools. Intra-abdominal fat was measured by ultrasonography. Anthropometric data were collected, and biochemical analyses were performed. Results: Intra-abdominal fat was measured by ultrasonography, showing a statistically significant association with the diagnosis of metabolic syndrome (p=0.037), body mass index (p<0.001), elevated triglyceride levels (p=0.012), decreased plasma HDL levels (p=0.034), and increased systemic blood pressure values (p=0.023). Cutoff values of intra-abdominal fat thickness measurements were calculated by ultrasound to estimate the individuals most likely to develop metabolic syndrome.
In the logistic regression models, the cutoff values that showed the highest association with metabolic syndrome in males were 4.50, 5.35, 5.46, 6.24, and 6.50 cm for the ages of 14, 15, 16, 17, and 18/19 years, respectively. In females, the cutoff values defined for the same age groups were 4.46, 4.55, 4.45, 4.90, and 6.46 cm. In an overall analysis using the ROC curve, without gender and age stratification, the cutoff of 3.67 cm showed good sensitivity, but low specificity. Conclusion: Ultrasonography is a useful method to estimate intra-abdominal adipose tissue in adolescents, which is associated with the main factors related to obesity and metabolic syndrome. abstract_id: PUBMED:28955425 Intra-abdominal fat: Comparison of computed tomography fat segmentation and bioimpedance spectroscopy. Background: Intra-abdominal fat is an important factor in determining the metabolic syndrome/insulin resistance, and thus the risk of diabetes and ischaemic heart disease. Computed tomography (CT) fat segmentation represents a defined method of quantifying intra-abdominal fat, with attendant radiation risks. Bioimpedance spectroscopy may offer a method of assessment without any risks to the patients. A comparison is made of these two methods. Methods: This was a preliminary study of the utility of multifrequency bioimpedance spectroscopy of the mid abdomen as a measure of intra-abdominal fat, by comparison with fat segmentation of an abdominal CT scan in the -30 to -190 HU range. Results: There was a significant (P < 0.01) correlation between intra-abdominal fat and mid-upper arm circumference, as well as the bioimpedance parameter, the R/S ratio. Multivariate analysis showed that these were the only independent variables and allowed the derivation of a formula to estimate intra-abdominal fat: IAF = 0.02 × MAC - 0.757 × R/S + 0.036. Conclusions: Circumabdominal bioimpedance spectroscopy may prove a useful method of assessing intra-abdominal fat, and may be suitable for use in studies to enhance other measures of body composition, such as mid-upper arm circumference. abstract_id: PUBMED:24364134 Intra-abdominal adiposity in preterm infants: an explorative study Objective: The aim of the present study was to compare the total body fat mass and the intra-abdominal adipose tissue between preterm infants assessed at term corrected age and full-term newborns. Methods: An observational explorative study was conducted. Twenty-five preterm and 10 full-term infants were evaluated at 0-1 month of corrected and postnatal age, respectively. The total body fat mass was assessed by means of an air displacement plethysmography system (Pea Pod, COSMED, USA) and the intra-abdominal adipose tissue by means of magnetic resonance imaging (software program SliceOMatic, Version 4.3, Tomovision, Canada). Results: Total body fat mass (g) of preterm and term infants was 633 (± 183) and 538 (± 203), respectively, while intra-abdominal fat mass (g) was 14.2 (± 4.9) and 19.9 (± 11.4). Conclusions: Preterm infants, although exhibiting a total body fat mass higher than full-term infants, do not show increased intra-abdominal adipose tissue. abstract_id: PUBMED:24901453 Abdominal subcutaneous and visceral fat thickness in newborns: correlation with anthropometric and metabolic profile. Objective: To correlate abdominal subcutaneous and visceral fat thickness with anthropometric data and metabolic profile in newborns.
Study Design: A cross-sectional study with 99 newborns was performed at Instituto de Medicina Integral Professor Fernando Figueira, Brazil. Abdominal subcutaneous and visceral fat thicknesses were measured by ultrasound. Glucose, insulin, homeostatic model assessment-insulin resistance (HOMA-IR), total cholesterol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, very-low-density lipoprotein cholesterol and triglycerides were determined. Pearson correlation coefficients were calculated. Result: Abdominal subcutaneous fat thickness was positively correlated with birth weight (r=0.31; P<0.001), height (r=0.27; P<0.001) and abdominal circumference (r=0.26; P<0.001), but not with metabolic profile. Abdominal visceral fat thickness was correlated with abdominal circumference (r=0.23; P=0.01), insulin (r=0.21; P=0.04) and HOMA (r=0.24; P=0.02). The results remained the same among males and females. Conclusion: Abdominal fat distribution in newborns correlates differently with the anthropometric and metabolic profile. Abdominal subcutaneous fat thickness is positively correlated with anthropometric data whereas abdominal visceral fat thickness is correlated with insulin and HOMA-IR. abstract_id: PUBMED:17356264 Ultrasonographic measurement of intra-abdominal fat thickness in HIV-infected patients treated or not with antiretroviral drugs and its correlation to lipid and glycemic profiles. Aims: To compare the intra-abdominal fat thickness measured by ultrasound between HIV-infected patients treated or not with antiretroviral drugs and to correlate these visceral adiposity measurements to other parameters of cardiovascular risk. Methods: In a cross-sectional observational study, 160 HIV-infected patients were recruited and divided into two groups, i.e., 123 antiretroviral (ARV)-treated and 37 ARV-naïve patients. These patients underwent anthropometric measurements, laboratory analyses, ultrasonographic measurements of subcutaneous and intra-abdominal fat thickness, and tetrapolar bioelectrical impedance analysis in order to measure body composition. Results: In the patients treated with highly active antiretroviral therapy (HAART), the intra-abdominal fat pad was significantly thicker than that of the untreated group (69 ± 21 mm, n = 123 vs. 60 ± 18 mm, n = 37; p = 0.03, Student's t test). The intra-abdominal fat thickness correlated significantly with plasma triglyceride, total cholesterol, fasting glucose, glucose measurements 2 h after dextrose load, fasting insulin, HOMA-IR index, systolic and diastolic blood pressures, weight, BMI, WHR and caliper-measured total fat percentage. Conclusion: The results showed that antiretroviral therapy is associated with increased ultrasonographic measurements of visceral adiposity. Our data demonstrated a strong correlation between intra-abdominal fat thickness and independent risk factors of cardiovascular disease: atherogenic lipid profile and insulin resistance. abstract_id: PUBMED:17228033 Intra-abdominal fat burden discriminated in vivo using proton magnetic resonance spectroscopy. Objective: To assess proton magnetic resonance spectroscopy (1H-MRS) as a means to distinguish among mice with disparate intra-abdominal body fat compositions, and to measure changes in intra-abdominal fat burden during weight loss and regain.
Research Methods And Procedures: Intra-abdominal fat burden was analyzed as a ratio of the integrated areas under the curves of the fat and water 1H-MRS signals collected from a region of interest standardized across B6.V-Lep(ob), C57BL/6, and A-ZIP/F mice that exhibited various genotypically related body fat compositions, ranging from obese (B6.V-Lep(ob)) to minimal body fat (A-ZIP/F). 1H-MRS analysis of fat burden was compared with intra-abdominal fat volume and with a single cross-sectional intra-abdominal fat area calculated from segmented magnetic resonance images. Similar measurements were made from obese B6.V-Lep(ob) mice before, during, and after they were induced to lose weight by leptin administration. Results: Relative amounts of intra-abdominal fat analyzed by 1H-MRS differed significantly according to body composition and genotype of the three strains of mice (p < 0.05). Intra-abdominal fat assessed by 1H-MRS correlated with both intra-abdominal fat volume (r = 0.88, p < 0.001) and body weight (r = 0.82, p < 0.001) among, but not within, all three genotypes. During weight loss and regain, there was a significant overall pattern of changes in intra-abdominal fat quantity, which was reflected by 1H-MRS (p = 0.006). Discussion: Results support the use of localized 1H-MRS for assessing differences in intra-abdominal fat. Refinements in 1H-MRS voxel region of interest size and location as well as instrument precision may result in improved correlations within certain body compositions. abstract_id: PUBMED:26063668 Change in Intra-Abdominal Fat Predicts the Risk of Hypertension in Japanese Americans. In Japanese Americans, intra-abdominal fat area measured by computed tomography is positively associated with the prevalence and incidence of hypertension. Evidence in other populations suggests that other fat areas may be protective. We sought to determine whether a change in specific fat depots predicts the development of hypertension. We prospectively followed up 286 subjects (mean age, 49.5 years; 50.4% men) from the Japanese American Community Diabetes Study for 10 years. At baseline, subjects did not have hypertension (defined as blood pressure ≥140/90 mm Hg) and were not taking blood pressure or glucose-lowering medications. Mid-thigh subcutaneous fat area, abdominal subcutaneous fat area, and intra-abdominal fat area were directly measured by computed tomography at baseline and 5 years. Logistic regression was used to estimate odds of incident hypertension over 10 years in relation to a 5-year change in fat area. The relative odds of developing hypertension for a 5-year increase in intra-abdominal fat were 1.74 (95% confidence interval, 1.28-2.37), after adjusting for age, sex, body mass index, baseline intra-abdominal fat, alcohol use, smoking status, and weekly exercise energy expenditure. This relationship remained significant when adjusted for baseline fasting insulin and 2-hour glucose levels or for diabetes mellitus and pre-diabetes mellitus classification. There were no significant associations between baseline and change in thigh or abdominal subcutaneous fat areas and incident hypertension. In conclusion, in this cohort of Japanese Americans, the risk of developing hypertension is related to the accumulation of intra-abdominal fat rather than the accrual of subcutaneous fat in either the thigh or the abdominal areas. abstract_id: PUBMED:19639387 Sonographic assessment of abdominal fat distribution in infancy.
There is growing evidence that not only the total amount of fat, but also the distribution of body fat determines risks for metabolic and cardiovascular disease. Developmental studies on factors influencing body fat distribution have been hampered by a lack of appropriate techniques for measuring intraabdominal fat in early life. Sonography, which is an established method for assessing abdominal fat distribution in adults, has not yet been evaluated in infants. Our aim was to adapt the sonographic measurement of abdominal fat distribution to infants and to study its reliability, within the Generation R study, a population-based prospective cohort study. We included 212 one-year-old and 227 two-year-old Dutch infants in the present analysis. Sixty-two infants underwent replicate measurements to assess reproducibility. We developed a standardized protocol to measure the thickness of (1) subcutaneous and (2) preperitoneal fat in the upper abdomen of infants. To this end we defined infancy-specific measurement areas to quantify fat thickness. Reproducibility of fat measurements was good to excellent, with intraclass correlation coefficients of 0.93-0.97 for intra-observer agreement and of 0.89-0.95 for inter-observer agreement. We observed a pronounced increase in preperitoneal fat thickness in the second year of life while subcutaneous fat thickness increased only slightly, resulting in an altered body fat distribution. Gender did not significantly influence fat distribution in the first two years of life. Our age-specific protocol for the sonographic measurement of central subcutaneous and preperitoneal fat is a reproducible method that can be instrumental for investigating fat distribution in early life. abstract_id: PUBMED:27446599 Intra-abdominal fat. Part III. Neoplastic lesions of the adipose tissue. This article focuses on various cancerous lesions that are found beyond organs in the intra-abdominal fat and can be visualized with ultrasonography. These lesions are divided into five groups. The first group includes primary benign tumors containing adipocytes, such as lipoma, lipoblastoma, hibernoma and other lesions with an adipose tissue component, such as myolipoma, angiomyolipoma, myelolipoma and teratoma. The second group comprises primary malignant adipocyte-containing tumors, including liposarcoma and immature teratoma. The third group contains primary benign tumors without an adipocyte component that are located in intra-abdominal fat. This is a numerous group of lesions represented by cystic and solid tumors. The fourth group encompasses primary malignant tumors without an adipocyte component that are located in intra-abdominal fat. These are rare lesions associated mainly with sarcomas: fibrosarcoma, malignant fibrous histiocytoma, hemangiopericytoma and leiomyosarcoma. An epithelioid tumor at this site is mesothelioma. The last but not least group includes secondary malignant tumors without an adipocyte component located in intra-abdominal fat. This is the most numerous group, with prevailing carcinoma foci. For each of these groups, the authors present ultrasound features of individual lesions and discuss their differential diagnosis. In the vast majority of cases, the material for cytological and histological analysis can be obtained during ultrasound-guided procedures. This is the advantage of this imaging modality. Answer: Yes, echography, also known as ultrasonography, is considered an adequate method for assessing the thickness of intra-abdominal fat.
Studies have shown that ultrasonography exhibits significant correlations with computed tomography (CT), which is the gold standard for measuring abdominal fat amount and distribution (PUBMED:9465239). Ultrasonography has been successfully used to estimate intra-abdominal adipose tissue in adolescents and is associated with the main factors related to obesity and metabolic syndrome (PUBMED:29705051). Additionally, ultrasonography has been used to measure intra-abdominal fat thickness in HIV-infected patients and its correlation to lipid and glycemic profiles, demonstrating its utility in assessing cardiovascular risk factors (PUBMED:17356264). Furthermore, sonographic assessment of abdominal fat distribution has been adapted and found to be a reproducible method for investigating fat distribution in early life (PUBMED:19639387). Therefore, echography is a useful and non-invasive tool for assessing intra-abdominal fat thickness.
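Of the methods summarized in this record, only PUBMED:28955425 reports an explicit estimating equation, IAF = 0.02 × MAC − 0.757 × R/S + 0.036, derived from mid-upper arm circumference and the bioimpedance R/S ratio. The sketch below is a direct transcription of that equation; the variable names and example inputs are illustrative, and the abstract does not restate the units here, so the output should be read qualitatively.

    # Transcription of the intra-abdominal fat formula reported in PUBMED:28955425:
    #     IAF = 0.02 * MAC - 0.757 * (R/S) + 0.036
    def estimate_intra_abdominal_fat(mac, r_over_s):
        """mac: mid-upper arm circumference; r_over_s: bioimpedance R/S ratio."""
        return 0.02 * mac - 0.757 * r_over_s + 0.036

    # Hypothetical inputs: MAC = 30, R/S = 0.5
    # -> 0.02*30 - 0.757*0.5 + 0.036 = 0.2575
    print(estimate_intra_abdominal_fat(30, 0.5))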
Instruction: Is lobular carcinoma in situ as a component of breast carcinoma a risk factor for local failure after breast-conserving therapy? Abstracts: abstract_id: PUBMED:9713268 Extensive and predominant in situ component in breast carcinoma: their influence on treatment results after breast-conserving therapy. Intramammary tumour recurrence is one of the most important problems in breast-conserving therapy. We reviewed a series of 957 patients treated with breast-conserving therapy for primary invasive breast carcinomas between 1 January 1985 and 31 December 1992 at the University of Heidelberg. All histological slides were re-evaluated for risk factors with special emphasis on the extent and subclassification of the in situ tumour and the margin status. Six parameters were identified as significant risk factors for intramammary recurrence in the univariate analysis, including extensive or predominant in situ component (EIC, with at least twice the greatest dimension of the invasive tumour component), histological grade, angioinvasion, lobular tumour type, involved resection margin and lymph node status. The presence of an EIC was statistically correlated with low tumour grade, tumour at the resection margins and in re-excision specimens and with multifocal tumour invasion. Multivariate logistic regression analysis revealed that EIC (relative risk (RR) = 1.9), tumour grade (RR = 1.76), angioinvasion (RR = 1.34), lobular tumour type (RR = 1.65) and young age (≤40 years, RR = 1.39) were independent predictors of local recurrence. When combining these factors in a linear model, the simultaneous presence of at least two of the five risk factors predicted a 5-year risk of intramammary recurrence of 20.9% compared with a risk of only 1-5% when none or one of these risk factors were identifiable. We conclude that the risk of subsequent intramammary recurrence after breast-conserving therapy can be estimated from a scoring system that includes four histological risk factors and the patient's age. abstract_id: PUBMED:16329136 Is lobular carcinoma in situ as a component of breast carcinoma a risk factor for local failure after breast-conserving therapy? Results of a matched pair analysis. Background: The goals of the current study were to compare the clinicopathologic presentations of patients with lobular carcinoma in situ (LCIS) as a component of breast carcinoma who were treated with breast conserving surgery (BCS) and radiation therapy (RT) with those of patients without LCIS as part of their primary tumor and to report rates of local control by overall cohort and specifically in patients with positive margins for LCIS and multifocal LCIS. Methods: Sixty-four patients with Stages 0-II breast carcinoma with LCIS (LCIS-containing tumor group, LCTG) who had received BCS+RT treatment at the University of Michigan between 1989 and 2003 were identified. These patients were matched to 121 patients without LCIS (control group) in a 1:2 ratio. Results: The median follow-up time was 3.9 years (range, 0.3-18.9 years). There were no significant differences between the two groups with regard to clinical, pathologic, or treatment-related variables or in mammographic presentation, with the exception of a higher proportion of the LCTG patients who received adjuvant hormonal therapy (P = 0.01). The rates of local control at 5 years were 100% in the LCTG group and 99.1% in the control group (P = 0.86).
The presence of LCIS at the margins and the size and presence of multifocal LCIS did not alter the rate of local control. Conclusions: The extent of LCIS and its presence at the margins did not reduce the excellent rates of local control after BCS+RT. The data suggest that LCIS in the tumor specimen, even when multifocal, should not affect selection of patients for BCS and whole-breast RT. abstract_id: PUBMED:17115109 Margin status and the risk of local recurrence after breast-conserving treatment of lobular breast cancer. Background: Invasive lobular breast carcinoma is known for its multicentricity and is associated with a higher incidence of incomplete excision after breast-conserving therapy. The aim of the study was to examine the influence of positive surgical margins on the local recurrence rate in patients diagnosed with invasive lobular cancer and treated with breast-conserving therapy. Methods: All 416 women diagnosed with invasive lobular breast cancer and undergoing breast-conserving treatment between 1995 and 2002 were selected from the population-based Eindhoven Cancer Registry. Their medical charts were reviewed and detailed information was collected. Results: The risk of margin involvement was 29% after the first operation and 17% when taking into account the final margin status of the patients undergoing re-excision. During follow-up, 18 patients developed a local recurrence. The 5-year actuarial risk of developing a local recurrence was 3.5% (95% confidence interval 2.5-4.5) and the 8-year risk was 6.4% (95% confidence interval 4.7-8.0). There was no influence of positive surgical margins on the risk of local recurrence, neither in the univariate analysis nor after adjustment for age, tumour size, nodal status and adjuvant systemic treatment. Conclusion: Patients with invasive lobular cancer, treated with breast-conservation, have a low risk of local recurrence, despite their high risk of having a microscopically incomplete excision of the tumour. abstract_id: PUBMED:17020974 Gene expression profiles of primary breast carcinomas from patients at high risk for local recurrence after breast-conserving therapy. Purpose: Several risk factors for local recurrence of breast cancer after breast-conserving therapy (BCT) have been identified. The identification of additional risk factors would be very useful in guiding optimal therapy and also in improving understanding of the mechanisms underlying local recurrence. We used cDNA microarray analysis to identify gene expression profiles associated with local recurrence. Experimental Design: Using 18K cDNA microarrays, gene expression profiles were obtained from 50 patients who underwent BCT. Of these 50 patients, 19 developed a local recurrence; the remaining 31 patients were selected as controls as they were free of local recurrence for at least 11 years after treatment. For 9 of 19 patients, the local recurrence was also available for gene expression profiling. Unsupervised and supervised methods of classification were used to separate patients into groups corresponding to disease outcome and to study the overall gene expression pattern of primary tumors and their recurrences. Results: Hierarchical clustering of patients did not show any grouping reflecting local recurrence status. Supervised analysis revealed no significant set of genes that was able to distinguish recurring tumors from nonrecurring tumors.
Paired-data analysis of primary tumors and local recurrences showed a remarkable similarity in gene expression profile between primary tumors and their recurrences. Conclusions: No significant differences in gene expression between primary breast cancer tumors in patients with or without local recurrence after BCT were identified. Furthermore, analyses of primary tumors and local recurrences show a preservation of the overall gene expression pattern in the local recurrence, even after radiotherapy. abstract_id: PUBMED:15748463 Predictors of local recurrence after breast-conservation therapy. Breast-conserving therapy (BCT) is a proven local treatment option for select patients with early-stage breast cancer. This paper reviews pathologic, clinical, and treatment-related features that have been identified as known or potential predictors for ipsilateral breast tumor recurrence in patients treated with BCT. Pathologic risk factors such as the final pathologic margin status of the excised specimen after BCT, the extent of margin involvement, the interaction of margin status with other adverse features, the role of biomarkers, and the presence of an extensive intraductal component or lobular carcinoma in situ all impact the likelihood of ipsilateral breast tumor recurrence. Predictors of positive repeat excision findings after conservative surgery include young age, presence of an extensive intraductal component, and close or positive margins in prior excision. Finally, treatment-related factors predicting ipsilateral breast tumor recurrence include extent of breast radiation therapy, use of a boost to the lumpectomy cavity, use of tamoxifen or chemotherapeutic agents, and timing of systemic therapy with irradiation. The ability to predict for an increased risk of ipsilateral breast tumor recurrence enhances the ability to select optimal local treatment strategies for women considering BCT. abstract_id: PUBMED:8080378 Margin status and local recurrence after breast-conserving surgery. Objective: To examine the effect of microscopic tumor at the margins on local recurrence after breast-conserving surgery for invasive carcinoma. Design: Retrospective review of patients treated with surgical resection followed by radiation therapy. Setting: A university-based radiation department and a community-based cancer referral center. Patients: A consecutive series of 272 women treated between 1982 and 1990. Main Outcome Measure: Local recurrence according to the histopathologic status of excised margins and the total dose of radiation. Results: During a mean follow-up period of 48 months, the overall rate of local recurrence was 6.3%. Local recurrence was more frequent (P = .0001) in patients with histologically positive margins (18.2%) than in those with unknown margins (7.1%) or negative margins (3.7%). In the 44 patients with positive margins, the local recurrence rate was 8.3% after radiation doses of 66 Gy or more compared with 21.9% following lower doses. Conclusions: Microscopic involvement of resection margins increases the risk of local recurrence following breast-conserving surgery for invasive carcinoma. Therefore, every effort should be made to achieve negative margins intraoperatively. abstract_id: PUBMED:10699897 The relation between the presence and extent of lobular carcinoma in situ and the risk of local recurrence for patients with infiltrating carcinoma of the breast treated with conservative surgery and radiation therapy. 
Background: When found in an otherwise benign biopsy, lobular carcinoma in situ (LCIS) has been associated with an increased risk of development of a subsequent invasive breast carcinoma. However, the association between LCIS and the risk of subsequent local recurrence in patients with infiltrating carcinoma treated with conservative surgery and radiation therapy has received relatively little attention. Methods: Between 1968 and 1986, 1625 patients with clinical Stage I-II invasive breast carcinoma were treated at the Joint Center for Radiation Therapy at Harvard Medical School with breast-conserving surgery (CS) and radiation therapy (RT) to a total dose to the primary site of ≥60 grays. Analysis was limited to 1181 patients with infiltrating ductal carcinoma, infiltrating lobular carcinoma, or infiltrating carcinoma with mixed ductal and lobular features who, on review of their histologic slides, had sufficient normal tissue adjacent to the tumor to evaluate for the presence of LCIS and also had a minimum potential follow-up time of 8 years. The median follow-up time was 161 months. Results: One hundred thirty-seven patients (12%) had LCIS either within the tumor or in the macroscopically normal adjacent tissue. The 8-year crude risk of recurrence was not significantly increased for patients with LCIS associated with invasive ductal, invasive lobular, or mixed ductal and lobular carcinoma. Among the 119 patients with associated LCIS adjacent to the tumor, the 8-year rate of local recurrence was 13%, compared with 12% for the 1062 patients without associated LCIS. For the 70 patients with moderate or marked LCIS adjacent to the tumor, the 8-year rate of local recurrence was 13%. The extent of LCIS did not affect the risk of recurrence. The risks of contralateral disease and of distant failure were similarly not affected by the presence or extent of LCIS. Conclusions: Breast-conserving therapy involving limited surgery and radiation therapy is an appropriate method of treating patients with invasive breast carcinoma with or without associated LCIS. Neither the presence nor the extent of LCIS should influence management decisions regarding patients with invasive breast carcinoma. abstract_id: PUBMED:33978889 Breast-Conserving Therapy in Patients with cT3 Breast Cancer with Good Response to Neoadjuvant Systemic Therapy Results in Excellent Local Control: A Comprehensive Cancer Center Experience. Background: Many cT3 breast cancer patients are treated with mastectomy, regardless of response to neoadjuvant systemic therapy (NST). We evaluated local control of cT3 patients undergoing breast-conserving therapy (BCT) based on magnetic resonance imaging (MRI) evaluation post-NST. In addition, we analyzed predictive characteristics for positive margins after breast-conserving surgery (BCS). Methods: All cT3 breast cancer patients who underwent BCS after NST between 2002 and 2015 at the Netherlands Cancer Institute were included. Local recurrence-free interval (LRFI) was estimated using the Kaplan-Meier method, and predictors for positive margins were analyzed using univariable analysis and multivariable logistic regression. Results: Of 114 patients undergoing BCS post-NST, 75 had negative margins, 16 had focally positive margins, and 23 had positive margins. Of those with (focally) positive margins, 12 underwent radiotherapy, 6 underwent re-excision, and 21 underwent mastectomy.
Finally, 93/114 patients were treated with BCT (82%), with an LRFI of 95.9% (95% confidence interval [CI] 91.5-100%) after a median follow-up of 7 years. Predictors for positive margins in univariable analysis were hormone receptor-positive/human epidermal growth factor receptor 2-negative (HR+/HER2-) subtype, lobular carcinoma, and non-mass enhancement (NME) on pre-NST MRI. MRI response was not correlated with positive margins. In multivariable regression, the odds of positive margins were decreased in patients with HER2-positive (HER2+; odds ratio [OR] 0.27, 95% CI 0.10-0.73; p = 0.01) and TN tumors (OR 0.17, 95% CI 0.03-0.82; p = 0.028). A trend toward positive margins was observed in patients with NME (OR 2.38, 95% CI 0.98-5.77; p = 0.055). Conclusion: BCT could be performed in 82% of cT3 patients in whom BCT appeared feasible on post-NST MRI. Local control in these patients was excellent. In those patients with HR+/HER2- tumors, NME on MRI, or invasive lobular carcinoma, the risk of positive margins should be considered preoperatively. abstract_id: PUBMED:24761867 Outcomes of triple-negative versus non-triple-negative breast cancers managed with breast-conserving therapy. Background: Triple-negative breast cancer is associated with aggressive behavior and a high risk of local and regional failure. Aggressive surgical intervention is considered suitable. This makes the role of breast-conserving therapy (BCT) debatable in these patients. The objective of this study was to compare the outcome of BCT for triple-negative versus non-triple-negative breast cancer. Materials And Methods: Medical records of patients who underwent breast-conserving therapy from 1999 to 2009 at Shaukat Khanum Cancer Hospital and had complete receptor status information were extracted. Patients were divided into triple-negative breast cancer (TNBC) and non-TNBC groups. Patient characteristics, medical treatment modalities and adverse events were compared. Expected five-year locoregional recurrence-free, disease-free and overall survival was calculated. The Cox proportional hazard model was used to identify independent predictors of outcome. Results: A total of 194 patients with TNBC and 443 with non-TNBC were compared. A significant difference was present for age at presentation (p<0.0001), family history (p=0.005), grade (p<0.0001) and use of hormonal therapy (p<0.0001). The numbers of locoregional failures, distant failures and deaths were not significantly different. No significant difference was present in 5-year locoregional recurrence-free (96% vs 92%, p=0.3), disease-free (75% vs 74%, p=0.7) or overall survival (78% vs 83%, p=0.2). On multivariate analysis, tumor size, nodal involvement and hormonal treatment were independent predictors of negative events. Conclusions: Breast-conserving therapy has comparable outcomes for triple-negative and non-triple-negative breast cancers. abstract_id: PUBMED:14718796 Examination of breast conserving therapy in lobular carcinoma. Background: Experience with conserving surgery for lobular carcinoma has grown as more breast-conserving surgeries have been performed. We examined the results of breast-conserving therapy in lobular carcinoma. Patients And Methods: We examined the postoperative positive margin rate, presence or absence of additional surgery, presence or absence of local or systemic recurrence and the role of breast helical CT in 25 cases of breast-conserving surgery performed at this department from 1991 through June 2003.
Results: Among the 303 cases of all breast-conserving surgeries, there were 63 cases with positive margins (20.8%), whereas 15 of the 25 lobular carcinoma cases (60.0%) had positive margins. In 8 of the 15 positive-margin cases the technique was changed to mastectomy. One case of recurrence in the breast has been observed thus far. Although the overall positive margin rate and the positive margin rate in infiltrating carcinoma cases tended to decline after the introduction of breast helical CT, the rates remained high. Conclusions: Since the positive margin rate was significantly high at the time of breast-conserving surgery for lobular carcinoma, careful selection of technique based on imaging studies such as breast helical CT and MRI, along with careful follow-up, is considered necessary. Answer: Lobular carcinoma in situ (LCIS) as a component of breast carcinoma does not appear to be a significant risk factor for local failure after breast-conserving therapy (BCT). According to the results of a matched pair analysis, the presence of LCIS, even when multifocal, did not reduce the excellent rates of local control after BCT with radiation therapy (RT). The study found that the rates of local control at 5 years were 100% in the group with LCIS and 99.1% in the control group without LCIS, indicating that LCIS in the tumor specimen should not affect the selection of patients for BCT and whole-breast RT (PUBMED:16329136). Furthermore, another study examining the influence of positive surgical margins on the local recurrence rate in patients diagnosed with invasive lobular cancer and treated with breast-conserving therapy found that patients with invasive lobular cancer have a low risk of local recurrence, despite their high risk of having a microscopically incomplete excision of the tumor. The study concluded that there was no influence of positive surgical margins on the risk of local recurrence (PUBMED:17115109). Additionally, a study that evaluated the relation between the presence and extent of LCIS and the risk of local recurrence for patients with infiltrating carcinoma of the breast treated with conservative surgery and RT found that neither the presence nor the extent of LCIS should influence management decisions regarding patients with invasive breast carcinoma. The study reported that the 8-year crude risk of recurrence was not significantly increased for patients with LCIS associated with invasive ductal, invasive lobular, or mixed ductal and lobular carcinoma (PUBMED:10699897). In summary, the current evidence suggests that LCIS as a component of breast carcinoma is not a risk factor for local failure after breast-conserving therapy.
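The Heidelberg series (PUBMED:9713268) in this record is the one result that reduces to an explicit decision rule: five independent risk factors (EIC, tumour grade, angioinvasion, lobular type, age ≤40), with at least two simultaneous factors predicting a 20.9% 5-year intramammary recurrence risk versus 1-5% otherwise. A minimal sketch of that rule follows; encoding tumour grade as a single boolean flag is a simplification, since the abstract reports grade as a factor without quoting a cutoff.

    # Sketch of the linear risk model in PUBMED:9713268: count the five factors;
    # >= 2 of them predicted a 20.9% 5-year intramammary recurrence risk.
    def five_year_recurrence_band(eic, high_grade, angioinvasion, lobular_type, age_le_40):
        n_factors = sum([eic, high_grade, angioinvasion, lobular_type, age_le_40])
        return "~20.9% risk band" if n_factors >= 2 else "~1-5% risk band"

    # Example: EIC plus lobular histology already places a patient in the high band.
    print(five_year_recurrence_band(True, False, False, True, False))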
Instruction: Does laboratory cue reactivity correlate with real-world craving and smoking responses to cues? Abstracts: abstract_id: PUBMED:26277429 Does laboratory cue reactivity correlate with real-world craving and smoking responses to cues? Background: Laboratory cue reactivity (CR) assessments are used to assess smokers' responses to cues. Likewise, EMA recording is used to characterize real-world response to cues. Understanding the relationship between CR and EMA responses addresses the ecological validity of CR. Methods: In 190 daily smokers not currently quitting, craving and smoking responses to cues were assessed in laboratory CR and by real-world EMA recording. Separate CR sessions involved 5 smoking-relevant cues (smoking, alcohol, negative affect, positive affect, smoking prohibitions), and a neutral cue. Subjects used EMA to monitor smoking situations for 3 weeks, completing parallel situational assessments (presence of others smoking, alcohol consumption, negative affect, positive affect, and smoking prohibitions, plus current craving) in smoking and non-smoking occasions (averaging 70 and 60 occasions each). Analyses correlated CR craving and smoking cue responses with EMA-derived correlations of craving and smoking with similar cues. Results: Although some cues did not show main effects on average craving or smoking, a wide range of individual differences in response to cues was apparent in both CR and EMA data, providing the necessary context to assess their relationship. Laboratory CR measures of cue response were not correlated with real-world cue responses assessed by EMA. The average correlation was 0.03; none exceeded 0.32. One of 40 correlations examined was significantly greater than 0. Conclusions: Laboratory CR measures do not correlate with EMA-assessed craving or smoking in response to cues, suggesting that CR measures are not accurate predictors of how smokers react to relevant stimuli in the real world. abstract_id: PUBMED:25028339 Nicotine dependence, "background" and cue-induced craving and smoking in the laboratory. Background: Nicotine dependence has been associated with higher "background" craving and smoking, independent of situational cues. Due in part to conceptual and methodological differences across past studies, the relationship between dependence and cue-reactivity (CR; e.g., cue-induced craving and smoking) remains unclear. Methods: 207 daily smokers completed six pictorial CR sessions (smoking, negative affect, positive affect, alcohol, smoking prohibitions, and neutral). Individuals rated craving before (background craving) and after cues, and could smoke following cue exposure. Session videos were coded to assess smoking. Participants completed four nicotine dependence measures. Regression models assessed the relationship of dependence to cue-independent (i.e., pre-cue) and cue-specific (i.e., pre-post cue change for each cue, relative to neutral) craving and smoking (likelihood of smoking, latency to smoke, puff count). Results: Dependence was associated with background craving and smoking, but did not predict change in craving across the entire sample for any cue. Among alcohol drinkers, dependence was associated with greater increases in craving following the alcohol cue. Only one dependence measure (Wisconsin Inventory of Smoking Dependence Motives) was consistently associated with smoking reactivity (higher likelihood of smoking, shorter latency to smoke, greater puff count) in response to cues.
Conclusion: While related to cue-independent background craving and smoking, dependence is not strongly associated with laboratory cue-induced craving under conditions of minimal deprivation. Dependence measures that incorporate situational influences on smoking correlate with greater cue-provoked smoking. This may suggest independent roles for CR and traditional dependence as determinants of smoking, and highlights the importance of assessing behavioral CR outcomes. abstract_id: PUBMED:27115733 Introspective responses to cues and motivation to reduce cigarette smoking influence state and behavioral responses to cue exposure. In the current study, we aimed to extend smoking cue-reactivity research by evaluating delay discounting as an outcome of cigarette cue exposure. We also separated introspection in response to cues (e.g., self-reporting craving and affect) from cue exposure alone, to determine if introspection changes behavioral responses to cigarette cues. Finally, we included measures of quit motivation and resistance to smoking to assess motivational influences on cue exposure. Smokers were invited to participate in an online cue-reactivity study. Participants were randomly assigned to view smoking images or neutral images, and were randomized to respond to cues with either craving and affect questions (e.g., introspection) or filler questions. Following cue exposure, participants completed a delay discounting task and then reported state affect, craving, and resistance to smoking, as well as an assessment of quit motivation. We found that after controlling for trait impulsivity, participants who introspected on craving and affect showed higher delay discounting, irrespective of cue type, but we found no effect of response condition on subsequent craving (e.g., craving reactivity). We also found that motivation to quit interacted with experimental conditions to predict state craving and state resistance to smoking. Although asking about craving during cue exposure did not increase later craving, it resulted in greater delaying of discounted rewards. Overall, our findings suggest the need to further assess the implications of introspection and motivation on behavioral outcomes of cue exposure. abstract_id: PUBMED:24309295 The magnitude and reliability of cue-specific craving in nondependent smokers. Background: Cue-reactivity is a robust phenomenon in regular cigarette smokers (Carter and Tiffany, 1999), but it has not been widely investigated in nondependent smokers. Further, most research on cue-specific craving assesses response to cues in a single experimental session. As such, investigations of cue-specific craving have primarily measured state-like but not trait-like responses to smoking stimuli. Methods: This study measured general and cue-specific craving in nondependent smokers and assessed the within-session and cross-session reliability of these two facets of craving. Participants (n=154) attended five laboratory sessions over the course of three months and completed multiple cue-reactivity trials (using smoking and neutral in vivo and photographic stimuli) during each study visit. Results: Results indicated that smoking cues elicited significantly stronger craving than neutral cues across study sessions, and that craving ratings following smoking cues decreased across subsequent sessions. Within-session and cross-session reliability was extremely high. 
Conclusions: Overall, findings indicate that nondependent smokers experience reactivity to smoking cues, and that this response is quite reliable within and across sessions. Further, the magnitude of cue-specific craving was comparable to what has been observed in heavy, dependent smokers. abstract_id: PUBMED:30959242 Taste manipulation during a food cue-reactivity task: Effects on cue-elicited food craving and subsequent food intake among individuals with overweight and obesity. Food cue-reactivity tasks are used to induce and evaluate food cravings. Extant research has implicated the role of tasting foods in heightening cue-elicited food craving. The present study was the first to evaluate a taste manipulation during a food cue-reactivity task to optimize cue-elicited craving and predict food intake. Participants with overweight/obesity (N = 35; M age = 33.46 years [SD = 13.27]; M BMI = 32.91 kg/m2 [SD = 5.34]) engaged in one laboratory session and were randomized to a 'No Taste' or 'Taste' condition. All participants reported baseline food craving and observed two types of high-calorie food cues during a cue-reactivity task: photographic and real foods. The Taste group tasted real food cues and the No Taste group did not. Cue-elicited craving was assessed after the presentation of each food cue. Calorie intake of palatable foods was subsequently measured during a bogus taste test. Results indicated that cue-elicited craving to high-calorie foods was greater for the No Taste relative to the Taste group and that calorie intake was greater for the Taste relative to the No Taste group; both effects were nonsignificant, but of medium size. Cue-elicited craving was significantly greater following exposure to high-calorie real food cues compared to photographic food cues. Results provide initial evidence that presenting high-calorie real (vs. photographic) food cues and forgoing taste manipulation during a food cue-reactivity task may optimize cue-elicited craving, and that taste manipulation could increase subsequent food intake. Future research should be conducted to replicate findings in larger samples with greater power to detect significant effects. abstract_id: PUBMED:34469584 Combined smoking and alcohol cues: Effects on craving, drug-seeking, and consumption. Background: Alcohol and cigarettes are commonly used together, but little is known about their joint motivational impact. Cue reactivity studies have customarily examined alcohol and smoking cues in isolation, despite the potential for cues to elicit stronger motivational responses when combined. This study used a validated cue reactivity procedure (Choice Behavior Under Cued Conditions) to systematically disentangle the separate and joint effects of alcohol and cigarette cues on substance use motivation. Methods: Participants were 110 adults (M age = 34.0, SD = 10.8) who consumed both cigarettes and alcohol. Participants completed 40 cue reactivity trials with four in vivo cue types: water, alcohol, cigarette, and combined cigarette and alcohol. Participants rated their craving prior to receiving opportunities to spend real money to gain access to the cues. Spending larger amounts of money increased the probability that the substance(s) would be available for consumption. When granted access, participants took one cigarette puff and/or sip of the beverage. A multimethod approach assessed three key motivational indices: craving, drug-seeking (spending, latency to access the cue), and consumption (puff duration, alcohol consumed).
Effects of cue type and rates of substance use (cigarettes per day, drinks per day, relative frequency of co-use) were assessed using hierarchical linear models. Results: Both alcohol and smoking cues enhanced cue-specific craving but not craving for the alternative substance. In a novel finding, combined cues elicited higher craving and greater spending than single-drug cues. All drug cues elicited greater spending than water cues, and spending was moderated by the relative frequency of co-use. Conclusions: We found that combined alcohol and cigarette cues provoke more powerful craving and drug-seeking responses and, therefore, may be more motivationally potent among individuals who use multiple substances. abstract_id: PUBMED:33263158 Smoking-related cue reactivity in a virtual reality setting: association between craving and EEG measures. Background: Cue-reactivity is the array of responses that smokers exhibit when exposed to conditioned and contextual stimuli previously associated with substance use. The difficulty of experimentally recreating the complexity of smokers' spatial experience and context requires more ecological models. Virtual reality (VR) creates a state of immersion close to reality allowing controlled assessments of behavioral responses. To date, no studies have investigated brain activation associated with smoking cue-reactivity in VR using electroencephalography (EEG). Aims: To investigate whether a VR cue-reactivity paradigm (a) may increase smoking craving, (b) is feasible with EEG recording, and (c) induces craving levels associated with EEG desynchronization. Methods: Smokers (N = 20) and non-smokers (N = 20) were exposed to neutral and smoking-related VR scenarios, without and with smoking conditioned stimuli, respectively. EEG was recorded from occipital and parietal leads throughout the sessions to assess alpha band desynchronization. Smoking and food craving and presence visual analogue scales (VAS) were assessed during the session. Results: Being a smoker, but not a non-smoker, significantly influenced smoking craving VAS scores induced by the smoking-cue VR scenario but not by the neutral VR scenario. No significant food craving changes were observed during the VR sessions. The new finding was that EEG alpha band power in posterior leads was significantly increased by the smoking context scenario only in smokers, and that the degree of smoking (i.e., heavy vs. light) was significantly associated with this neurophysiological measure. Conclusions: This study demonstrated, for the first time, the feasibility of EEG recording in a VR setting, suggesting that EEG desynchronization may be a neurophysiological marker of smoking cue-reactivity. abstract_id: PUBMED:19395178 Laboratory-based, cue-elicited craving and cue reactivity as predictors of naturally occurring smoking behavior. Cigarette craving, one hallmark sign of nicotine dependence, is often measured in laboratory settings using cue reactivity methods. How lab measures of cue reactivity relate to real world smoking behavior is unclear, particularly among non-treatment seeking smokers. Within a larger study of hormonal effects on cue reactivity (N=78), we examined the predictive relationship of cue reactivity to smoking, each measured in several ways. Results indicated that cue-evoked craving in response to stressful imagery, and to a lesser extent, in vivo smoking cues, significantly predicted smoking behavior during the week following testing. However, this predictive relationship was absent upon controlling for reactivity to neutral cues.
Nicotine dependence may moderate the relationship between cue reactivity and actual smoking, such that this predictive relationship is less robust among highly dependent smokers than among smokers low in nicotine dependence. The question of whether cue-elicited craving predicts smoking among smokers not in treatment is best answered with a qualified yes, depending on how craving is manipulated and measured. Our findings highlight important methodological and theoretical considerations for cue reactivity research. abstract_id: PUBMED:37437451 Cue-reactivity to distal cues in individuals at risk for gaming disorder. Background: Gaming disorder (GD) is a disorder due to addictive behaviors (ICD-11). Cue-reactivity and craving are relevant mechanisms in the development and maintenance of addictive behaviors. When confronted with cues showing in-game content (proximal cues), individuals with higher symptom severity show increased cue-reactivity. Based on conditioning and addiction theories on incentive sensitization, cue-reactivity responses may generalize to more distal cues, e.g. when individuals at risk of developing a GD are confronted with a starting page of an online game. In cue-reactivity paradigms so far, only proximal gaming cues have been used. Methods: We investigated the effect of distal gaming cues compared to gaming-unrelated control cues on cue-reactivity and craving in 88 individuals with non-problematic use of online games (nPGU) and 69 individuals at risk for GD (rGD). The distal cues showed the use of an electronic device (e.g., desktop PC or smartphone) whose screen showed starting pages of either games (target cues), shopping- or pornography sites (control cues) from a first-person perspective. Findings: We found significantly higher urge and arousal ratings as well as longer viewing times for gaming-related compared to gaming-unrelated control cues in rGD compared to nPGU. Valence ratings did not differ between groups. Interpretation: The results demonstrate that even distal gaming-specific cues already lead to cue-reactivity and craving in rGD. This finding indicates that based on conditioning processes, cue-reactivity and craving develop during the course of GD and generalize to cues that are only moderately related to the specific gaming activity. abstract_id: PUBMED:31135053 Effect of deactivation of activity patterns related to smoking cue reactivity on nicotine addiction. With approximately 75% of smokers resuming cigarette smoking after using the Gold Standard Programme for smoking cessation, investigation into novel therapeutic approaches is warranted. Typically, smoking cue reactivity is crucial for smoking behaviour. Here we developed a novel closed-loop EEG-based neurofeedback protocol targeting smoking cue-reactivity patterns and evaluated its therapeutic efficacy for nicotine addiction. During an evoked smoking cue reactivity task, participants' brain activity patterns corresponding to smoking cues were obtained with multivariate pattern analysis of data from all EEG channels; then, during neurofeedback, the EEG activity patterns of smoking cue reactivity were continuously deactivated with adaptive closed-loop training. In a double-blind, placebo-controlled, randomized clinical trial, 60 nicotine-dependent participants were assigned to receive two neurofeedback training sessions (∼1 h/session) either from their own brain (n = 30, real-feedback group) or from the brain activity pattern of a matched participant (n = 30, yoked-feedback group).
Cigarette craving and craving-related P300 were assessed at pre-neurofeedback and post-neurofeedback. The number of cigarettes smoked per day was assessed at baseline, 1 week, 1 month, and 4 months following the final neurofeedback visit. In the real-feedback group, participants successfully deactivated EEG activity patterns of smoking cue reactivity. The real-feedback group showed a significant decrease in cigarette craving and in craving-related P300 amplitudes compared with the yoked-feedback group. The rates of cigarettes smoked per day at the 1-week, 1-month, and 4-month follow-ups decreased by 30.6%, 38.2%, and 27.4% relative to baseline in the real-feedback group, compared to decreases of 14.0%, 13.7%, and 5.9% in the yoked-feedback group. The neurofeedback effects on craving change and smoking amount at the 4-month follow-up were further predicted by neural markers at pre-neurofeedback. This novel neurofeedback training approach produced significant short-term and long-term effects on cigarette craving and smoking behaviour, suggesting the neurofeedback protocol described herein is a promising brain-based tool for treating addiction. Answer: Laboratory cue reactivity (CR) does not appear to correlate well with real-world craving and smoking responses to cues. A study involving 190 daily smokers not currently quitting found that laboratory CR measures of cue response were not correlated with real-world cue responses assessed by ecological momentary assessment (EMA). The average correlation was 0.03, and none exceeded 0.32, with only one of 40 correlations examined being significantly greater than 0 (PUBMED:26277429). This suggests that CR measures are not accurate predictors of how smokers react to relevant stimuli in the real world.
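The analytic idea behind PUBMED:26277429 can be sketched as follows: each smoker contributes a laboratory cue-reactivity score (cue-minus-neutral craving) and an EMA-derived score (the within-person correlation between cue presence and momentary craving), and the two indices are then correlated across smokers. A minimal simulation (Python with numpy/scipy assumed; all values and variable names are hypothetical):

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_smokers = 190

# Laboratory index: craving after the smoking cue minus craving after the neutral cue
lab_cue = rng.normal(5.0, 2.0, n_smokers)
lab_neutral = rng.normal(4.0, 2.0, n_smokers)
lab_cr = lab_cue - lab_neutral

# EMA index: within-person correlation of cue presence (0/1) with momentary craving
# across ~130 recorded occasions; simulated here with no true cue effect
ema_cr = np.empty(n_smokers)
for i in range(n_smokers):
    cue_present = rng.integers(0, 2, 130)
    craving = rng.normal(4.0, 2.0, 130)
    ema_cr[i] = pearsonr(cue_present, craving)[0]

# Across-subject agreement between the laboratory and field indices
r, p = pearsonr(lab_cr, ema_cr)
print(f"lab-EMA correlation: r = {r:.2f}, p = {p:.3f}")

Under the null pattern simulated here, the across-subject correlation hovers near zero, which is the shape of the published result (average r = 0.03).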
Instruction: Does Medicaid pay more to a program of all-inclusive care for the elderly (PACE) than for fee-for-service long-term care? Abstracts: abstract_id: PUBMED:30575846 Health Care Utilization and Cost Outcomes of a Comprehensive Dementia Care Program for Medicare Beneficiaries. Importance: An estimated 4 to 5 million Americans have Alzheimer disease or another dementia. Objective: To determine the health care utilization and cost outcomes of a comprehensive dementia care program for Medicare fee-for-service beneficiaries. Design, Setting, And Participants: In this case-control study, we used a quasiexperimental design to compare health care utilization and costs for 1083 Medicare fee-for-service beneficiaries enrolled in the University of California Los Angeles Health System Alzheimer and Dementia Care program between July 1, 2012, and December 31, 2015, with those of 2166 similar patients with dementia not participating in the program. Patients in the comparison cohort were selected using the zip code of residence as a sampling frame and matched with propensity scores, which included demographic characteristics, comorbidities, and prior-year health care utilization. We used Medicare claims data to compare utilization and cost outcomes for the 2 groups. Interventions: Patients in the dementia care program were comanaged by nurse practitioners and physicians, and the program consisted of structured needs assessments of patients and their caregivers, creation and implementation of individualized dementia care plans with input from primary care physicians, monitoring and revising care plans, referral to community organizations for dementia-related services and support, and access to a clinician for assistance and advice 24 hours per day, 7 days per week. Main Outcomes And Measures: Admissions to long-term care facilities; average difference-in-differences per quarter over the 3-year intervention period for all-cause hospitalization, emergency department visits, 30-day hospital readmissions, and total Medicare Parts A and B costs of care. Program costs were included in the cost estimates. Results: Program participants (n = 382 men, n = 701 women; mean [SD] age, 82.10 [7.90] years; age range 54-101 years) were less likely to be admitted to a long-term care facility (hazard ratio, 0.60; 95% CI, 0.59-0.61) than those not participating in the dementia care program (n = 759 men, n = 1407 women; mean [SD] age, 82.42 [8.50] years; age range, 34-103 years). There were no differences between groups in terms of hospitalizations, emergency department visits, or 30-day readmissions. The total cost of care to Medicare, excluding program costs, was $601 less per patient per quarter (95% CI, -$1198 to -$5). After accounting for the estimated program costs of $317 per patient per quarter, the program was cost neutral for Medicare, with an estimated net cost of -$284 (95% CI, -$881 to $312) per program participant per quarter. Conclusions And Relevance: Comprehensive dementia care may reduce the number of admissions to long-term care facilities, and depending on program costs, may be cost neutral or cost saving. Wider implementation of such programs may help people with dementia stay in their communities. abstract_id: PUBMED:34236603 Long-Term Effects of the Comprehensive Primary Care Model on Health Care Spending and Utilization. 
Background: The Centers for Medicare & Medicaid Services launched the 4-year Comprehensive Primary Care Initiative (CPC Classic) in 2012 and its 5-year successor, CPC Plus (CPC+), in 2017 to test whether improving primary care delivery in five areas, and providing practices with financial and technical support, reduced spending and improved quality. This is the first study to examine long-term effects of a primary care practice transformation model. Objective: To test whether long-term primary care transformation (the 4-year CPC Classic and the first 2 years of its successor, CPC+) reduced hospitalizations, emergency department (ED) visits, and spending over 6 years. Design: We used a difference-in-differences analysis to compare outcomes for beneficiaries attributed to CPC Classic practices with outcomes for beneficiaries attributed to comparison practices during the year before and 6 years after CPC Classic began. Participants: The study involved 565,674 Medicare fee-for-service beneficiaries attributed to 502 CPC Classic practices and 1,165,284 beneficiaries attributed to 908 comparison practices, with similar beneficiary-, practice-, and market-level characteristics as the CPC Classic practices. Interventions: The interventions required primary care practices to improve 5 care areas and supported their transformation with substantially enhanced payment, data feedback, and learning support and, for CPC+, added health information technology support. Main Measures: Hospitalizations (all-cause), ED visits (outpatient and total), and Medicare Part A and B expenditures. Key Results: Relative to comparison practices, beneficiaries in intervention practices experienced slower growth in hospitalizations: 3.1% less in year 5 and 3.5% less in year 6 (P < 0.01), and roughly 2% (P < 0.1) slower growth each year in total ED visits during years 3 through 6. Medicare Part A and B expenditures (excluding care management fees) did not change appreciably. Conclusions: The emergence of favorable effects on hospitalizations in years 5 and 6 suggests primary care transformation takes time to translate into lower hospitalizations. Longer tests of models are needed. abstract_id: PUBMED:22046633 Medicare program; Medicare Shared Savings Program: Accountable Care Organizations. Final rule. This final rule implements section 3022 of the Affordable Care Act which contains provisions relating to Medicare payments to providers of services and suppliers participating in Accountable Care Organizations (ACOs) under the Medicare Shared Savings Program. Under these provisions, providers of services and suppliers can continue to receive traditional Medicare fee-for-service (FFS) payments under Parts A and B, and be eligible for additional payments if they meet specified quality and savings requirements. abstract_id: PUBMED:24800156 Impact of continued biased disenrollment from the Medicare Advantage Program to fee-for-service. Background: Medicare managed care enrollees who disenroll to fee-for-service (FFS) historically have worse health and higher costs than continuing enrollees and beneficiaries remaining in FFS. Objective: To examine disenrollment patterns by analyzing Medicare payments following disenrollment from Medicare Advantage (MA) to FFS in 2007. Recent growth in the MA program, introduction of limits on timing of enrollment/disenrollment, and initiation of prescription drug benefits may have substantially changed the dynamics of disenrollment.
Study Design: The study was based on MA enrollees who disenrolled to FFS in 2007 (N=248,779) and a sample of "FFS stayers" residing in the same counties as the disenrollees (N=551,616). Actual Medicare Part A and Part B payments (excluding hospice payments) in the six months following disenrollment were compared with predicted payments based on claims experience of local FFS stayers, adjusted for CMS-Hierarchical Condition Category (CMS-HCC) risk scores. Results: Disenrollees incurred $1,021 per month in Medicare payments, compared with $798 in predicted payments (ratio of actual/predicted=1.28, p < 0.001). Differences between actual and predicted payments were smaller for disenrollees of Preferred Provider Organizations and Private Fee-for-Service plans than of Health Maintenance Organizations. Analysis of 10 individual MA plans revealed variation in the degree of selective disenrollment. Conclusions: Despite substantial changes in policies and market characteristics of the Medicare managed care program, disenrollment to FFS continues to occur disproportionately among high-cost beneficiaries, raising concerns about care experiences among sicker enrollees and increased costs to Medicare. abstract_id: PUBMED:17171853 Medicare and Medicaid programs; programs of all-inclusive care for the elderly (PACE); program revisions. Final rule. This rule finalizes the interim final rule with comment period published in the Federal Register November 24, 1999 (64 FR 66234) and the interim final rule with comment period published in the Federal Register on October 1, 2002 (67 FR 61496). The November 1999 interim final rule implemented sections 4801 through 4803 of the Balanced Budget Act of 1997 (Pub. L. 105-33) and established requirements for Programs of All-inclusive Care for the Elderly (PACE) under the Medicare and Medicaid programs. The interim final rule with comment period published on October 1, 2002 (67 FR 61496) implemented section 903 of the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA) (Pub. L. 106-554). abstract_id: PUBMED:29462078 The Impact of a Health Information Technology-Focused Patient-centered Medical Neighborhood Program Among Medicare Beneficiaries in Primary Care Practices: The Effect on Patient Outcomes and Spending. Background: The Center for Medicare & Medicaid Innovation (CMMI) tests new models of paying for or delivering health care services and expands models that improve health outcomes while lowering medical spending. CMMI gave TransforMED, a national learning and dissemination contractor, a 3-year Health Care Innovation Award (HCIA) to integrate health information technology systems into physician practices. This paper estimates impacts of TransforMED's HCIA-funded program on patient outcomes and Medicare parts A and B spending. Research Design: We compared outcomes for Medicare fee-for-service (FFS) beneficiaries served by 87 treatment practices to outcomes for Medicare FFS beneficiaries served by 286 matched comparison practices, adjusting for differences in outcomes between the 2 groups during a 1-year baseline period. We estimated impacts in 3 evaluation outcome domains: quality-of-care processes, service use, and spending. Results: We estimated the program led to a 7.1% reduction in inpatient admissions and a 5.7% decrease in outpatient emergency department visits. However, there was no evidence of statistically significant effects in outcomes in either the quality-of-care processes or spending domains.
Conclusions: These results indicate that TransforMED's program reduced service use for Medicare FFS beneficiaries, but also show that the program did not have statistically significant favorable impacts in the quality-of-care processes or spending domains. These results suggest that providing practices with population health management and cost-reporting software, along with technical assistance for how to use them, can complement practices' own patient-centered medical home transformation efforts and add meaningfully to their impacts on service use. abstract_id: PUBMED:27295736 Medicare Program; Medicare Shared Savings Program; Accountable Care Organizations--Revised Benchmark Rebasing Methodology, Facilitating Transition to Performance-Based Risk, and Administrative Finality of Financial Calculations. Final rule. Under the Medicare Shared Savings Program (Shared Savings Program), providers of services and suppliers that participate in an Accountable Care Organization (ACO) continue to receive traditional Medicare fee-for-service (FFS) payments under Parts A and B, but the ACO may be eligible to receive a shared savings payment if it meets specified quality and savings requirements. This final rule addresses changes to the Shared Savings Program, including: Modifications to the program's benchmarking methodology, when resetting (rebasing) the ACO's benchmark for a second or subsequent agreement period, to encourage ACOs' continued investment in care coordination and quality improvement; an alternative participation option to encourage ACOs to enter performance-based risk arrangements earlier in their participation under the program; and policies for reopening of payment determinations to make corrections after financial calculations have been performed and ACO shared savings and shared losses for a performance year have been determined. abstract_id: PUBMED:23833805 Medicare and Medicaid programs; requirements for long-term care facilities; hospice services. Final rule. This final rule will revise the requirements that an institution will have to meet in order to qualify to participate as a skilled nursing facility (SNF) in the Medicare program, or as a nursing facility (NF) in the Medicaid program. These requirements will ensure that long-term care (LTC) facilities (that is, SNFs and NFs) that choose to arrange for the provision of hospice care through an agreement with one or more Medicare-certified hospice providers will have in place a written agreement with the hospice that specifies the roles and responsibilities of each entity. This final rule reflects the Centers for Medicare and Medicaid Services' (CMS') commitment to the principles of the President's Executive Order 13563, released on January 18, 2011, titled "Improving Regulation and Regulatory Review." It will improve quality and consistency of care between hospices and LTC facilities in the provision of hospice care to LTC residents. abstract_id: PUBMED:30676355 How Much Does Medication Nonadherence Cost the Medicare Fee-for-Service Program? Background: Medication adherence is associated with lower health care utilization and savings in specific patient populations; however, few empirical estimates exist at the population level. Objective: The main objective of this study was to apply a data-driven approach to obtain population-level estimates of the impact of medication nonadherence among Medicare beneficiaries with chronic conditions.
Research Design: Medicare fee-for-service (FFS) claims data were used to calculate the prevalence of medication nonadherence among individuals with diabetes, heart failure, hypertension, and hyperlipidemia. Per person estimates of avoidable health care utilization and spending associated with medication adherence, adjusted for healthy adherer effects, from prior literature were applied to the number of nonadherent Medicare beneficiaries. Subjects: A 20% random sample of community-dwelling, continuously enrolled Medicare FFS beneficiaries aged 65 years or older with Part D (N=14,657,735) in 2013. Measures: Avoidable health care costs and hospital use from medication nonadherence. Results: Medication nonadherence for diabetes, heart failure, hyperlipidemia, and hypertension resulted in billions of dollars in Medicare FFS expenditures, millions of hospital days, and thousands of emergency department visits that could have been avoided. If the 25% of beneficiaries with hypertension who were nonadherent became adherent, Medicare could save $13.7 billion annually, with over 100,000 emergency department visits and 7 million inpatient hospital days that could be averted. Conclusion: Medication nonadherence places a large resource burden on the Medicare FFS program. Study results provide actionable information for policymakers considering programs to manage chronic conditions. Caution should be used in summing estimates across disease groups, assuming all nonadherent beneficiaries could become adherent, and applying estimates beyond the Medicare FFS population. abstract_id: PUBMED:38100460 The Comprehensive Primary Care Plus Model and Health Care Spending, Service Use, and Quality. Importance: Implemented in 18 regions, Comprehensive Primary Care Plus (CPC+) was the largest US primary care delivery model ever tested. Understanding its association with health outcomes is critical in designing future transformation models. Objective: To test whether CPC+ was associated with lower health care spending and utilization and improved quality of care. Design, Setting, And Participants: Difference-in-differences regression models compared changes in outcomes between the year before CPC+ and 5 intervention years for Medicare fee-for-service beneficiaries attributed to CPC+ and comparison practices. Participants included 1373 track 1 (1 549 585 beneficiaries) and 1515 track 2 (5 347 499 beneficiaries) primary care practices that applied to start CPC+ in 2017 and met minimum care delivery and other eligibility requirements. Comparison groups included 5243 track 1 (5 347 499 beneficiaries) and 3783 track 2 (4 507 499 beneficiaries) practices, matched and weighted to have similar beneficiary-, practice-, and market-level characteristics as CPC+ practices. Interventions: Two-track design involving enhanced (higher for track 2) and alternative payments (track 2 only), care delivery requirements (greater for track 2), data feedback, learning, and health information technology support. Main Outcomes And Measures: The prespecified primary outcome was annualized Medicare Part A and B expenditures per beneficiary per month (PBPM). Secondary outcomes included expenditure categories, utilization (eg, hospitalizations), and claims-based quality-of-care process and outcome measures (eg, recommended tests for patients with diabetes and unplanned readmissions).
Results: Among the CPC+ patients, 5% were Black, 3% were Hispanic, 87% were White, and 5% were of other races (including Asian/Other Pacific Islander and American Indian); 85% of CPC+ patients were older than 65 years and 58% were female. CPC+ was associated with no discernible changes in the total expenditures (track 1: $1.1 PBPM [90% CI, -$4.3 to $6.6], P = .74; track 2: $1.3 [90% CI, -$5 to $7.7], P = .73), and with increases in expenditures including enhanced payments (track 1: $13 [90% CI, $7 to $18], P < .001; track 2: $24 [90% CI, $18 to $31], P < .001). Among secondary outcomes, CPC+ was associated with decreases in emergency department visits starting in year 1, and in acute hospitalizations and acute inpatient expenditures in later years. Associations were more favorable for practices also participating in the Medicare Shared Savings Program and independent practices. CPC+ was not associated with meaningful changes in claims-based quality-of-care measures. Conclusions And Relevance: Although the timing of the associations of CPC+ with reduced utilization and acute inpatient expenditures was consistent with the theory of change and early focus on episodic care management of CPC+, CPC+ was not associated with a reduction in total expenditures over 5 years. Positive interaction between CPC+ and the Shared Savings Program suggests transformation models might be more successful when provider cost-reduction incentives are aligned across specialties. Further adaptations and testing of primary care transformation models, as well as consideration of the larger context in which they operate, are needed. Answer: The provided abstracts do not contain specific information comparing Medicaid payments for a Program of All-inclusive Care for the Elderly (PACE) to fee-for-service long-term care. The abstracts discuss various Medicare programs and initiatives, such as the Medicare Shared Savings Program for Accountable Care Organizations (ACOs) (PUBMED:22046633, PUBMED:27295736), the impact of health information technology-focused patient-centered medical neighborhood programs (PUBMED:29462078), and the Comprehensive Primary Care Plus Model (PUBMED:38100460). They also address issues related to Medicare Advantage disenrollment (PUBMED:24800156), medication nonadherence costs (PUBMED:30676355), and revisions to long-term care facility requirements (PUBMED:23833805). However, none of the abstracts directly compare Medicaid payments for PACE versus fee-for-service long-term care. To answer the question about Medicaid payment differences between PACE and fee-for-service long-term care, specific data on Medicaid reimbursement rates or cost analyses for these two types of care would be required, which is not provided in the abstracts.
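Several of the abstracts above (PUBMED:30575846, PUBMED:34236603, PUBMED:38100460) rest on difference-in-differences estimation. A minimal sketch of that estimator on simulated spending data (Python assumed; every number below is made up for illustration):

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical quarterly Medicare spending per beneficiary (dollars)
treat_pre = rng.normal(3000, 400, 1000)   # intervention group, baseline year
treat_post = rng.normal(2900, 400, 1000)  # intervention group, intervention years
ctrl_pre = rng.normal(3000, 400, 2000)    # comparison group, baseline year
ctrl_post = rng.normal(3000, 400, 2000)   # comparison group, intervention years

# DiD: the treated group's change net of the comparison group's change
did = (treat_post.mean() - treat_pre.mean()) - (ctrl_post.mean() - ctrl_pre.mean())
print(f"difference-in-differences estimate: ${did:,.0f} per beneficiary per quarter")

The estimator nets out both the baseline gap between groups and any trend common to the two groups, which is why these evaluations report changes relative to comparison practices rather than raw before-after differences.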
Instruction: Do the trajectories of dyspnea differ in prevalence and intensity by diagnosis at the end of life? Abstracts: abstract_id: PUBMED:20413056 Do the trajectories of dyspnea differ in prevalence and intensity by diagnosis at the end of life? A consecutive cohort study. Context: Breathlessness reportedly worsens as death approaches for many people, but the differences in intensity and time course between underlying causes are not well described. Objectives: To determine differences in the intensity of breathlessness by diagnosis over time as death approaches in a consecutive cohort seen by a specialist palliative care service. Methods: Patients referred to Silver Chain Hospice Care Service over a period of four years (January 2004 to December 2007) had dyspnea evaluated at every clinical encounter until death. A numeric rating scale (NRS) was used to measure the intensity. Patients were categorized into five clusters (lung cancer, secondary cancer to lung, heart failure, end-stage pulmonary disease, and no identifiable cardiorespiratory cause) at three time points (60-53 [T(3)], 30-23 [T(2)], and 7-0 [T(1)] days before death [T(0)]). Group differences were assessed using analysis of variance. Joinpoint regression models defined significant changes in mean breathlessness intensity. Results: For 5,862 patients, data were collected an average of 20 times (median: 13; 116,982 data points) for an average of 86 days (median: 48). Breathlessness was significantly higher at all three time points in people with noncancer diagnoses. Breathlessness increased significantly at days 10 and 3 before death for people with cancer (P&lt;0.001 for both), but remained unchanged, albeit significantly higher for patients with noncancer diagnoses. In the three months leading to death, the prevalence of "no breathlessness" decreased from 50% to 35%, and the proportion of patients with severe breathlessness (&gt;7 out of 10) increased from 10% to 26%. Conclusion: Prevalence of breathlessness increases rapidly at life's end, especially for people with primary lung cancer; the levels of breathlessness became close to those experienced by people with noncancer diagnoses despite symptom control measures. abstract_id: PUBMED:22469665 Dyspnea prevalence, trajectories, and measurement in critical care and at life's end. Purpose Of Review: Dyspnea is a multidimensional response to an asphyxial threat that heralds respiratory failure. A clinician's ability to treat dyspnea warrants understanding the prevalence, burden, trajectories, and measurement across chronic conditions and exacerbations. This review focuses on two contexts: the intensive care unit and the near death stage of illness. Recent Findings: Dyspnea is the most distressing symptom experienced by critically ill patients at risk for dying. New longitudinal studies reveal varying dyspnea trajectories over time and by diagnosis. Patients with noncancer diagnoses, particularly chronic obstructive pulmonary disease, experience severe dyspnea for months before death, whereas patients with cancer even without apparent cardiorespiratory disease have an escalation of dyspnea near death. The sickest patients and those who are cognitively impaired or unconscious are not represented because self-report measures are used. Thus, dyspnea is underrepresented in most end-of-life cohorts. In most data, the Numeric Rating Scale for the intensity of breathlessness, while unidimensional, is the most valid measure for palliative care because of its simplicity. 
Use of the Respiratory Distress Observation Scale in future studies will permit the inclusion of patients who are unable to self-report dyspnea. Summary: In summary, new evidence is demonstrating the high prevalence and distress associated with dyspnea among critically ill patients and patients near death. abstract_id: PUBMED:31873053 Identifying the Prevalence and Correlates of Caregiver-Reported Symptoms in Home Hospice Patients at the End of Life. Background: Symptoms (e.g., pain, shortness of breath, and fatigue) at the end of life (EoL) are common. Although symptoms can contribute to poor quality of life at the EoL, much remains unknown regarding their prevalence and correlates in home hospice care. Objectives: To determine the prevalence and correlates of caregiver-reported symptoms in home hospice patients during the last week before discharge using the Edmonton Symptom Assessment Scale (ESAS). Design: This is a cross-sectional study measuring perceived patient symptoms using caregiver proxy data. Bivariate and multivariate analyses were conducted to examine patient and caregiver characteristics associated with ESAS scores. Setting/Subjects: Subjects were from an urban nonprofit home hospice organization. Measurements: Symptoms were measured using the ESAS. Results: The mean ESAS score was 51.2 (SD ±17.4). In bivariate analyses, higher perceived symptom score was associated with younger patient age (p < 0.001), younger caregiver age (p < 0.001), having a cancer diagnosis (p = 0.006), and lower caregiver comfort level managing symptoms (p < 0.001). Regression model analyses showed that younger patient age (p = 0.0009, p = 0.0036) and lower caregiver comfort level managing symptoms (p = 0.0047, p < 0.0001) were associated uniquely with higher symptom scores. Conclusions: Multiple symptoms of high severity were perceived by caregivers in the last week on home hospice. Patient age and caregiver comfort level in managing symptoms were associated with higher symptom scores. Further work is needed to improve management and treatment of symptoms in this care setting. abstract_id: PUBMED:29440806 Prevalence and Intensity of Dyspnea in Advanced Cancer and its Impact on Quality of Life. Context: Dyspnea is a subjective, multidimensional experience of breathing discomfort, commonly seen in patients with advanced cancer. To find the impact of dyspnea on the quality of life in this population, it is important to understand the prevalence and factors influencing dyspnea. Aims: This study aimed to determine the prevalence, intensity, and factors influencing dyspnea in advanced cancer and determine its impact on overall quality of life. Settings And Design: This was a prospective cross-sectional study. The prevalence of dyspnea and its impact on quality of life was determined in 500 patients registered with the palliative medicine outpatient department. Subjects And Methods: The patients were asked to fill a set of questionnaires, which included the Cancer Dyspnea Scale (translated and validated Hindi and Marathi versions), visual analog scale for dyspnea and EORTC QLQ C 15 PAL. Details of demographics, symptomatology, and medical data were collected from the case record sheets of the patients. Statistical Analysis Used: Descriptive statistics, univariate, and multiple regression analysis were used to calculate the results. Results: About 44.37% of the patients experienced dyspnea.
Dyspnea increased with worsening anxiety, depression, fatigue, appetite, well-being, pain, lung involvement by primary or metastatic cancer, performance status, and deteriorating overall quality of life and emotional wellbeing. Conclusions: The prevalence of dyspnea in advanced cancer patients is as high as 44.37% and has a negative impact on their overall quality of life. abstract_id: PUBMED:31867978 End-of-Life Trajectories of Patients With Hematological Malignancies and Patients With Advanced Solid Tumors Visiting the Emergency Department: The Need for a Proactive Integrated Care Approach. Purpose: Patients with hematological malignancies (HM) have more unpredictable disease trajectories compared to patients with advanced solid tumors (STs) and miss opportunities for a palliative care approach. They often undergo intensive disease-directed treatments until the end of life with frequent emergency department (ED) visits and in-hospital deaths. Insight into end-of-life trajectories and quality of end-of-life care can support arranging appropriate care according to patients' wishes. Method: Mortality follow-back study comparing end-of-life trajectories of HM and ST patients who died <3 months after their ED visit. Five indicators based on Earle et al. for quality of end-of-life care were assessed: intensive anticancer treatment <3 months, ED visits <6 months, in-hospital death, death in the intensive care unit (ICU), and in-hospice death. Results: We included 78 HM patients and 420 ST patients, with a median age of 63 years; 35% had Eastern Cooperative Oncology Group performance status 3-4. At the ED, common symptoms were dyspnea (22%), pain (18%), and fever (11%). After the ED visit, 91% of HM patients versus 76% of ST patients were hospitalized (P = .001). Median survival was 17 days (95% confidence interval [CI]: 15-19): 15 days in HM patients (95% CI: 10-20) versus 18 days in ST patients (95% CI: 15-21), P = .028. Compared to ST patients, HM patients more often died in hospital (68% vs 30%, P < .0001) and in the ICU or ED (30% vs 3%, P < .0001). Conclusion: Because end-of-life care is more aggressive in HM patients compared to ST patients, a proactive integrated care approach with early start of palliative care alongside curative care is warranted. Timely discussions with patients and family about advance care planning and end-of-life choices can avoid inappropriate care at the end of life. abstract_id: PUBMED:30623244 Neuropathic symptoms, physical and emotional well-being, and quality of life at the end of life. The purpose of this cross-sectional, descriptive study was to assess differences in physical and emotional well-being and quality of life between cancer patients at the end of life with and without neuropathic symptoms. Neuropathic symptoms were defined as numbness and tingling in the hands and/or feet. A secondary analysis of data from two hospices in Central Florida was performed. Adults (n = 717) with a cancer diagnosis, an identified family caregiver, and who were receiving hospice services, were eligible. The prevalence of numbness/tingling in the hands or feet was 40% in this sample of hospice patients with cancer. Participants with neuropathic symptoms of numbness/tingling had a significantly higher prevalence of pain (76.7% vs. 67.0%; p = .006), difficulty with urination (29.4% vs. 20.3%; p = .007), shortness of breath (64.9% vs. 54.1%; p = .005), dizziness/lightheadedness (46.0% vs. 28.2%; p < .001), sweats (35.5% vs.
20.3%; p < .001), worrying (50.7% vs. 37.3%; p = .001), feeling irritable (38.5% vs. 28.7%; p = .008), feeling sad (48.2% vs. 37.8%; p = .008), and difficulty concentrating (46.2% vs. 32.5%; p < .001). They also reported significantly higher overall symptom intensity and symptom distress scores (p < .001), higher pain severity (p = .001) and pain distress (p = .002), and decreased quality of life (p = .002) compared to those without numbness/tingling. Neuropathic symptoms are emotionally distressing at the end of life and associated with higher symptom burden and diminished quality of life. abstract_id: PUBMED:36604427 Systematic review and meta-analysis of prevalence, trajectories, and clinical outcomes for frailty in COPD. This systematic review synthesised measurement and prevalence of frailty in COPD and associations between frailty and adverse health outcomes. We searched Medline, Embase and Web of Science (1 January 2001-8 September 2021) for observational studies in adults with COPD assessing frailty prevalence, trajectories, or association with health-related outcomes. We performed narrative synthesis and random-effects meta-analyses. We found 53 eligible studies using 11 different frailty measures. Most common were frailty phenotype (n = 32), frailty index (n = 5) and Kihon checklist (n = 4). Prevalence estimates varied by frailty definitions, setting, and age (2.6-80.9%). Frailty was associated with mortality (5/7 studies), COPD exacerbation (7/11), hospitalisation (3/4), airflow obstruction (11/14), dyspnoea (15/16), COPD severity (10/12), poorer quality of life (3/4) and disability (1/1). In conclusion, frailty is common among people with COPD and associated with increased risk of adverse outcomes. Proactive identification of frailty may aid risk stratification and identify candidates for targeted intervention. abstract_id: PUBMED:34205278 Clinical Characteristics of the End-of-Life Phase in Children with Life-Limiting Diseases: Retrospective Study from a Single Center for Pediatric Palliative Care. Background: Data on the end-of-life phase of children receiving palliative care are limited. The purpose of this study is to investigate the spectrum of symptoms of terminally ill children, adolescents, and young adults, depending on their underlying disease. Methods: Findings are based on a 4.5-year retrospective study of 89 children who received palliative care before they died, investigating the symptomatology of the last two weeks before death. Results: In this study, the most common clinical symptomatology present in children undergoing end-of-life care includes pain, shortness of breath, anxiety, nausea, and constipation. Out of 89 patients included in this study, 47% suffered from an oncological disease. Oncological patients had a significantly higher symptom burden at the end of life (p < 0.05) compared to other groups, and the intensity of symptoms increased as the underlying disease progressed. The likelihood of experiencing pain and nausea/vomiting was also significantly higher in oncological patients (p = 0.016). Conclusions: We found that the underlying disease is associated with marked differences in the respective leading clinical symptom. Therefore, related to these differences, symptom management has to be adjusted according to the underlying disease, since the underlying disorder seems to exert an influence on the severity of symptoms and thereby on the modality and choice of treatment.
This study is intended to aid underlying disease-specific symptom management in end-of-life care for children, adolescents, and young adults, with a specific focus on end-of-life care in a home environment. abstract_id: PUBMED:33042931 Neonatal End-of-Life Symptom Management. Purpose of review: Despite advances in technology and treatment options, over 15,000 neonates die each year in the United States. The majority of the deaths, with some estimates as high as 80%, are the result of a planned redirection of care or comfort measures only approach to care. When curative or life-prolonging interventions are not available or have been exhausted, parents focus on preserving quality of life and eliminating needless suffering. Parents hope their child will have a peaceful death and will not feel pain. A significant component of end-of-life care is high quality symptom evaluation and management. It is important that neonatal providers are knowledgeable in symptom management to address common sources of suffering and distress for babies and their families at the end-of-life (EOL). Recent findings: Medically complex neonates with life-threatening conditions are a unique patient population and there is little research on end-of-life symptom assessment and management. While there are tools available to assess symptoms for adolescents and adults, there is not a recognized set of tools for the neonatal population. Nonetheless, it is widely accepted that neonates experience significant symptoms at end-of-life. Most commonly acknowledged manifestations are pain, dyspnea, agitation, and secretions. In the absence of data and established guidelines, there is variability in their clinical management. This contributes to provider discomfort and inadequate symptom control. Summary: End-of-life symptom assessment and management is an important component of neonatal end-of-life care. While there remains a paucity of studies and data, it is prudent that providers adequately manage symptoms. Likewise, it is important that providers are educated so that they can effectively guide families through the dying process by discussing disease progression, physical changes, and providing empathetic support. In this review, the authors make recommendations for non-pharmacological and pharmacological management of end-of-life symptoms in neonates. abstract_id: PUBMED:24413610 Prehospital providers' perceptions of emergency calls near life's end. The nature of emergency end-of-life calls is changing as people live longer and die from chronic illnesses. This study explored prehospital providers' perceptions of (1) end-of-life 911 calls, (2) the signs and symptoms of dying, and (3) medical orders for life sustaining treatment (MOLST). The exploratory-descriptive pilot study was survey based and cross-sectional. Calls to nursing homes occur most often (47.8% every shift). The MOLST was seen infrequently (57.9% rarely or never). The most frequent signs and symptoms of dying were diagnosis (76%), hospice involvement (82%), apnea (75%), mottling (55%), and shortness of breath (48%). The MOLST identifies wishes about intubation (74%), resuscitation (74%), life-sustaining treatment (72%), and cardiopulmonary resuscitation (70%). Synergy exists between the fields of prehospital, hospice, and palliative medicine which offers potential for improved education and care. Answer: Yes, the trajectories of dyspnea do differ in prevalence and intensity by diagnosis at the end of life.
Studies have shown that patients with noncancer diagnoses, particularly those with chronic obstructive pulmonary disease (COPD), experience severe dyspnea for months before death. In contrast, patients with cancer, even without apparent cardiorespiratory disease, have an escalation of dyspnea near death (PUBMED:22469665). Specifically, breathlessness was found to be significantly higher at all three time points (60-53 days, 30-23 days, and 7-0 days before death) in people with noncancer diagnoses. For people with cancer, breathlessness increased significantly at days 10 and 3 before death, whereas for patients with noncancer diagnoses it remained unchanged, albeit at significantly higher levels (PUBMED:20413056). Furthermore, in advanced cancer patients, dyspnea prevalence was found to be as high as 44.37%, and it negatively impacted their overall quality of life (PUBMED:29440806). Patients with hematological malignancies (HM) were shown to have more aggressive end-of-life care compared to those with solid tumors (STs), with frequent emergency department (ED) visits and in-hospital deaths, indicating a need for a proactive integrated care approach (PUBMED:31867978). In children with life-limiting diseases, the underlying disease was associated with marked differences in the leading clinical symptoms, including dyspnea, at the end of life. Oncological patients had a significantly higher symptom burden compared to other groups (PUBMED:34205278). Additionally, frailty in COPD patients was associated with increased dyspnea and poorer quality of life, among other adverse health outcomes (PUBMED:36604427). In summary, the trajectories of dyspnea at the end of life vary significantly depending on the underlying diagnosis, with noncancer patients experiencing severe dyspnea for a longer duration before death, and cancer patients experiencing a more acute escalation of dyspnea near death. These differences highlight the importance of diagnosis-specific approaches to managing dyspnea and other symptoms at the end of life.
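As a worked illustration of the trajectory summary in PUBMED:20413056 (Python assumed; the NRS ratings below are synthetic, not the cohort's data), mean breathlessness intensity can be tabulated per time window and diagnosis group, together with the share of severe scores (>7 out of 10):

import numpy as np

rng = np.random.default_rng(2)
windows = {"T3 (days 60-53)": 0, "T2 (days 30-23)": 1, "T1 (days 7-0)": 2}

# Synthetic 0-10 NRS means: cancer rises toward death; non-cancer starts and stays high
group_means = {
    "lung cancer": [3.0, 4.0, 6.0],
    "end-stage pulmonary disease": [6.0, 6.0, 6.5],
}

for group, means in group_means.items():
    for label, idx in windows.items():
        scores = np.clip(rng.normal(means[idx], 2.0, 200), 0, 10)
        severe = (scores > 7).mean()
        print(f"{group:28s} {label}: mean NRS {scores.mean():.1f}, severe (>7): {severe:.0%}")

This kind of window-by-group tabulation is the descriptive core on which the cohort study's ANOVA and joinpoint-regression analyses were layered.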
Instruction: Percutaneous core-needle biopsy of palpable breast tumors. Do we need ultrasound guidance? Abstracts: abstract_id: PUBMED:12221573 Percutaneous core-needle biopsy of palpable breast tumors. Do we need ultrasound guidance? Introduction: Percutaneous core-needle biopsy is widely accepted for preoperative histologic assessment of suspicious breast lesions. The purpose of this study was to assess the impact of continuous sonographic guidance on the diagnostic accuracy of large-core needle biopsy of palpable breast lesions. Method: We analysed 170 breast lesions in a retrospective study. Percutaneous breast biopsies were performed by using a biopsy gun with 14-gauge needles. Eighty-eight biopsies were performed under continuous ultrasound guidance (group II) and 82 biopsies without ultrasound documentation of the procedure (group I). Core needle diagnoses were compared with the patients' final tissue diagnosis based on surgical excisional biopsy. Results: In patient group I, 17 lesions were categorized as core breast biopsy cancer misses (sensitivity 79%). The sensitivity in this group showed an obvious dependency on tumor size. Among the 17 false negative lesions, 13 lesions were 3 cm in mean diameter or smaller. Two false negative findings occurred in group II (sensitivity 98%), with tumor sizes of 0.5 and 1.0 cm. Conclusion: Sonographic guidance is indispensable to ensure adequate diagnostic accuracy for core-needle biopsy of palpable breast lesions. abstract_id: PUBMED:35752516 FNA 2.0: Value of cytopathologist-performed ultrasound-guided core-needle biopsy. In the late 20th century, pathologist-performed palpation-guided fine-needle aspiration (PG-FNA) of superficial masses was popularized in the United States. It brought pathologists out of the laboratory to see patients and raised the hope of decreasing the need for surgical biopsy for diagnostic purposes. This first iteration of minimally invasive tissue sampling could be informally called FNA 1.0. FNA 1.0 had shortcomings, such as in detecting invasion in breast cancer, precisely subtyping lymphomas, aspirating fibrous lesions, and diagnosing sarcomas. The early 21st century brought new hope. Ultrasound guidance became commonly used to guide FNA of both palpable and non-palpable masses. Ultrasound-guided core-needle biopsy was available to complement FNA in select cases. Flow cytometry, immunohistochemistry, fluorescence in situ hybridization, and genomic studies could be done on cell block and core biopsy specimens. These advances in minimally invasive tissue diagnosis could be informally called FNA 2.0. In particular, pathologist-performed ultrasound-guided core-needle biopsy can overcome many of the criticisms and shortcomings of FNA. As pathologists were once leaders in palpation-guided fine-needle aspiration, they now have the opportunity to add pathologist-performed ultrasound-guided core-needle biopsy to their skill set and emerge once again as leaders in minimally invasive tissue diagnosis. This will bring pathology to the next level. abstract_id: PUBMED:25483145 Non-palpable breast cancer malignant on needle core biopsy and no malignancy in surgical excision: how to manage? Objective: Despite the standard management of non-palpable breast cancer (diagnosis by needle core biopsy, accurate preoperative localization), in some cases there is a discrepancy between the malignant histopathological finding on diagnostic biopsy and a negative histopathological finding after surgical excision.
The aim of this study is to evaluate this incidence and classify the cases into three categories: failure of surgical excision after preoperative identification; removal of the tumor already completed by the percutaneous biopsy; and a true false-positive percutaneous biopsy. Patients And Methods: We conducted a study based on a prospective database; all patients included in this study had partial mastectomy for ductal carcinoma in situ or invasive cancer diagnosed by needle core biopsy, with normal/benign findings after surgery. Results: 1863 partial mastectomies were performed in the last three years in our center. Thirty-seven patients (2%) met our study criteria. After discussion of the cases in our multidisciplinary meeting, 6 patients (16%) were considered failures of surgical excision, 26 patients (70%) true removal of the whole lesion by the core biopsy, and 5 patients (13%) true false-positive cores. Discussion And Conclusion: This is the first study to investigate all the factors that influence a negative final histopathological finding on surgical excision of the tumor after a malignant diagnostic needle core biopsy. This rare situation needs a multidisciplinary meeting to analyse all the steps of management, determine the causes of these false results, and find adequate management to solve this problem. abstract_id: PUBMED:16412288 Clinical study of percutaneous ultrasound-guided or freehand core needle biopsy of primary breast cancer. Objective: To evaluate ultrasound-guided or freehand percutaneous core needle biopsy in the histopathological assessment of primary breast cancer lesions and its utility in primary breast cancer treatment. Methods: Retrospective analysis of the clinical data of 697 primary breast cancers examined by means of core needle biopsy. Results: The diagnostic accuracy, false negative rate, and insufficiency rate were 90.5%, 8.9%, and 0.6%, respectively. Diagnostic accuracy was significantly higher with ultrasound guidance compared with palpation guidance (92.7% vs. 88.2%, P < 0.05). There was a nonsignificant trend toward an increased false negative rate with palpation guidance (10.9% vs. 7.1%, P > 0.05). Conclusion: Percutaneous core needle biopsy is reliable in the histopathological assessment of primary breast cancer lesions. Ultrasound guidance was superior to palpation guidance in terms of diagnostic accuracy in this study. abstract_id: PUBMED:24604953 Needle core biopsy for breast lesions: An audit of 467 needle core biopsies. Background: Breast cancer is the commonest cancer among women in urban India. Triple assessment includes clinical, radiological, and cytological assessment of breast lesions. Guided core needle biopsy has replaced fine needle aspiration cytology in most western countries. In resource-poor countries FNAC is still a very valuable and cost-effective method to diagnose breast lesions. Pitfalls include increased rates of non-diagnostic and inadequate smears. Further procedures may be required, and this increases cost, anxiety, and delay in diagnosis. Aims: The aim of this study is to analyze the concordance of radiological and histopathology findings in BI-RADS category 3, 4, and 5 lesions following a core biopsy. Materials And Methods: Data were retrospectively collected from consecutive symptomatic and opportunistic screen-detected patients with abnormalities who underwent ultrasound-guided interventional procedures from Jan 2010 to Aug 2011.
Symptomatic patients underwent clinical examination, mammogram, and breast ultrasound. Women under 35 years of age had only breast ultrasound. Core biopsy was performed under ultrasound guidance, or clinically by a breast surgeon/radiologist, for BI-RADS category 3, 4, and 5 lesions. Statistical Methods: The Chi-square test was used to assess the strength of association between imaging findings and the histopathology results of core biopsy. Results: 437 patients were symptomatic and 30 patients had screen-detected abnormalities. The positive predictive value of BI-RADS 5 lesions for malignancy was 93.25%, and the negative predictive value of BI-RADS category 3 lesions for cancer was 98.4%. The false negative rate of core biopsy was 0.85%. We were able to defer surgery in 60% of the patients with a clearly benign radiological and pathological diagnosis. Conclusion: The PPV and NPV for cancer are high with needle core biopsy in BI-RADS 3, 4, and 5 lesions. Where there is no discordance between clinical, radiological, and pathological findings, surgery can be avoided in benign lesions. While in resource-poor countries FNAC continues to be a valuable method in the diagnosis of palpable and non-palpable breast lesions, needle core biopsy provides the most accurate and optimal diagnostic information. abstract_id: PUBMED:26832175 Comparative Study of Core Needle Biopsy and Fine Needle Aspiration Cytology in Palpable Breast Lumps: Scenario in Developing Nations. Objective: The purpose of this study was to evaluate the utility of core needle biopsy as a diagnostic tool for palpable breast lumps in developing countries as compared to fine needle aspiration cytology. Material And Method: All patients attending the surgery outpatient department with palpable breast lumps were subjected to fine needle aspiration cytology and core needle biopsy by the same operator in a single session. Fine needle aspiration cytology was performed by the standard technique. Core needle biopsy was done freehand using a 14G manual core biopsy needle. Reporting categories of the two techniques were taken from the standard National Health Service Breast Screening Programme criteria and were compared with the final histopathology results. Results: A total of 107 patients underwent fine needle aspiration cytology and core needle biopsy simultaneously. Histopathology was available for 85 cases. Statistical analysis showed no significant difference between the diagnoses offered by core needle biopsy and histopathology, while there was a significant difference between fine needle aspiration cytology and histopathology diagnoses. Conclusion: Core needle biopsy detected more breast carcinomas than fine needle aspiration cytology, with a sensitivity of 95.83% as opposed to 64.58%. Though both techniques were equally specific (100%), core needle biopsy was able to correctly categorize borderline/inadequate lesions into definitely benign and malignant categories. We suggest that core needle biopsy should be preferred over fine needle aspiration cytology for the diagnosis of palpable breast lumps, with fine needle aspiration cytology being reserved for definitely benign lesions.
Materials And Methods: Ninety-eight consecutive patients, each with a palpable breast mass, were referred for US-guided core biopsy by a multidisciplinary team of physicians who specialize in the care of breast diseases. All palpable breast masses were clearly visible on high-resolution US. Ninety-nine core needle biopsies were performed under local anesthesia using a freehand technique, mostly in an outpatient setting. The core needle path through each mass was documented in two orthogonal sections. A mean of 3.4 tissue core samples (range, 1-7) were obtained per patient. Results: Core needle biopsy resulted in the diagnosis of 66 malignancies, two cases of atypical ductal hyperplasia (ADH), and 30 benign diseases of the breast. Surgery with curative intent was performed in 63 breast malignancies, and excisional biopsies were performed for 10 benign diseases (two cases of ADH and eight benign lesions). Twenty-five breast masses were managed nonoperatively: chemotherapy was given in three locally advanced breast cancers, and 3-year follow-up was conducted for 22 benign lesions. Malignancies were correctly diagnosed in all cases. No malignancy was discovered at surgery or during clinical follow-up of ADH, and no benign breast lesions were diagnosed by core needle biopsy. US-guided core needle biopsy is 100% sensitive and specific for palpable breast malignancies, with no false-positive results. A puncture-site ecchymosis was the only morbidity or complication noted. Conclusion: US-guided core needle biopsy is a safe and accurate first diagnostic test for palpable breast masses that require tissue proof. abstract_id: PUBMED:31515971 Value of combined use of fine-needle aspiration and core needle biopsy in palpable breast tumors performed by pathologist: Institut Curie experience. The aim of this study was to determine the accuracy of fine-needle aspiration (FNA) and core needle biopsy (CNB) for palpable breast tumors (PBTs). FNA and CNB of 492 PBTs from 477 patients were analyzed. Tumors were malignant in 473 cases and benign in 19 cases. There was a strong correlation (P > .05) between FNA and CNB in terms of malignancy. Among the 473 malignant tumors, FNA had better accuracy and fewer unsatisfactory results (95.6%; 2.7%) than CNB (94.9%; 4.9%). Among the 19 benign tumors, CNB was accurate in 100% compared to 94.7% using FNA. There were only two (0.4%) cases where the result was unsatisfactory by both FNA and CNB. NPV was 56.3% for FNA, 43.2% for CNB, and 95.0% for FNA and CNB combined. Sensitivity was 97.0% for FNA, 94.7% for CNB, and 99.8% for FNA and CNB combined. PPV and specificity were 100% for FNA and CNB, both separately and combined. Combined use of FNA with CNB is an optimal diagnostic method for PBTs. In our opinion, this should be recommended as the standard for diagnosis of PBTs. abstract_id: PUBMED:10954467 Palpable breast masses: is there a role for percutaneous imaging-guided core biopsy? Objective: The purpose of this study was to evaluate percutaneous imaging-guided core biopsy in the assessment of selected palpable breast masses. Materials And Methods: Of 1388 consecutive breast lesions that had percutaneous imaging-guided core biopsy, 155 (11%) were palpable. Palpable masses referred for percutaneous imaging-guided core biopsy included lesions that were small, deep, mobile, vaguely palpable, or multiple. Biopsy guidance was sonography in 140 lesions (90%) and stereotaxis in 15 (10%).
Surgical correlation or a minimum of 2 years' follow-up was available for 115 palpable masses in 107 women. Medical records, imaging studies, and histologic findings were reviewed. Results: Of 115 palpable breast masses, 98 (85%) were referred by surgeons to the radiology department for percutaneous imaging-guided core biopsy, and 88 (77%) had percutaneous imaging-guided core biopsy on the day of initial evaluation at our institution. Percutaneous imaging-guided core biopsy spared additional diagnostic tissue sampling in 79 (74%) of 107 women, including 57 women with carcinoma and 22 women with benign findings. Percutaneous imaging-guided core biopsy did not spare additional tissue sampling in 28 women (26%), including 15 women in whom surgical biopsy was recommended on the basis of percutaneous biopsy findings and 13 women with benign (n = 7) or malignant (n = 6) percutaneous biopsy findings who chose to undergo diagnostic surgical biopsy. Conclusion: Percutaneous imaging-guided core biopsy is useful in the evaluation of palpable breast masses that are small, deep, mobile, vaguely palpable, or multiple. In this study, percutaneous imaging-guided core biopsy spared additional diagnostic tissue sampling in 74% of women with palpable breast masses. abstract_id: PUBMED:34861968 Ultrasound Guided Core Breast Biopsies. Core needle biopsies are common procedures performed in breast imaging centers. Utilizing ultrasound guidance for biopsy of suspicious findings within the breast and axilla is the standard of care when lesions are visible on ultrasound. Most ultrasound-guided breast biopsies are performed by radiologists interpreting breast imaging who correlate findings across modalities including mammography, ultrasound, and breast MRI; however, interventional radiologists may perform them in some practice settings. Appropriate knowledge of complete breast imaging is important not only for localization but also for determining radiologic-pathologic concordance. Proper localization, technique, and post-biopsy management are essential to the success of the procedure and to providing proper care for patients. This article discusses indications, strategies for accurate targeting, how-to biopsy techniques, post-biopsy considerations, and radiologic-pathologic concordance, incorporating recommendations from the American College of Radiology Practice Parameter for the Performance of Ultrasound-guided Percutaneous Breast Interventional Procedures. Answer: Yes, ultrasound guidance is considered indispensable for ensuring adequate diagnostic accuracy for percutaneous core-needle biopsy of palpable breast lesions. Studies have shown that performing core-needle biopsies under continuous ultrasound guidance significantly improves the sensitivity of the procedure. In a retrospective study comparing biopsies performed with and without ultrasound guidance, the sensitivity was 98% in the ultrasound-guided group compared to 79% in the non-guided group, with a notable dependency on tumor size in the non-guided group. Among the false negatives in the non-guided group, most lesions were 3 cm in mean diameter or smaller, indicating that smaller lesions are particularly at risk of being missed without ultrasound guidance (PUBMED:12221573). Additionally, other studies have supported the superiority of ultrasound-guided core needle biopsy over palpation-guided methods. For instance, one study found that diagnostic accuracy was significantly higher with ultrasound guidance compared to palpation guidance (92.7% vs.
88.2%), and there was a trend toward an increased false negative rate with palpation guidance (PUBMED:16412288). Another study highlighted that ultrasound-guided core needle biopsy is a safe and accurate first diagnostic test for palpable breast masses, with a sensitivity and specificity of 100% for palpable breast malignancies (PUBMED:11698631). Moreover, the combined use of fine-needle aspiration and core needle biopsy, particularly when performed by a pathologist, has been shown to be an optimal diagnostic method for palpable breast tumors, with a combined sensitivity of 99.8% (PUBMED:31515971). This suggests that ultrasound guidance not only improves the accuracy of core needle biopsies but also complements other diagnostic methods like fine-needle aspiration. In conclusion, the evidence strongly supports the use of ultrasound guidance for percutaneous core-needle biopsy of palpable breast tumors to ensure high diagnostic accuracy and reduce the risk of false negatives.
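As a point of reference, the diagnostic-accuracy figures quoted throughout these abstracts (sensitivity, specificity, PPV, NPV) all derive from the same 2x2 confusion-matrix arithmetic. The following is a minimal Python sketch of that computation; the counts in the example are illustrative only and are not taken from any of the cited studies.

# Minimal sketch: deriving the diagnostic-accuracy metrics quoted above
# from 2x2 confusion-matrix counts. All counts here are hypothetical.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard test-accuracy metrics from true/false positive/negative counts."""
    return {
        "sensitivity": tp / (tp + fn),  # share of true cancers the biopsy detects
        "specificity": tn / (tn + fp),  # share of benign lesions called benign
        "ppv": tp / (tp + fp),          # probability a positive biopsy is truly cancer
        "npv": tn / (tn + fn),          # probability a negative biopsy is truly benign
    }

# Illustrative example: 80 true positives, 0 false positives,
# 2 false negatives, 18 true negatives.
for name, value in diagnostic_metrics(tp=80, fp=0, fn=2, tn=18).items():
    print(f"{name}: {value:.1%}")

Note how a false-negative count of just 2 out of 82 cancers already pulls sensitivity down to about 97.6%, which is why even small numbers of missed lesions dominate the comparisons between guidance techniques above.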
Instruction: Are implantable loop recorders useful in detecting arrhythmias in children with unexplained syncope? Abstracts: abstract_id: PUBMED:28052814 The diagnostic yield of implantable loop recorders in unexplained syncope: A systematic review and meta-analysis. Background: Guidelines recommend that implantable loop recorders (ILRs) are used in the evaluation of people with recurrent unexplained syncope in the absence of high-risk criteria, and in high-risk patients after a negative evaluation. The aim of this systematic review was to analyze the diagnostic yield of ILRs in these patients. Methods: We performed a systematic search in order to retrieve studies enrolling adults undergoing ILR implantation for undetermined syncope. The primary outcome was the overall diagnostic yield, defined as the proportion of patients with syncope recurrence and an available ILR recording or an automatic detection of a significant arrhythmia. Secondary outcomes were the proportions of patients with the specific etiologic diagnoses out of the total number of subjects and the proportion with an analyzable ECG recording during symptoms. We used a random effects model for the meta-analyses. Results: Forty-nine studies, enrolling 4381 subjects, were included. The overall diagnostic yield was 43.9% (95% CI=40.2%, 47.6%; I²=79.8%). The proportions of subjects finally diagnosed with arrhythmic syncope, ventricular arrhythmias, supraventricular arrhythmias, and bradyarrhythmias were 26.5%, 2.7%, 4.9%, and 18.2%, respectively. The proportion with an analyzable ECG recording during symptoms was 89.5% (95% CI=86.1%, 92.1%; 1236 subjects; 36 studies; I²=44.9%). Median time to diagnosis was 134 days. Heterogeneity is an important limitation to be acknowledged. Conclusions: About half of the unexplained syncope subjects implanted with an ILR were diagnosed, and around 50% of them had an arrhythmia. Life-threatening arrhythmias as well as ILR complications and death due to arrhythmic events were very rare. abstract_id: PUBMED:27812742 Implantable loop recorders in the diagnosis of syncope. In syncope patients, it is essential to make the right diagnosis with respect to the underlying etiology. Cardiac (arrhythmic and structural) syncopal episodes carry untoward prognostic implications compared with reflex syncope. While rhythm-symptom correlation of a spontaneous syncopal episode is key to making the correct diagnosis, in cases of unclear syncope the early implantation of a loop recorder leads to a faster, more efficacious, and more cost-effective diagnosis. This review article summarizes the current data regarding the diagnostic accuracy and clinical role of implantable loop recorders. It outlines the superiority of loop recorders in the management of unclear syncope according to present knowledge. abstract_id: PUBMED:38326152 Bradycardia risk stratification with implantable loop recorder after unexplained syncope. Background: An implantable loop recorder is an effective tool for diagnosing unexplained syncope. However, after a first episode in non-high-risk patients, the usefulness of implantable loop recorder implantation remains unclear. Aims: To analyse relevant risk factors for significant bradycardia in order to identify patients who do or do not benefit from implantable loop recorder implantation. Also, to study whether implantable loop recorder implantation with remote monitoring is associated with less recurrence of traumatic syncope.
Methods: This was a retrospective monocentric study including patients with implantable loop recorder implantation after unexplained syncope, using remote monitoring and iterative consultations. Results: Two hundred and thirty-seven patients were implanted for unexplained syncope. Significant bradycardia occurred in 53 patients (22.4%): 23 (43.4%) caused by paroxysmal atrioventricular block and 30 (56.6%) caused by sinus node dysfunction, leading to permanent pacemaker implantation in 48 patients. Compared with younger patients, there was a 3.46-fold increase (95% confidence interval 1.92-6.23; P<0.0001) in the risk of significant bradycardia in patients aged ≥60 years. Based on multivariable analysis, only "typical syncope" was associated with significant bradycardia occurrence (hazard ratio 3.14, 95% confidence interval 1.75-5.65; P=0.0001). There was no recurrence of significant bradycardia with traumatic complications among patients implanted for traumatic syncope. Conclusions: This study shows that: (1) implantable loop recorders identify more significant bradycardia in patients aged ≥60 years presenting with a first non-high-risk typical syncope, suggesting that an implantable loop recorder should be implanted after a first episode of unexplained syncope in such conditions; and (2) after traumatic syncope, implantable loop recorder implantation is safe and is associated with little or no recurrence of traumatic syncope. abstract_id: PUBMED:34389905 The Role of External Loop Recorders in Arrhythmia-Related Symptoms in Children: A Single Center Experience. In this study, we report our experience with the use of external loop recorders (ELRs) in terms of diagnostic efficiency according to symptoms and symptom-rhythm correlation in pediatric patients. We evaluated ELRs applied to 178 patients between April 2017 and November 2020 at our center. The mean age of the 172 patients included in the study was 13.6 ± 3.8 years, and 69.8% were female. ELR indications were palpitations in 98 (56.9%) cases, chest pain and palpitations in 43 (25%) cases, presyncope/syncope in 28 (16.2%) cases, and pacemaker/implantable cardioverter-defibrillator (ICD) problems in 3 (0.2%) cases. ELR recording times were 14.2 ± 9.7 days on average, ranging from 2 to 67 days. While the symptom-rhythm correlation was 29.1% overall, when the indications were evaluated one by one, this correlation was found to be 30.2% for palpitations, 34.7% for chest pain and palpitations, and 10.7% for presyncope/syncope. The total diagnostic efficiency was 68.1%. In the follow-up of ELR cases, a total of 139 (80.8%) patients received clinical follow-up without medication, 15 (8.8%) patients received medical treatment, and 18 (10.4%) patients underwent EPS. The cardiac ELR system is useful in detecting underlying arrhythmias. Demonstrating sinus tachycardia at the time of the symptom may be seen as a negative finding, but documenting the rhythm while the patient is experiencing symptoms is diagnostically valuable and may help avoid further investigation with costly and invasive diagnostic procedures. For diagnostic efficiency and cost-effectiveness, the optimal recording time is 2 weeks, but it should be extended to 4 weeks in cases such as presyncope/syncope that cannot be explained with 2 weeks of ELR use. abstract_id: PUBMED:27878365 Implantable loop recorders of the Reveal family (Medtronic). This review explains the implantable loop recorders Medtronic Reveal XT and Medtronic Reveal LINQ. Technical specifications of the two devices are described in great detail.
Additional tips for implantation as well as device programming are given, including specific considerations for follow-up. abstract_id: PUBMED:19694968 Are implantable loop recorders useful in detecting arrhythmias in children with unexplained syncope? Introduction: Syncope and presyncope are symptoms that occur infrequently in children, are unpredictable, and represent a diagnostic challenge to the physician. Conventional diagnostic investigations are often unable to establish a diagnosis, making it difficult to determine patient risk and direct appropriate therapy. The implantable loop recorder (ILR) is a medical device that was created for prolonged monitoring of heart rate and rhythm and has been used in a limited number of pediatric studies in which the cause of the syncope is unknown. Methods: This is a retrospective review of the clinical, surgical, and follow-up data of patients who had ILR devices implanted after conventional testing failed to identify a cause for their symptoms. Results: The diagnostic yield of the ILR device in unmasking the cause of symptoms in our patient cohort was 64%. In our study, manually activated events accounted for 71% of all documented episodes and 68% of the cases involving hemodynamically important arrhythmias or transient rhythm changes. The ILR device can be safely implanted and explanted in children without significant morbidity in most cases. None of our patients experienced any long-term adverse events associated with placement of the device, and all were alive at last follow-up. Conclusions: The ILR device is a useful tool to help unmask arrhythmias as a cause of unexplained syncope in children. Patient selection for who should and should not have an ILR device implanted will continue to influence its diagnostic utility and generate controversy among stakeholders. abstract_id: PUBMED:17074636 Digital implantable loop recorders in the investigation of syncope in children: benefits and limitations. Background: Conventional diagnostic methods for detecting arrhythmogenic causes of syncope in children are poor. Digital implantable loop recorders are of proven value in adults. Objectives: The purpose of this study was to evaluate digital implantable loop recorders in the investigation of syncope in children. Methods: We reviewed the clinical and technical records of 18 consecutive patients (6 female and 12 male; age ≤16 years) who received an implantable loop recorder from 1999 to 2005. Results: Median age at implantation was 11.3 years (range 4.6-16.5 years). Median duration of the device in situ was 18 months (range 5-36 months). Median time to diagnosis was 6 months (range 1 day to 17 months). Two patients had a congenital heart defect. Ten children (56%) had an event, 9 (50%) of whom had diagnostic information; 5 (28%) had profound bradycardia or asystole; 2 (11%) had polymorphic ventricular tachycardia (VT); and 1 child had supraventricular tachycardia. One patient died, but the automatically activated recording was overwritten after death. One child had sinus rhythm during syncope. One child with polymorphic VT had no auto-activation on two occasions, and the third activation was triggered by asystole after the VT terminated. Sixteen patients (89%) had false-positive activations as a result of either artifact or sinus tachycardia. Conclusion: The digital implantable loop recorder is a useful diagnostic modality in children with unexplained syncope.
However, the automatic detection algorithm is imperfect, missing genuine polymorphic VT and frequently interpreting muscle tremors as VT. Because of continuous overwriting by automatic detection, genuine arrhythmias may be overwritten by artifact. abstract_id: PUBMED:26115822 Value of Ambulatory Electrocardiographic Monitoring in Syncope. Implantable loop recorders (ILRs) continuously monitor electrocardiographic signals and perform real-time analysis of heart rhythm for up to 36 months. ILRs are used to evaluate transitory loss of consciousness of possible arrhythmic origin, particularly unexplained syncope, and to evaluate difficult cases of epilepsy and unexplained falls, although current indications for their application in these areas are less clearly defined. This article analyzes the current indications for ILRs according to the European Society of Cardiology guidelines on the management of syncope and the European Heart Rhythm Association guidelines on the use of implantable and external electrocardiogram loop recorders, and their limitations. abstract_id: PUBMED:22947628 The usefulness of implantable loop recorders for evaluation of unexplained syncope and palpitations. Objective: Syncope is a common complaint, and determining the underlying cause can be difficult despite extensive evaluation. The purpose of this study was to evaluate the usefulness of an implantable loop recorder for patients with unexplained syncope and palpitations. Material And Methods: This was a retrospective analysis of 18 patients, five of whom still have the device implanted. All patients had undergone extensive evaluation for their symptoms before getting the loop recorder implanted, and this was therefore a highly select group. Results: Of the thirteen patients in whom use of the device was completed, the mean age was 65±20 years. The loop recorder was in use for a mean time of 20±13 months. Unexplained syncope (eleven of thirteen) was the most common indication. The other two received the loop recorder for unexplained palpitations. Four patients had sick sinus syndrome during monitoring, three had supraventricular tachycardia, and one had ventricular tachycardia. A further three had typical symptoms but no arrhythmia was recorded, excluding arrhythmia as a cause. Two patients had no symptoms the entire time they had the loop recorder. Of the five patients still with the device, three had syncope as the indication for monitoring and two have the device as a means of evaluating the results of treatment for arrhythmia. Conclusion: This study of our initial experience with implantable loop recorders shows that these devices can be useful in the investigation of the causes of syncope and palpitations. abstract_id: PUBMED:27873022 Current rare indications and future directions for implantable loop recorders. The scope of application for implantable loop recorders has shifted away from the evaluation of unclear palpitations and syncope episodes to more complex conditions. This article focuses on rare indications of growing importance, such as rhythm monitoring after ablation of atrial fibrillation or after cryptogenic stroke. Furthermore, forthcoming applications in various clinical settings are described, e.g., arrhythmia detection after myocardial infarction, after catheter-based valve interventions, in heart failure, and in cardiomyopathies. Enhancement of the capabilities of implantable loop recorders could broaden their fields of use.
Answer: Yes, implantable loop recorders (ILRs) are useful in detecting arrhythmias in children with unexplained syncope. The diagnostic yield of ILR devices in unmasking the cause for symptoms in children was found to be 64%, with manually activated events accounting for a significant portion of documented episodes that involved hemodynamically important arrhythmias or transient rhythm changes (PUBMED:19694968). Additionally, the digital implantable loop recorder is a useful diagnostic modality in children with unexplained syncope, although the automatic detection algorithm is not perfect and may miss genuine arrhythmias or interpret muscle tremors as arrhythmias (PUBMED:17074636). Despite these limitations, ILRs have been shown to be a valuable tool in the pediatric population for identifying arrhythmias that may be the cause of unexplained syncope.
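For readers curious how a pooled diagnostic yield like the 43.9% reported in the meta-analysis above (PUBMED:28052814) is typically obtained, the following is a minimal Python sketch of a DerSimonian-Laird random-effects pool of proportions on the logit scale. The abstract states only that a random-effects model was used; the exact method and software are not specified, and the study counts below are hypothetical.

import math

def pooled_proportion(events: list[int], totals: list[int]) -> tuple[float, float, float]:
    """DerSimonian-Laird random-effects pool of proportions on the logit scale.
    Returns (pooled proportion, 95% CI lower bound, 95% CI upper bound)."""
    # Logit-transform each study's proportion; approximate variance = 1/e + 1/(n - e).
    y = [math.log(e / (n - e)) for e, n in zip(events, totals)]
    v = [1 / e + 1 / (n - e) for e, n in zip(events, totals)]
    w = [1 / vi for vi in v]  # fixed-effect (inverse-variance) weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Cochran's Q quantifies between-study heterogeneity (cf. the I² values quoted above).
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-study variance estimate
    # Random-effects weights add tau^2 to each within-study variance.
    w_re = [1 / (vi + tau2) for vi in v]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))

    def inv_logit(x: float) -> float:
        return 1 / (1 + math.exp(-x))

    return inv_logit(y_re), inv_logit(y_re - 1.96 * se), inv_logit(y_re + 1.96 * se)

# Hypothetical example: diagnoses / implanted patients in four ILR studies.
est, low, high = pooled_proportion([40, 55, 30, 70], [90, 120, 75, 160])
print(f"pooled diagnostic yield: {est:.1%} (95% CI {low:.1%}, {high:.1%})")

The logit transform keeps the pooled estimate and its confidence bounds inside (0, 1), and the tau-squared term widens the interval when studies disagree, which matters here given the substantial heterogeneity (I² near 80%) the meta-analysis authors acknowledge.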