Instruction: Is periprocedural sedation during acute stroke therapy associated with poorer functional outcomes? Abstracts: abstract_id: PUBMED:20431708 Is periprocedural sedation during acute stroke therapy associated with poorer functional outcomes? Background: To safely perform acute intra-arterial revascularization procedures, use of sedative medications and paralytics is often necessary. During the conduct of the Interventional Management of Stroke trials (I and II), the level of sedation used periprocedurally varied. At some institutions, patients were paralyzed and intubated as part of the procedural standard of care while at other institutions no routine sedation protocol was followed. The aim of this study was to identify patient characteristics that would correlate with the need for deeper sedation and to explore whether levels of sedation relate to patient outcome. Methods: 75 of 81 patients in the Interventional Management of Stroke II Study were studied. Patients had anterior circulation strokes and underwent angiography and/or intervention. Four sedation categories were defined and tested for factors potentially associated with the level of sedation. Clinical outcomes were also analyzed, including successful angiographic reperfusion and the occurrence of clinical complications. Results: Only baseline National Institutes of Health Stroke Scale varied significantly by sedation category (p=0.01). Patients that were in the lower sedation category fared better, having a higher rate of good outcomes (p<0.01), lower death rates (p=0.02) and higher successful angiographic reperfusion rates (p=0.01). There was a significantly higher infection rate in patients receiving heavy sedation or pharmacologic paralysis (p=0.02) and a trend towards fewer groin related complications. Conclusion: In this small sample, patients not receiving sedation fared better, had higher rates of successful angiographic reperfusion and had fewer complications. 
Further examination of the indications for procedural sedation or paralysis and their effect on outcome is warranted. abstract_id: PUBMED:30037954 Is periprocedural sedation during acute stroke therapy associated with poorer functional outcomes? Background: To safely perform acute intra-arterial revascularization procedures, use of sedative medications and paralytics is often necessary. During the conduct of the Interventional Management of Stroke trials (I and II), the level of sedation used periprocedurally varied. At some institutions, patients were paralyzed and intubated as part of the procedural standard of care while at other institutions no routine sedation protocol was followed. The aim of this study was to identify patient characteristics that would correlate with the need for deeper sedation and to explore whether levels of sedation relate to patient outcome. Methods: 75 of 81 patients in the Interventional Management of Stroke II Study were studied. Patients had anterior circulation strokes and underwent angiography and/or intervention. Four sedation categories were defined and tested for factors potentially associated with the level of sedation. Clinical outcomes were also analyzed, including successful angiographic reperfusion and the occurrence of clinical complications. Results: Only baseline National Institutes of Health Stroke Scale varied significantly by sedation category (p=0.01). Patients that were in the lower sedation category fared better, having a higher rate of good outcomes (p<0.01), lower death rates (p=0.02) and higher successful angiographic reperfusion rates (p=0.01). There was a significantly higher infection rate in patients receiving heavy sedation or pharmacologic paralysis (p=0.02) and a trend towards fewer groin related complications. Conclusion: In this small sample, patients not receiving sedation fared better, had higher rates of successful angiographic reperfusion and had fewer complications. 
Further examination of the indications for procedural sedation or paralysis and their effect on outcome is warranted. abstract_id: PUBMED:31295591 Postinterventional Sedation Worsens Functional Outcomes in Patients with Acute Ischemic Stroke Treated with Endovascular Therapy. Background: Postinterventional sedation is commonly used in clinical practice to ensure patient safety and comfort. Although sedation or anesthesia during thrombectomy has been well studied, the association between postinterventional sedation and functional outcomes in endovascularly treated patients with acute ischemic stroke (AIS) has yet to be investigated. We describe the association between postinterventional sedation and functional outcomes in patients with AIS treated with endovascular therapy (EVT). Methods: This observational study was based on a prospective registry. Patients with AIS treated with EVT from January 2013 to August 2017 at Xuanwu Hospital, Capital Medical University were included. Patients receiving postinterventional sedation were compared with patients not receiving sedation. The primary outcome was the 3 months modified Rankin Scale score. Results: A total of 268 patients were eligible for study: 112 patients (41.8%) receiving postinterventional sedation and 156 patients (58.2%) without sedation. At 3 months follow-up, the median modified Rankin Scale score was 4 (interquartile range, 3-6) in the sedation group and 2 (interquartile range, 1-4) in the nonsedation group (P < 0.001). Multivariable regression analysis suggested that the need for postinterventional sedation was associated with unfavorable outcomes, with an odds ratio of 0.20 for functional independence (95% confidence interval [CI], 0.078-0.487; P < 0.001), 0.06 for freedom from disability (95% CI, 0.017-0.228; P < 0.001), and 8.37 for death (95% CI, 2.196-31.889; P = 0.002). 
Conclusions: Postinterventional sedation worsens functional outcomes in patients with AIS with large-vessel occlusions treated with EVT. Whether the sedation is a causative factor or a surrogate for poor functional outcomes remains to be determined. abstract_id: PUBMED:33438184 Impact of conscious sedation and general anesthesia on periprocedural outcomes in Watchman left atrial appendage closure. Background: Transcatheter left atrial appendage closure (LAAC) is performed either in conscious sedation (CS) or general anesthesia (GA), and limited data exist regarding clinical outcomes for the two approaches. The aim of the study was to analyze the effect of CS versus GA on acute outcomes in a large patient cohort undergoing LAAC with a Watchman occluder. Methods: A cohort of 521 consecutive patients underwent LAAC with Watchman occluders at two centers (REGIOMED hospitals, Germany) between 2012 and 2018. One site performed 303 consecutive LAAC procedures in GA, and the other site performed 218 consecutive procedures in CS. The safety endpoint was a composite of major periprocedural complications and postoperative pneumonia. The efficacy endpoint was defined as device success. Results: After a 1:1 propensity score matching, 196 (CS) vs. 115 (GA) patients could be compared. In 5 (2.6%) cases CS was converted to GA. The primary safety endpoint (3.5% [CS] vs. 7.0% [GA], p = 0.18) and its components (major periprocedural complications: 2.5% vs. 3.5%, p = 0.73; postoperative pneumonia: 2.6% vs. 4.3%, p = 0.51) did not differ between the groups. Also, device success was comparable (96.9% vs. 93.9%, p = 0.24). Conclusions: In patients undergoing LAAC with the Watchman device, conscious sedation and general anesthesia showed comparable device success rates and safety outcomes. The type of anesthesia for LAAC may therefore be tailored to patient comorbidities, operator experience, and hospital logistics. 
abstract_id: PUBMED:37010829 Outcomes After Endovascular Therapy With Procedural Sedation vs General Anesthesia in Patients With Acute Ischemic Stroke: The AMETIS Randomized Clinical Trial. Importance: General anesthesia and procedural sedation are common practice for mechanical thrombectomy in acute ischemic stroke. However, risks and benefits of each strategy are unclear. Objective: To determine whether general anesthesia or procedural sedation for anterior circulation large-vessel occlusion acute ischemic stroke thrombectomy are associated with a difference in periprocedural complications and 3-month functional outcome. Design, Setting, And Participants: This open-label, blinded end point randomized clinical trial was conducted between August 2017 and February 2020, with final follow-up in May 2020, at 10 centers in France. Adults with occlusion of the intracranial internal carotid artery and/or the proximal middle cerebral artery treated with thrombectomy were enrolled. Interventions: Patients were assigned to receive general anesthesia with tracheal intubation (n = 135) or procedural sedation (n = 138). Main Outcomes And Measures: The prespecified primary composite outcome was functional independence (a score of 0 to 2 on the modified Rankin Scale, which ranges from 0 [no neurologic disability] to 6 [death]) at 90 days and absence of major periprocedural complications (procedure-related serious adverse events, pneumonia, myocardial infarction, cardiogenic acute pulmonary edema, or malignant stroke) at 7 days. Results: Among 273 patients evaluable for the primary outcome in the modified intention-to-treat population, 142 (52.0%) were women, and the mean (SD) age was 71.6 (13.8) years. The primary outcome occurred in 38 of 135 patients (28.2%) assigned to general anesthesia and in 50 of 138 patients (36.2%) assigned to procedural sedation (absolute difference, 8.1 percentage points; 95% CI, -2.3 to 19.1; P = .15). 
At 90 days, the rate of patients achieving functional independence was 33.3% (45 of 135) with general anesthesia and 39.1% (54 of 138) with procedural sedation (relative risk, 1.18; 95% CI, 0.86-1.61; P = .32). The rate of patients without major periprocedural complications at 7 days was 65.9% (89 of 135) with general anesthesia and 67.4% (93 of 138) with procedural sedation (relative risk, 1.02; 95% CI, 0.86-1.21; P = .80). Conclusions And Relevance: In patients treated with mechanical thrombectomy for anterior circulation acute ischemic stroke, general anesthesia and procedural sedation were associated with similar rates of functional independence and major periprocedural complications. Trial Registration: ClinicalTrials.gov Identifier: NCT03229148. abstract_id: PUBMED:28919313 Periprocedural Outcomes of Carotid Artery Stenting in Elderly Patients. Background: Carotid artery stenting has emerged as an alternative to carotid endarterectomy especially in patients with high risk of carotid endarterectomy. Older age (≥80 years old) was recognized as one of the high risk factors of carotid endarterectomy. However, the association between older age and increased risk of adverse events for carotid artery stenting has been reported. The purpose of this study was to evaluate the association between age and periprocedural outcomes after carotid artery stenting. Methods: A total of 126 symptomatic and asymptomatic cases of carotid artery stenosis were treated with tailored carotid artery stenting. The type of stents and embolic protection devices were chosen according to clinical and morphologic characteristics of the patients. Procedural, imaging, and clinical outcomes were retrospectively assessed and compared between the elderly patients group (≥80 years old) and the non-elderly patients group (<80 years old). Results: Clinical and morphologic characteristics except for dyslipidemia were not significantly different between the 2 groups. 
Periprocedural neurologic complications were not significantly different between the 2 groups (P = .095). Minor stroke occurred more frequently in the elderly patients group (P = .021). However, the frequency of major stroke was not significantly different between the 2 groups (P = 1). Presence of new ischemic lesions on postprocedural examination was not significantly different between the 2 groups (P = .84). Myocardial infarction and death did not occur in either group. Conclusions: Carotid artery stenting can be performed safely in elderly patients, comparable with non-elderly patients. abstract_id: PUBMED:31593903 The impact of post-dilatation on periprocedural outcomes during carotid artery stenting: A single-center experience. Background And Aims: Carotid artery stenting (CAS) is an accepted treatment modality for carotid artery disease. However, CAS is associated with periprocedural embolic events, and the effect of balloon post-dilatation has not been sufficiently investigated in large studies. We assessed the effect of post-dilatation on periprocedural outcomes during CAS. Methods: The study included 128 patients who underwent CAS. The patients were divided into groups according to whether post-dilatation was (post-dilatation [+], group 1) or was not (post-dilatation [-], group 2) performed after stent deployment. Major adverse events were defined as death, minor or major stroke, and transient ischemic attack at 30 days. Silent ischemia was assessed using diffusion-weighted magnetic resonance imaging. Results: No significant between-group differences were found in baseline characteristics, comorbid diseases, or lesion characteristics. The degree of stenosis and procedure duration were greater in group 1 than in group 2. The rate of major adverse events at 30 days was similar between the two groups (5.1% vs. 4.3%, p = 0.844). The silent ischemia rate and number of high-intensity signals were higher in group 1 than in group 2 (45.8% vs. 
26.1%, p = 0.020 and 1.01 [1.2] vs. 0.42 [0.79], p = 0.002). Multivariate analysis revealed that post-dilatation was associated with a 2.4-fold increase in silent ischemia (95% confidence interval: 1.15-5.20, p = 0.020). Conclusions: Although post-dilatation was not associated with an increase in major adverse events, it significantly increased the incidence of periprocedural silent ischemia. Therefore, post-dilatation should be performed only in cases with severe residual stenosis after CAS. abstract_id: PUBMED:30234901 Diabetes and periprocedural outcomes in patients treated with rotablation during percutaneous coronary interventions. Background: This study aims to assess differences in periprocedural outcomes among diabetic and non-diabetic patients treated with percutaneous coronary intervention (PCI) and rotational atherectomy (RA). Methods: Under assessment were 221,187 patients from the Polish Cardiovascular Intervention Society national registry (ORPKI) including all PCIs performed in Poland in 2015 and 2016. Data were extracted for 975 patients treated with RA - 336 (34.5%) diabetics and 639 (65.5%) non-diabetics. Periprocedural complications were defined as overall rate or particular complications such as deaths, no-reflows, perforations, dissections, cerebral strokes or bleedings. Multivariate analysis was performed to assess predictors of periprocedural complications. Results: The mean age was similar in diabetics and non-diabetics (70.9 ± 9.0 vs. 72.1 ± 9.9; p = 0.06). Diabetics were more often females (p < 0.01), with arterial hypertension (p < 0.01), kidney failure (p < 0.01) and prior myocardial infarction (p = 0.01). No significant differences were observed in overall or individual periprocedural complications; angiographic success was expressed as thrombolysis in myocardial infarction grade 3 flow after PCI. 
At baseline, de-novo lesions accounted for 96.5% in diabetics and 99% in non-diabetics (p < 0.01), while the overall rate of restenosis was 3.5% and 1%, respectively (p < 0.01). Diabetes was an independent predictor of periprocedural complications in the overall group of patients treated with PCI (OR 1.11, 95% CI 1.04-1.194; p < 0.001). Conclusions: The negative impact of diabetes on the incidence of periprocedural complications and angiographic effectiveness in the group of patients treated with RA is mitigated in comparison to the non-RA group. abstract_id: PUBMED:32634499 The prognostic significance of periprocedural infarction in the era of potent antithrombotic therapy. The PRAGUE-18 substudy. Background: The prognostic significance of periprocedural myocardial infarction (MI) remains controversial. Methods And Results: The study aims to investigate the incidence of periprocedural MI in the era of high sensitivity diagnostic markers and intense antithrombotics, and its impact on early outcomes of patients with acute MI treated with primary angioplasty (pPCI). Data from the PRAGUE-18 (prasugrel versus ticagrelor in pPCI) study were analyzed. The primary net-clinical endpoint (EP) included death, spontaneous MI, stroke, severe bleeding, and revascularization at day 7. The key secondary efficacy EP included cardiovascular death, spontaneous MI, and stroke within 30 days. The incidence of peri-pPCI MI was 2.3% (N = 28) in 1230 study patients. The net-clinical EP occurred in 10.7% of patients with, and in 3.6% of patients without, peri-pPCI MI (HR 2.92; 95% CI 0.91-9.38; P = 0.059). The key efficacy EP was 10.7% and 3.2%, respectively (HR 3.44; 95% CI 1.06-11.13; P = 0.028). Patients with periprocedural MI were at a higher risk of spontaneous MI (HR 6.19; 95% CI 1.41-27.24; P = 0.006) and stent thrombosis (HR 10.77; 95% CI 2.29-50.70; P = 0.003) within 30 days. 
Age, hyperlipidemia, multi-vessel disease, post-procedural TIMI <3, pPCI on circumflex coronary artery, and periprocedural GP IIb/IIIa inhibitor were independent predictors of peri-pPCI MI. Conclusions: In the era of intense antithrombotic therapy, the occurrence of peri-pPCI MI is, despite highly sensitive diagnostic markers, a rare complication, and is associated with an increased risk of early reinfarction and stent thrombosis. abstract_id: PUBMED:30048983 General Anesthesia may have Similar Outcomes with Conscious Sedation in Thrombectomy Patients with Acute Ischemic Stroke: A Real-World Registry in China. Background And Purpose: Clinical trials showed that anesthesia may not influence the functional outcome in stroke patients with endovascular therapy; however, data are lacking in China. Using real-world registry data, our study aims to compare the effects of general anesthesia or conscious sedation on functional outcomes in stroke patients treated with thrombectomy in China. Methods: Consecutive patients with acute anterior circulation stroke receiving thrombectomy in 21 stroke centers between January 2014 and June 2016 were included in this study. The propensity score analysis with a 1:1 ratio was used to match the baseline variables between patients with general anesthesia and conscious sedation. The 90-day modified Rankin Scale (mRS), symptomatic intracranial hemorrhage (sICH), and death were compared between groups. Results: Of the 698 patients undergoing endovascular treatment, 138 were treated with general anesthesia and 560 with conscious sedation. After propensity score matching, 114 general anesthesia and 114 conscious sedation patients were matched. The proportions of patients with 90-day mRS 0-2 were not significantly different between general anesthesia and conscious sedation groups (41.2% [47/114] vs. 46.5% [53/114], p = 0.470), nor were the rates of sICH (21.9% [25/114] vs. 12.3% [14/114], p = 0.072) and 90-day mortality (31.6% [36/114] vs. 
21.9% [25/114], p = 0.145). Conclusion: Anesthesia patterns may have no significant impacts on clinical outcomes in patients with acute anterior circulation occlusion stroke undergoing endovascular treatment in the real-world practice in China. Answer: Based on the provided abstracts, there is evidence to suggest that periprocedural sedation during acute stroke therapy may be associated with poorer functional outcomes. In the Interventional Management of Stroke II Study, patients who were in the lower sedation category had a higher rate of good outcomes, lower death rates, and higher successful angiographic reperfusion rates compared to those receiving heavy sedation or pharmacologic paralysis (PUBMED:20431708; PUBMED:30037954). Additionally, an observational study based on a prospective registry found that postinterventional sedation was associated with unfavorable outcomes, including a lower likelihood of functional independence and freedom from disability, as well as a higher risk of death (PUBMED:31295591). Furthermore, the AMETIS randomized clinical trial did not find a significant difference in functional independence and major periprocedural complications between general anesthesia and procedural sedation, suggesting that the type of sedation may not be the sole factor influencing outcomes (PUBMED:37010829). However, the study on post-dilatation during carotid artery stenting indicated that post-dilatation was associated with a significant increase in silent ischemia, although it did not increase major adverse events (PUBMED:31593903). In the context of carotid artery stenting in elderly patients, periprocedural neurologic complications were not significantly different between elderly and non-elderly groups, suggesting that carotid artery stenting can be performed safely in elderly patients (PUBMED:28919313). 
Additionally, a study on the impact of diabetes on periprocedural outcomes during percutaneous coronary interventions with rotablation found that diabetes was an independent predictor of periprocedural complications (PUBMED:30234901). Lastly, a real-world registry in China comparing general anesthesia and conscious sedation in thrombectomy patients with acute ischemic stroke found no significant impacts on clinical outcomes between the two anesthesia patterns (PUBMED:30048983). In conclusion, while some studies suggest that periprocedural sedation may be associated with poorer functional outcomes in acute stroke therapy, the type of sedation and other factors such as patient characteristics and procedural specifics also play a role. Further examination of the indications for procedural sedation or paralysis and their effect on outcome is warranted (PUBMED:20431708; PUBMED:30037954).
Instruction: Does total hip arthroplasty restore native hip anatomy? Abstracts: abstract_id: PUBMED:24966079 Does total hip arthroplasty restore native hip anatomy? three-dimensional reconstruction analysis. Purpose: Component orientations and positions in total hip arthroplasty (THA) are important parameters in restoring hip function. However, measurements using plain radiographs and 2D computed tomography (CT) slices are affected by patient position during imaging. This study used 3D CT to determine whether contemporary THA restores native hip geometry. Methods: Fourteen patients with unilateral THA underwent CT scan for 3D hip reconstruction. Hip models of the nonoperated side were mirrored with the implanted side to quantify the differences in hip geometry between sides. Results: The study demonstrated that combined hip anteversion (sum of acetabular and femoral anteversion) and vertical hip offset significantly increased by 25.3° ± 29.3° (range, -25.7° to 55.9°, p = 0.003) and 4.1 ± 4.7 mm (range, -7.1 to 9.8 mm, p = 0.009) in THAs. Conclusions: These data suggest that hip anatomy is not fully restored following THA compared with the contralateral native hip. abstract_id: PUBMED:22655221 Total hip arthroplasty in developmental dysplasia of the hip: Review of anatomy, techniques and outcomes. Total hip arthroplasty (THA) in developmental dysplasia of the hip (DDH) presents many challenges to the reconstructive surgeon. The complex femoral and acetabular anatomy makes standard reconstruction technically challenging. Acetabular coverage can be improved by medialization of the component or augmentation of the deficient areas with bone graft. Femoral shortening osteotomies are considered in cases of severe dysplasia and frankly dislocated hips. Each patient's unique anatomy dictates what options of reconstruction are available. The functional outcomes of THA in DDH are generally excellent, though higher rates of mechanical failure have been reported in this group. 
This article reviews the anatomy, classification, technical considerations, and outcomes of THA in patients with DDH. abstract_id: PUBMED:25906764 Does haptic robot-assisted total hip arthroplasty better restore native acetabular and femoral anatomy? Background: The objective was to evaluate whether total hip arthroplasty (THA) using haptic robot assistance restores hip geometry better than the free-hand technique. Methods: Twelve robot-assisted and 14 free-hand unilateral THA patients underwent CT scan for three-dimensional (3D) hip models. The anteversion, inclination and hip joint centre locations of the native and implanted hips in each patient were quantified and compared. Results: Significant increase of combined anteversion by 19.1 ± 11.7° and 23.5 ± 23.6° and decrease of cup inclination by 16.5 ± 6.0° and 10.2 ± 6.8° were observed in the robot-assisted and the free-hand THAs, respectively. Less variation in the difference of the component orientations (max 11.1 vs 18.3°) and the femoral head centre (max 4.5 vs 6.3 mm) were found in the robot-assisted group. Conclusion: This study demonstrated that neither robot-assisted nor free-hand THAs had fully restored native hip geometry. However, the higher precision of the robot-assisted THA suggested that it has potential utility in restoring the native hip geometry. Copyright © 2015 John Wiley & Sons, Ltd. abstract_id: PUBMED:25207309 Clinical outcomes of total hip arthroplasty in patients with ankylosed hip. Background: Hip ankylosis includes the limitation of hip motion and hip arthroplasty is the recommended treatment. This study aimed to evaluate the clinical and radiographic outcomes in the treatment of ankylosis of the hip joint by the Harris Hip Score (HHS). Methods: This interventional study was performed on patients with ankylosis in one or both hip joints, who were referred to Sina Hospital, Tehran, Iran from 2011-13. 
Electromyogram and nerve conduction studies were taken from the hip abductor muscles before surgery and HHS was calculated. Type of surgery and prosthesis, osteotomy required for the neck and trochanteric region of the femur, periprosthetic fracture and the need to restore the acetabulum were studied in the patients. Postoperative complications such as infection and dislocation at 3, 6 and 12 months after surgery were examined and then the HHS was calculated. Results: Six patients were excluded after six months and two patients were excluded after 12 months due to surgery complications. The mean HHS of patients before surgery was 48.53±6.28 and it progressed to 88.22±3.78 after 12 months (P<0.001). Conclusions: Total hip arthroplasty for patients with ankylosed hip can improve the range of joint motion, especially in the long-term follow-up. However, good results should be considered in the absence of pre- and post-operative complications. abstract_id: PUBMED:24736014 Ceramic-on-ceramic total hip arthroplasty: is squeaking related to an inaccurate three-dimensional hip anatomy reconstruction? Many factors were incriminated in the squeaking generation in ceramic-on-ceramic total hip arthroplasty (THA), including the cup positioning and design. However, the influence of the stem orientation has not been investigated and the true three-dimensional hip anatomy has never been compared to the contralateral healthy hip. Three patients, who underwent unilateral ceramic-on-ceramic THA, complained of squeaking. CT-scans were performed to compare the true three-dimensional hip anatomy to the contralateral healthy hip. All patients presented evidence of posterior neck-rim impingement with a two-fold increase in the global anteversion (above 75°) compared to the healthy hip. The excess of anteversion was on the cup side in 2 cases and on the stem side in 1 case. 
We conclude that squeaking in ceramic-on-ceramic THA could be related to a poor accuracy of 3D hip anatomy reconstruction which generated a posterior impingement and subsequent anterior edge loading because of excessive global anteversion. abstract_id: PUBMED:29523442 Investigation of the Unstable Total Hip Arthroplasty. The investigation of recurrent instability after total hip arthroplasty requires a thorough history and physical examination focused on characterizing the dislocation events and the patient factors that contribute to instability. Radiographic studies include plain radiographs, in particular a cross-table lateral, and computed tomography to accurately assess the implant orientation relative to the patient's native anatomy. More advanced imaging such as magnetic resonance imaging may be useful to assess abductor tendon integrity. A classification system, based on etiology of recurrent instability, is available and helpful to guide treatment strategy. Finally, emerging research has debunked the previous characterization of an acetabular "safe zone" and further identified lumbar spine disease and surgical fusion as a significant risk factor for recurrent instability. Current and future research efforts target radiographic assessment of lumbopelvic alignment in diagnosis, treatment, and prevention of recurrent total hip arthroplasty instability. abstract_id: PUBMED:34095402 Bilateral Total Hip Arthroplasty in the Setting of Developmental Dysplasia of the Hip and Extreme Hip Flexion Requirements due to Phocomelia. Phocomelia is a rare congenital birth defect marked by hypoplastic or markedly absent limbs. Developmental dysplasia of the hip (DDH) is a congenital disorder with a failure of the native acetabulum to provide complete coverage over the femoral head. The secondary osteoarthritis that develops from DDH is technically challenging for orthopedic surgeons because of distorted anatomy. 
The present case describes the diagnosis of Crowe 3 DDH in a phocomelia patient with hyperflexion requirements who successfully underwent staged bilateral total hip arthroplasty via a direct anterior approach. It highlights the utility of preoperative computerized tomography and intraoperative computer navigation to assist in implant placement. Recognizing difficult arthroplasty cases in advance is imperative as these cases may require great expertise and more extensive surgical planning. abstract_id: PUBMED:24257672 Comparison of native anatomy with recommended safe component orientation in total hip arthroplasty for primary osteoarthritis. Background: The adverse consequences of impingement, dislocation, and implant wear have stimulated increasing interest in accurate component orientation in total hip arthroplasty and hip resurfacing. The aims of the present study were to define femoral and acetabular orientation in a cohort of patients with primary hip osteoarthritis and to determine whether the orientation of their native hip joints corresponded with established recommendations for implantation of prosthetic components. Methods: We retrospectively evaluated a consecutive series of 131 preoperative computed tomography (CT) scans of patients with primary end-stage hip osteoarthritis (fifty-seven male and seventy-four female patients; mean age, sixty years). Patients were positioned according to a standardized protocol. Accounting for pelvic tilt, three-dimensional acetabular orientation was determined in the anatomical reference frame. Moreover, three-dimensional femoral version was measured. Differences in native anatomy between male and female patients were assessed with use of nonparametric tests. Native anatomy was evaluated with reference to the "safe zone" as described by Lewinnek et al. and to a "safe" combined anteversion of 20° to 40°. Results: In the entire cohort, the mean femoral anteversion was 13° and the mean acetabular anteversion was 19°. 
No significant differences in femoral, acetabular, or combined (femoral and acetabular) anteversion were observed between male and female patients. The mean acetabular inclination was 62°. There was no significant difference in acetabular inclination between female and male patients. We did not observe a correlation among acetabular inclination, acetabular anteversion, and femoral anteversion. Ninety-five percent (125) of the native acetabula were classified as being within the safe anteversion zone, whereas only 15% (nineteen) were classified as being within the safe inclination zone. Combined anteversion was within the safe limits in 63% (eighty-three) of the patients. However, only 8% (ten) of the cases in the present cohort met the criteria of both "safe zone" definitions (that of Lewinnek et al. and combined anteversion). Conclusions: Acetabular anteversion of the osteoarthritic hip as defined by the native acetabular rim typically matches the recommended component "targets" for cup insertion. There was no specific relationship among native acetabular inclination, acetabular anteversion, and femoral anteversion. Neither native acetabular inclination nor native combined anteversion appears to be related to current implant insertion targets. Clinical Relevance: The present findings of native acetabular and femoral orientation in patients with primary hip osteoarthritis support intraoperative component positioning for total hip arthroplasty. abstract_id: PUBMED:33192013 Total hip arthroplasty in acetabular fractures. Total Hip Arthroplasty (THA) is a well-accepted treatment for established hip arthritis following acetabular fractures. If a conservatively managed or operated case progresses to non-union/mal-union failing to restore the joint integrity, it may eventually develop secondary arthritis warranting a total hip arthroplasty. 
Also, in recent years, acute total hip arthroplasty is gaining importance in conditions where the fracture presents with pre-existing hip arthritis, is not amenable to salvage by open reduction and internal fixation, or a poor prognosis is anticipated following fixation. There are several surgical challenges in performing total hip arthroplasty for acetabular fractures, whether acute or delayed. As a separate entity, elderly patients pose a distinct challenge due to osteoporosis and need stable fixation for early weight bearing, alleviating the risk of thromboembolic events, pulmonary complications, and decubitus ulcers. The aim of surgery is to restore the columns for acetabular component implantation rather than anatomic fixation. Meticulous preoperative planning with radiographs and Computed Tomography (CT) scans, adequate exposure to delineate the fracture pattern, and availability of an array of all instruments and possible implants as backup are the key points for success. Previous implants, if any, should be removed only if they are in the way of cup implantation or infected. A press-fit, uncemented, modern porous metal acetabular component with multiple screw options is the preferred implant for the majority of cases. However, complex fractures may require major reconstruction with revision THA implants, especially when a pelvic discontinuity is present. abstract_id: PUBMED:36096271 Reduction in Offset Is Associated With Worse Functional Outcomes Following Total Hip Arthroplasty. Background: Conflicting reports exist about the effect of offset variation on functional outcomes following total hip arthroplasty. Reproducing native hip offset is thought to optimize function by restoring biomechanics and appropriately tensioning the hip abductor muscles. The aim of this study is to assess the effect of failing to restore global hip offset in comparison to the native contralateral hip.
Methods: A retrospective analysis of a prospective patient cohort was performed on patients undergoing an elective primary total hip arthroplasty. A total of 414 patients who had a minimum of 12 months of follow-up were included. Postoperative plain radiographs were analyzed for offset and compared to the contralateral native hip. Western Ontario and McMaster Universities Arthritis Index (WOMAC) and Veterans RAND 12 (VR-12) scores were assessed preoperatively and at 12 months postoperatively. Results: Regression analyses indicated that a reduction in offset of >20 mm resulted in worse WOMAC pain (P = .005) and motion (P = .015) scores compared to those with maintained offset. WOMAC function (P = .063), global (P = .025), and VR-12 scores were not affected (physical P = .656; mental P = .815). Reduction in offset up to 20 mm and increased offset were not significantly associated with patient-reported outcome measures (P-values ranged from .102 to .995). Conclusion: This study demonstrated an association between reduction in offset by >20 mm and worse WOMAC pain and motion scores following total hip arthroplasty. Surgeons should avoid decreases in offset >20 mm in order to optimize functional outcomes. Answer: Total hip arthroplasty (THA) does not fully restore native hip anatomy. Studies using three-dimensional computed tomography (3D CT) have demonstrated that THA can result in significant changes in hip geometry compared to the contralateral native hip. For instance, combined hip anteversion and vertical hip offset were found to significantly increase following THA (PUBMED:24966079). Additionally, neither robot-assisted nor free-hand THAs have been shown to fully restore native hip geometry, although robot-assisted THA has shown higher precision and potential utility in restoring native hip geometry (PUBMED:25906764). 
Furthermore, in cases of developmental dysplasia of the hip (DDH), THA presents additional challenges due to complex femoral and acetabular anatomy, which makes standard reconstruction technically challenging (PUBMED:22655221). In patients with ankylosed hips, THA can improve the range of joint motion, but the restoration of anatomy must be considered in the absence of pre- and post-operative complications (PUBMED:25207309). Moreover, inaccuracies in three-dimensional hip anatomy reconstruction have been associated with complications such as squeaking in ceramic-on-ceramic THA, which could be related to posterior impingement and subsequent anterior edge loading due to excessive global anteversion (PUBMED:24736014). The investigation of recurrent instability after THA also requires accurate assessment of implant orientation relative to the patient's native anatomy (PUBMED:29523442). In the setting of DDH and extreme hip flexion requirements, preoperative computerized tomography and intraoperative computer navigation have been highlighted as useful tools to assist in implant placement (PUBMED:34095402). A study comparing native anatomy with recommended safe component orientation in THA for primary osteoarthritis found that native acetabular anteversion typically matches recommended component targets for cup insertion, but there was no specific relationship among native acetabular inclination, acetabular anteversion, and femoral anteversion (PUBMED:24257672). Finally, a reduction in offset greater than 20 mm has been associated with worse functional outcomes following THA, indicating that surgeons should avoid such decreases to optimize functional outcomes (PUBMED:36096271). In summary, while THA aims to restore hip function, it does not always fully restore native hip anatomy, and variations from native geometry can have functional consequences.
Instruction: Does prophylactic endoscopic sphincterotomy prevent recurrent biliary problems in patients with gallstones and a normal cholangiogram? Abstracts: abstract_id: PUBMED:12647907 Does prophylactic endoscopic sphincterotomy prevent recurrent biliary problems in patients with gallstones and a normal cholangiogram? Background: Endoscopic sphincterotomy (ES) is indicated in patients with confirmed bile duct stones at endoscopic retrograde cholangiopancreatography (ERCP). The role of ES in patients with suspected bile duct stones but a normal cholangiogram, in the prevention of recurrent biliary symptoms, when cholecystectomy is not planned, is unclear. Aim: To determine if prophylactic ES prevents further biliary problems in such patients. Methods: Patients were identified with gallbladder stones presenting with jaundice, abnormal liver function tests (LFTs) or dilated bile ducts on ultrasound, in whom cholecystectomy was not planned and who had a normal cholangiogram at ERCP. Patients were followed-up to determine the frequency of recurrent biliary problems or repeat investigations. Results: Forty-one patients were included, of whom 20 had an ES. The frequency of pre-ERCP features did not differ between the two groups. Median follow-up was 32 months (range 15-66). Post-ERCP recurrent abdominal pain (5 vs 3; p=0.39), jaundice (3 vs 1; p=0.28), pancreatitis (0 vs 1; p=0.32), and repeat ultrasound (2 vs 1; p=0.52), ERCP (1 vs 1; p=0.97) or cholecystectomy (2 vs 3, p=0.82) did not differ between the two groups. Conclusions: Patients with gallstones, suspected common bile duct (CBD) stones and a normal cholangiogram need not have a prophylactic sphincterotomy since there is no reduction in recurrent biliary problems and this potentially increases the morbidity. abstract_id: PUBMED:8128327 Does endoscopic sphincterotomy prevent recurrent biliary pancreatitis? 
Endoscopic sphincterotomy was electively performed as the definitive procedure on 8 high-risk patients with biliary pancreatitis; the gallbladder was left in situ. After a mean follow-up of 21 months, 2 patients had biliary symptoms for which 1 required cholecystectomy and exploration of the common bile duct. No patient developed recurrent pancreatitis during the follow-up period. When the risk of elective surgery is high, endoscopic sphincterotomy appears to protect the patient against recurrent episodes of pancreatitis, and is the initial procedure of choice for the high-risk patient with biliary pancreatitis. abstract_id: PUBMED:10202063 Does cholecystectomy after endoscopic sphincterotomy prevent the recurrence of biliary complications? Background: The intact gallbladder after endoscopic sphincterotomy is thought to be a potential risk factor for recurrent biliary complications. The aim of this non-randomized prospective study was to investigate whether cholecystectomy soon after endoscopic sphincterotomy could prevent the recurrence of biliary complications. Methods: From January 1991 to October 1995, 140 patients with intact gallbladder underwent endoscopic sphincterotomy for clearance of stones in the bile duct. Of the 140 patients, 46 underwent elective cholecystectomy soon after sphincterotomy (group A) and 94 did not (group B). All 140 patients had quantitative cholescintigraphy after normalization of liver function and were followed on a regular basis with liver biochemistry, sonography, and/or computed tomography. Endoscopic retrograde cholangiography was also performed if a recurrent biliary problem was suspected. Results: After a median 43 months (range 23 to 80) of follow-up, 5 patients in group A developed bile duct stones whereas 12 patients in group B had recurrent stones; 4 patients in group A versus 6 patients in group B had recurrent biliary symptoms. 
One patient in group A and 5 patients in group B with recurrent biliary stones were without symptoms. In group B, the age, gender, diameter of the bile duct, preexisting cholelithiasis, abnormal filling of the gallbladder on quantitative cholescintigraphy, and presence of juxtapapillary diverticulum were not found to be the significant factors affecting the recurrence of biliary symptoms or stones. Endoscopic removal of recurrent biliary stones was successful in all patients. Three patients in group B underwent cholecystectomy after abatement of symptoms. Conclusion: Elective cholecystectomy after endoscopic sphincterotomy does not reduce the incidence of recurrent biliary complications. abstract_id: PUBMED:9835327 Biliary ascariasis complicating endoscopic sphincterotomy for choledocholithiasis in India. Endoscopic sphincterotomy is the treatment of choice for patients with choledocholithiasis. Biliary ascariasis has been reported from many parts of the world but is common in Kashmir, India. We report five cases of biliary ascariasis of which four were the result of post-endoscopic sphincterotomy for choledocholithiasis. Therefore, biliary ascariasis is not an uncommon complication of endoscopic sphincterotomy. abstract_id: PUBMED:32590541 Role of Prophylactic Cholecystectomy After Endoscopic Sphincterotomy for Biliary Stone Disease: A Systematic Review and Meta-analysis. Objective: The aim of this study was to perform a structured systematic review and meta-analysis to evaluate the effectiveness and complication rate of cholecystectomy deferral versus prophylactic cholecystectomy among patients post-endoscopic biliary sphincterotomy for common bile duct stones. Background: Although previous reports suggest a decreased risk of biliary complications with prophylactic cholecystectomy, biliary endoscopic cholangiopancreatography (ERCP) with sphincterotomy may provide a role for deferring cholecystectomy with the gallbladder left in situ. 
Methods: Searches of PubMed, EMBASE, Web of Science, and Cochrane Library databases were performed through August 2019 in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses and Meta-Analysis of Observational Studies in Epidemiology guidelines. Measured outcomes included: mortality, recurrent biliary pain or cholecystitis, pancreatitis, cholangitis, and eventual need for cholecystectomy. Random effects models were used to determine pooled effect size and corresponding 95% confidence intervals (CIs). Results: Nine studies (n = 1605) were included. A total of 53.8% (n = 864) of patients had deferred cholecystectomy post-sphincterotomy. Deferral of cholecystectomy, as compared to prophylactic cholecystectomy, resulted in a significantly increased risk of mortality [odds ratio (OR) 2.56 (95% confidence interval, CI 1.54-4.23); P < 0.0001; I2 = 18.49]. Patients who did not undergo prophylactic cholecystectomy developed more recurrent biliary pain or cholecystitis [OR 5.10 (95% CI 3.39-7.67); P < 0.0001; I2 = 0.00]. Rates of pancreatitis [OR 3.11 (95% CI 0.99-9.83); P = 0.053; I2 = 0.00] and cholangitis [OR 1.49 (95% CI 0.74-2.98); P = 0.264; I2 = 0.00] were unaffected. Overall, 26.00% (95% CI 14.00-40.00) of patients with deferred prophylactic cholecystectomy required eventual cholecystectomy. Conclusions: Prophylactic cholecystectomy remains the preferred strategy compared to a deferral approach with gallbladder in situ post-sphincterotomy for patients with bile duct stones. Future studies may highlight a subset of patients (ie, those with large balloon biliary dilation) that may not require cholecystectomy. abstract_id: PUBMED:7988821 Endoscopic sphincterotomy for biliary pancreatitis: an alternative to cholecystectomy in high-risk patients. Recurrent biliary pancreatitis frequently is associated with an intact gallbladder containing stones.
This condition has been effectively treated by removing the gallbladder, but there is evidence that endoscopic sphincterotomy might obviate the need for cholecystectomy in some patients. We performed prophylactic sphincterotomy in 49 patients who presented with biliary pancreatitis more than once and who were considered at risk for surgery. The majority (39 patients) were treated electively after resolution of pancreatitis, while the remainder (10 patients) were treated urgently during their index admission because of continuing symptoms. No patient experienced recurrent pancreatitis over a mean follow-up period of 48 months. No mortality occurred in this endoscopic series, and no significant morbidity was experienced. Based on our results, we advocate performing sphincterotomy in the aged patient or younger patients considered at high risk for surgery who present with a history of recurrent pancreatitis and cholelithiasis. abstract_id: PUBMED:26542765 Endoscopic sphincterotomy and cholecystectomy in acute biliary pancreatitis. Background: This review discusses current insights with regard to biliary tract management during and after acute biliary pancreatitis. Methods: A MEDLINE and EMBASE search was done and studies were selected based on methodological quality and publication date. The recommendations of recent guidelines are incorporated in this review. In absence of consensus in the literature, expert opinion is expressed. Results: There is no role for early endoscopic retrograde cholangiopancreatography (ERCP) in patients with (predicted) mild biliary pancreatitis to improve outcome. In case of persisting choledocholithiasis, ERCP with stone extraction is scheduled electively when the acute event has subsided. Whether early ERCP with sphincterotomy is beneficial in patients with predicted severe pancreatitis remains subject to debate. Regardless of disease severity, in case of concomitant cholangitis urgent endoscopic sphincterotomy (ES) is recommended. 
As a definitive treatment to reduce the risk of recurrent biliary events in the long term, ES is inferior to cholecystectomy and should be reserved for patients considered unfit for surgery. After severe biliary pancreatitis, cholecystectomy should be postponed until all signs of inflammation have subsided. In patients with mild pancreatitis, cholecystectomy during the primary admission reduces the risk of recurrent biliary complications. Conclusion: Recent research has provided valuable data to guide biliary tract management in the setting of acute biliary pancreatitis with great value and benefit for patients and clinicians. Some important clinical dilemmas remain, but it is anticipated that on-going clinical trials will deliver some important insights and additional guidance soon. abstract_id: PUBMED:31523328 Prophylactic Biliary Stenting Before Cholecystectomy in Patients With Gallstones and Common Bile Duct Stones. Background: The usefulness of prophylactic biliary stenting for patients with common bile duct stones (CBDS) and gallstones (GS) to prevent recurrent biliary events after endoscopic sphincterotomy (EST) and CBDS extraction before elective cholecystectomy remains controversial. The aim of this study was to evaluate the risk of recurrent CBDS around the perioperative period and clarify its risk factors. Methods: The clinical data of all patients who received prophylactic biliary stenting after EST for CBDS and later underwent cholecystectomy for GS followed by stent extraction in our institution were retrospectively reviewed. The numbers of residual CBDS at the end first and second endoscopic retrograde cholangiography (ERC) studies were compared. Univariate and multivariate analyses were performed using a logistic regression model to determine risk factors for recurrent CBDS in the perioperative period. Results: Forty-two consecutive patients received prophylactic biliary stenting and subsequent cholecystectomy for GS. 
Three of these patients were excluded from this study because the number of residual stones was not confirmed. The median maximum CBDS diameter at second ERC was 0 mm (range, 0 - 10 mm); six patients had multiple CBDS (≥ 5). The number of CBDS at second ERC was increased in comparison to that at the first ERC in 15 patients (38.4%), and was unchanged or decreased in 24 patients. The median minimum cystic duct diameter was 4 mm (range, 1 - 8 mm). The median interval between first ERC and operation was 26 days (range, 2 - 131 days). The median interval between operation and second ERC was 41 days (range, 26 - 96 days). Laparoscopic cholecystectomy (LC) was performed in 38 patients, one of whom was converted from LC to open cholecystectomy. Postoperative complications (transient bacteremia) occurred in one patient. The cystic duct diameter was an independent risk factor for an increased number of CBDS at second ERC in the multivariate analysis (odds ratio 0.611; 95% confidence interval, 0.398 - 0.939; P = 0.03). Conclusion: Recurrent CBDS around the perioperative period of cholecystectomy is not a rare complication after EST and the removal of CBDS with concomitant GS. Prophylactic biliary stenting is considered useful for preventing CBDS-associated complications, especially for patients in whom the cystic duct diameter is larger (≥ 5 mm).
Absence of sphincter of Oddi (SO) pressure after EST was determined by means of biliary manometry, and 99mTc-diisopropyl iminodiacetic acid (DISIDA) cholescintigraphy was performed four to six weeks after recovery from sphincterotomy. Results: Complete clearance of the radioisotopes from the bile duct within 1, and more than 1 hour, was observed in 3 (6%) and 47 (94%) of the patients, respectively. When a cutoff value of 2 hours indicating abnormal biliary motility was employed, 31 out of the 50 patients (62%) had delayed biliary emptying. In comparing the clinical parameters between the two groups, patients with delayed biliary emptying time demonstrated an increase in CBD size greater than those of the normals (11.7 +/- 0.5 vs 9.2 +/- 0.4 mm, p < 0.05), while the presence of juxtapapillary diverticula and length of time for recurrence of CBD stones after cholecystectomy were not significant. Follow-up study showed that repeated recurrence rate of CBD stones appeared to be higher in the patients associated with delayed (12.9%) rather than normal (5.3%) biliary emptying. Conclusions: These results suggested that, in the absence of a functioning SO, abnormal biliary motility in cholecystectomized patients is one of the clinical factors involved in the pathogenesis of recurrent CBD stones. abstract_id: PUBMED:24070665 The role of endoscopic biliary sphincterotomy for the treatment of type 1 biliary dysfunction (papillary stenosis) with or without biliary stones. Background: This study assesses the safety and effectiveness of endoscopic biliary sphincterotomy (ES) in the treatment of papillary stenosis (PS) with and without biliary stones. Methods: The records of all patients who had endoscopic retrograde cholangiopancreatography (2,689 patients) from January 1, 1991, to August 1, 2010, were reviewed. There were 117 patients with PS who had ES. 
Results: All patients had biliary pain, a dilated common bile duct (CBD) with a maximum diameter of 10 to 25 mm, and elevated liver function tests. There were 46 patients who had prior cholecystectomy of whom 20 patients had CBD stones. The remaining 71 patients had no prior biliary surgery; there were no biliary stones in 14 patients. All patients were symptom free after ES with or without CBD stone retrieval. Conclusions: ES is the optimal treatment for PS in patients with or without biliary stones. ES eliminates pain, corrects CBD dilation, and restores LFTs to normal. Answer: Prophylactic endoscopic sphincterotomy (ES) does not appear to prevent recurrent biliary problems in patients with gallstones and a normal cholangiogram. A study that followed 41 patients, of whom 20 had an ES, found no significant difference in the frequency of recurrent abdominal pain, jaundice, pancreatitis, or the need for repeat ultrasound, ERCP, or cholecystectomy between the group that had an ES and the group that did not. The median follow-up was 32 months. The conclusion was that patients with gallstones, suspected common bile duct (CBD) stones, and a normal cholangiogram do not benefit from a prophylactic sphincterotomy, as it does not reduce recurrent biliary problems and may potentially increase morbidity (PUBMED:12647907). This finding is consistent with the results of other studies that suggest that while ES may be beneficial in certain high-risk patients with biliary pancreatitis (PUBMED:8128327, PUBMED:7988821), it is not effective in preventing recurrent biliary complications when compared to cholecystectomy (PUBMED:10202063). Moreover, a systematic review and meta-analysis indicated that deferring cholecystectomy post-sphincterotomy for patients with bile duct stones resulted in a significant increased risk of mortality and recurrent biliary pain or cholecystitis, suggesting that prophylactic cholecystectomy is the preferred strategy (PUBMED:32590541). 
In summary, prophylactic ES does not prevent recurrent biliary problems in patients with gallstones and a normal cholangiogram, and cholecystectomy remains the preferred approach for preventing recurrent biliary complications in these patients.
Instruction: Posterior tibial nerve stimulation: is the once-a-week protocol the best option? Abstracts: abstract_id: PUBMED:26140224 Topography of human ankle joint: focused on posterior tibial artery and tibial nerve. Most foot pain occurs due to entrapment of the tibial nerve and its branches. Some studies have reported the location of the tibial nerve; however, textbooks and research studies have not described the posterior tibial artery and the relationship between the tibial nerve and the posterior tibial artery in detail. The purpose of this study was to analyze the location of neurovascular structures and bifurcations of the nerve and artery in the ankle region based on anatomical landmarks. Ninety feet of embalmed human cadavers were examined. All measurements were evaluated based on a reference line. Neurovascular structures were classified based on the relationship between the tibial nerve and the posterior tibial artery. The bifurcations of the arteries and nerves were expressed by X- and Y-coordinates. Based on the reference line, 9 measurements were examined. In the most common pattern, type I (55.6%), the posterior tibial artery was located medial to the tibial nerve. Neurovascular structures were located less than 50% of the distance between M and C from M at the reference line. The bifurcation of the posterior tibial artery was 41% in X-coordinate, -38% in Y-coordinate, and that of the tibial nerve was 48%, and -10%, respectively. Thirteen measurements and classification showed statistically significant differences between both sexes (P<0.05). This study determined the average position of neurovascular structures in the human ankle region and recorded the differences between the sexes and amongst the populations. These results would be helpful for the diagnosis and treatment of foot pain.
However, there have been few reports on schwannoma of the posterior tibial nerve causing tarsal tunnel syndrome. We report on a 60-year-old man with tarsal tunnel syndrome caused by a schwannoma of the posterior tibial nerve, which was first diagnosed as a ganglion cyst. After enucleation of this tumor, the patient was asymptomatic and had no related sequelae except for minor numbness in the plantar aspects of his digits. Although schwannoma of the posterior tibial nerve is rare, it should be considered even if a ganglion is clinically suspected. abstract_id: PUBMED:15951736 Posterior tibial nerve stimulation: is the once-a-week protocol the best option? Aim: Aim of our study was to compare the results of posterior tibial nerve stimulation (PTNS) performed weekly with those of PTNS performed 3 times per week in patients with overactive bladder syndrome. Methods: Thirty-five patients (28 females, 7 males) with overactive bladder syndrome not responding to antimuscarinic therapy were enrolled in a prospective study. A total of 17 out of 35 patients were randomly assigned to group A and treated with a PTNS protocol based on weekly stimulation sessions; 18 out of 35 patients were randomly assigned to group B and treated with a PTNS protocol based on stimulation sessions performed 3 times per week. All subjects were evaluated by means of 24 h bladder diaries, quality of life questionnaires (I-QoL, SF36) and urodynamic evaluation before and after treatment. Patients were asked after each stimulation session to give their opinion on the efficacy of the treatment. We have considered ''success'' those patients who presented a reduction >50% of the micturition episodes/24 h (ME/24) or (if incontinent) of the incontinence episodes/24 h (IE/24). Results before and after treatments in both groups were collected and statistically compared. 
Results: As a whole, 11/17 patients (63%) in group A and 12/18 patients (67%) in group B were considered ''success''; 4/11 (36%) incontinent patients in group A and 5/11 (45%) incontinent patients in group B were completely cured after treatment. In both groups, patients reported subjective improvement after 6-8 stimulation sessions. Conclusions: Our findings seem to show that the periodicity of stimulation does not affect the results of PTNS treatment. The advantage of more frequent stimulation sessions is to achieve clinical improvement earlier. abstract_id: PUBMED:37409002 Comparison of Relaxometry Between Ulnar Nerve and Posterior Tibial Nerve After Cisatracurium Administration Using Electromyography. Background: Electromyography can be used for quantitative neuromuscular monitoring during general anesthesia, mostly using the stimulation train-of-four (TOF) pattern. Relaxometry measures the muscular response of the adductor pollicis muscle to electrical stimulation of the ulnar nerve, which is routinely used in clinical practice for monitoring the neuromuscular block. However, when the ulnar nerve cannot be used, the posterior tibial nerve is a suitable alternative. Objectives: Using electromyography, we compared the neuromuscular block between the ulnar and the posterior tibial nerves. Methods: In this study, the participants were 110 patients who met inclusion criteria and submitted their written consent. Following the administration of cisatracurium intravenously, the patients had relaxometry performed simultaneously on the ulnar and the posterior tibial nerves using electromyography. Results: Eighty-seven patients were included in the final analysis. The onset time was 296 ± 99 s at the ulnar nerve and 346 ± 146 s at the tibial nerve, with a mean difference of -50 s and a standard deviation of 164 s. The 95% limits of agreement ranged from -372 s to 272 s.
The relaxation time was 105 ± 26 min at the ulnar nerve and 87 ± 25 min at the tibial nerve, with a mean difference of 18 min and a standard deviation of 20 min. Conclusions: Using electromyography, no statistically significant difference was noticed between the ulnar and the posterior tibial nerve during the neuromuscular block. The onset time and the relaxation time assessed with an electromyogram to compare the stimulation of the ulnar and posterior tibial nerves showed large limits of agreement. abstract_id: PUBMED:36527146 An anatomical study of the tibial nerve branches innervating the posterior tibial artery. The arteries of the lower limbs are innervated by vascular branches (VBs) originating from the lumbar sympathetic trunk and branches of the spinal nerve. Although lumbar sympathectomy is used to treat nonreconstructive critical lower limb ischemia (CLLI), it has limited long-term effects. In addition, the anatomical structure of tibial nerve (TN) VBs remain incompletely understood. This study aimed to clarify their anatomy and better inform the surgical approach for nonreconstructive CLLI. Thirty-six adult cadavers were dissected under surgical microscopy to observe the patterns and origin points of VBs under direct vision. The calves were anatomically divided into five equal segments, and the number of VB origin points found in each was expressed as a proportion of the total found in the whole calf. Immunofluorescence staining was used to identify the sympathetic nerve fibers of the VBs. Our results showed that the TN gave off 3-4 VBs to innervate the posterior tibial artery (PTA), and the distances between VBs origin points and the medial tibial condyle were: 24.7 ± 16.3 mm, 91.7 ± 66.1 mm, 199.6 ± 52.0 mm, 231.7 ± 38.5 mm, respectively. They were mainly located in the first (40.46%) and fourth (31.68%) calf segments, and immunofluorescence staining showed that they contained tyrosine hydroxylase-positive sympathetic nerve fibers. 
These findings indicate that the TN gives off VBs to innervate the PTA and that these contain sympathetic nerve fibers. Therefore, these VBs may need to be cut to surgically treat nonreconstructable CLLI. abstract_id: PUBMED:31853597 Posterior tibial nerve stimulation for overactive bladder-techniques and efficacy. The ideal treatment for overactive bladder is still elusive. In those where medication fails to improve symptoms options include invasive treatments such as botulinum toxin-A, sacral neural stimulation or posterior tibial nerve stimulation. Scientific professional society guidelines advise percutaneous posterior tibial nerve stimulation as a third line treatment option only after multi-disciplinary team review as well as failure of both conservative and pharmacological management. The aim of this article is to review all techniques for tibial nerve stimulation and their efficacy. abstract_id: PUBMED:29177688 Trifurcation of the tibial nerve within the tarsal tunnel. The tibial nerve is the larger terminal branch of the sciatic nerve and it terminates in the tarsal tunnel by giving lateral and medial plantar nerves. We present a rare case of trifurcation of the tibial nerve within the tarsal tunnel. The variant nerve curves laterally after branching from the tibial nerve and courses deep to quadratus plantae muscle. Interestingly, posterior tibial artery was also terminating by giving three branches. These branches were accompanying the terminal branches of the tibial nerve. abstract_id: PUBMED:6308815 Perforating plantar ulcer. Interfascicular neurolysis of the posterior tibial nerve The authors report the case of a 45-year-old man with perforant ulceration of the foot. The rapid evolution of the cutaneous and bone lesions (three years) necessitated a bilateral intermetacarpo-phalangeal amputation. After conventional therapy, a large necrotic zone remained on the sole of the left foot as well as hyperkeratosis of the right foot. 
Following Bourrel's protocol for the treatment of leprosy, the authors proposed a posterior tibial nerve neurography, which confirmed the compression of the nerve in the tarsal tunnel. In addition, histological examination of the nerve branches of the sole of the foot showed fibrous thickening of the nerve, endoneural invasion by fibroblasts, and loss of myelin. An interfascicular neurolysis of the posterior tibial nerve at the level of the tarsal tunnel, together with a periarterial sympathectomy on both sides, led to rapid recovery. At the same operation, a skin graft was used to cover the skin defect with excellent results in ten days. Ten months later, healing remained complete. abstract_id: PUBMED:30579281 The anatomic location and importance of the tibialis posterior fascicular bundle at the sciatic nerve bifurcation: report of 3 cases. The authors present the cases of 3 patients with severe injuries affecting the peroneal nerve combined with loss of tibialis posterior function (inversion) despite preservation of other tibial nerve function. Loss of tibialis posterior function is problematic, since transfer of the tibialis posterior tendon is arguably the best reconstructive option for foot drop, when available. Analysis of preoperative imaging studies, correlated with operative findings, showed that the injuries, while predominantly to the common peroneal nerve, also affected the lateral portion of the tibial nerve/division near the sciatic nerve bifurcation. Sunderland's fascicular topographic maps demonstrate the localization of the fascicular bundle subserving the tibialis posterior to the area that corresponds to the injury. This has clinical significance in predicting injury patterns and potentially for treatment of these injuries. The lateral fibers of the tibial division/nerve may be vulnerable with long stretch injuries.
Given the importance of tibialis posterior function, it may be advisable to perform internal neurolysis of the tibial division/nerve in order to facilitate nerve action potential testing of these fascicles, ultimately performing split nerve graft repair when nerve action potentials are absent in this important portion of the tibial nerve. abstract_id: PUBMED:32981656 Importance of some technical aspects of the procedure of percutaneous posterior tibial nerve stimulation in patients with fecal incontinence. Introduction: The results of percutaneous posterior tibial nerve stimulation (PTNS) in the treatment of fecal incontinence (FI) are modest. The aim of the study is to assess the relationship of some technical aspects with the clinical response: location of the nerve, distal response (motor or sensory), and accommodation. Methods: Prospective study of patients with FI undergoing PTNS therapy. The clinical response was assessed using the Wexner scale, defecation diary, and anorectal manometry. Results: 32 patients were studied. The intensity of localization (proximity to the nerve) was not correlated with clinical or manometric changes. Motor response was associated with a decrease on the Wexner scale [12.12 (± 5.39) to 7.71 (± 4.57) p < 0.005], the number of episodes of passive incontinence [8.78 (± 9.64) to 4.11 (± 7.11) p = 0.025], the total number of incontinence episodes [16.11 (± 16.03) to 7.78 (± 11.34) p = 0.009], and the number of days with fecal soiling [6.89 (± 5.53) to 2.56 (± 4.13) p = 0.002], and with an increase in the length of the manometric anal canal at rest [4.55 (± 0.596) to 4.95 (± 0.213) p = 0.004]. The increase in stimulation (accommodation) was inversely correlated with the decrease in the Wexner scale (r = -0.677 p < 0.005) and the number of days with soiling (r = -0.650 p = 0.022). Conclusions: The motor response during PTNS seems to be related to a better clinical response.
The accommodation phenomenon could be associated with worse results. The proximity of the electrode to the nerve does not seem to be important as long as a good distal response is achieved. Answer: The study by PUBMED:15951736 investigated the efficacy of posterior tibial nerve stimulation (PTNS) performed weekly compared to PTNS performed three times per week in patients with overactive bladder syndrome. The results indicated that the frequency of stimulation did not significantly affect the outcomes of PTNS treatment. Both groups, those receiving weekly stimulation (group A) and those receiving stimulation three times per week (group B), showed similar success rates, with 63% in group A and 67% in group B considered successful based on a reduction in micturition or incontinence episodes. However, the advantage of more frequent stimulation sessions was that patients achieved clinical improvement earlier. Therefore, while a once-weekly PTNS protocol is effective, more frequent sessions may lead to quicker improvement, although the overall success rate may not differ significantly.
Instruction: Do child's psychosocial functioning, and parent and family characteristics predict early alcohol use? Abstracts: abstract_id: PUBMED:24935960 Do child's psychosocial functioning, and parent and family characteristics predict early alcohol use? The TRAILS Study. Background: Given the negative consequences of early alcohol use for health and social functioning, it is essential to detect children at risk of early drinking. The aim of this study is to determine predictors of early alcohol use that can easily be detected in Preventive Child Healthcare (PCH). Methods: We obtained data from the first two waves on 1261 Dutch adolescents who participated in TRAILS (TRacking Adolescents' Individual Lives Survey) at ages 10-14 years and from the PCH records regarding ages 4-10 years. Early adolescence alcohol use (age 10-14 years) was defined as alcohol use at least once at ages 10-12 years (wave 1) and at least once in the previous 4 weeks at ages 12-14 years (wave 2). Predictors of early alcohol use concerned parent and teacher reports at wave 1 and PCH registrations, regarding the child's psychosocial functioning, and parental and socio-demographic characteristics. Results: A total of 17.2% of the adolescents reported early alcohol use. Predictors of early alcohol use were teacher-reported aggressive behaviour [odds ratio (OR) 1.86, 95% confidence interval (CI) 1.11-3.11], being a boy (OR 1.80, 95% CI 1.31-2.56), being a non-immigrant (OR 2.31, 95% CI 1.05-5.09), and low and middle educational level of the father (OR 1.71, 95% CI 1.12-2.62 and OR 1.77, 95% CI 1.16-2.70, respectively), mutually adjusted. Conclusion: A limited set of factors was predictive for early alcohol use. Use of this set may improve the detection of early adolescence alcohol use in PCH. abstract_id: PUBMED:27131271 Psychosocial functioning in Dutch foster children: The relationship with child, family, and placement characteristics.
Foster care is the preferred alternative for out-of-home care, but not necessarily beneficial for foster children's psychosocial functioning. This dilemma leaves researchers with a challenge to find out more about the factors related to foster children's social and emotional functioning. In a sample of 446 Dutch foster children we examined the extent to which three clusters of characteristics, those akin to the foster child, the foster family, or foster placement, were related to foster children's functioning at the time of research. Multivariate three-step hierarchical regression analyses were performed for three outcome variables: externalizing problems, internalizing problems, and prosocial behavior. We found that all three clusters of foster care characteristics were significantly related to foster children's functioning. Foster placement characteristics, in particular interventions aimed at foster children, explained the largest amount of variance in behavior problems. Children receiving interventions had more externalizing and internalizing problems. A possible explanation is that interventions are indicated for those foster children who are in the highest need of additional support. Prosocial behavior was particularly related to foster family characteristics. The results were mostly in line with international research. Careful screening and monitoring of the social and emotional functioning of foster children may help to identify problems at an early stage. In addition (preventive) support should be offered to those foster children and families who are in need of it. abstract_id: PUBMED:29719348 Analysis of Family Functioning and Parent-Child Relationship between Adolescents with Depression and their Parents. Background: Drug therapy combined with family therapy is currently the best treatment for adolescent depression. 
Nevertheless, family therapy requires an exploration of unresolved problems in the family system, which in practice presents certain difficulties. Previous studies have found that the perceptual differences of family functioning between parents and children reflect the problems in the family system. Aims: To explore the characteristics and role of family functioning and the parent-child relationship between adolescents with depressive disorder and their parents. Methods: The general information and clinical data of the 93 adolescents with depression were collected. The Family Functioning Assessment Scale and Parent-child Relationship Scale were used to assess adolescents with depressive disorder and their parents. Results: a) The dimensions of family functioning in adolescents with depressive disorder were more negative in communication, emotional response, emotional involvement, roles, and overall functioning than those of their parents. The differences were statistically significant. For the parent-child relationship dimensions, the closeness and total parent-child scores were more negative than the parents' ratings, and the differences were statistically significant. b) All dimensions of the parent-child relationship and family functioning in adolescents with depression, except the time spent together, were negatively correlated or significantly negatively correlated. c) Multivariate regression analysis showed that the characteristics of family functioning, emotional involvement, emotional response, family structure, and income of the adolescents with depressive disorder mainly affected the parent-child relationship. Conclusions: There were perceptual differences in some dimensions of family functioning and the parent-child relationship between adolescents with depressive disorder and their parents.
Unclear roles between family members, mutual entanglement, too much or too little emotional investment, neglect of inner feelings, parental divorce, and low average monthly family income were the main factors causing an adverse parent-child relationship. These perceptual differences have a relatively good predictive effect on family problems, and can be used as an important guide for exploring the family relationship in family therapy. abstract_id: PUBMED:34388440 Family Functioning Assessment and Child Psychosocial Symptoms in Family Medicine. Purpose: Children and young adults underutilize behavioral health services, in part due to the challenges with identifying and providing services for child behavioral health in primary care. The objective of this study was to determine if a brief assessment of family functioning captures specific child psychosocial symptoms in a Family Medicine practice. Design And Methods: Eighty-three parent-child dyads, in which the parent or child was a patient at the Family Medicine practice, participated in a cross-sectional study, including assessments of demographics, family functioning, child behavioral health symptoms, and health-related pediatric quality of life (HRQOL). Bivariate correlations, independent samples t-tests, and linear and logistic regression tested associations of parent- and child-reported family functioning with child behavioral health symptoms and HRQOL. Results: Parent and child reports of family functioning were significantly associated. Child, but not parent, reports of family functioning were significantly associated with parent and child reports of behavioral health symptoms. Parents' reports of increased family functioning impairment were only significantly associated with parents' reports of decreased HRQOL. Family functioning impairment was associated with parent and child reports of increased behavioral health symptoms and decreased HRQOL.
Conclusion: Future work should determine if screening for family functioning impairment may serve as a means of identifying and treating child behavioral health symptoms in Family Medicine. Practice Implications: Identifying impaired family functioning may serve to engage children and their parents in services, who may otherwise not be identified as having symptoms. abstract_id: PUBMED:34456539 Child Hostility toward a Parent with a History of Depression and Family Functioning. Family dysfunction has been associated with both child externalizing problems, including hostility, and parent depression or depressive symptoms. Research investigating child hostility directed toward a parent with a history of depression is absent, yet it may be associated with especially high levels of family dysfunction. The current study aimed to assess (1) the relation between observed child hostility, measured by the Iowa Family Interaction Rating Scale, toward such a parent and child-reported family dysfunction, using the Family Assessment Device, and (2) whether current parent depressive symptoms, measured by the Beck Depression Inventory-II, moderated this association. We hypothesized that child hostility would negatively relate to family functioning, even after controlling for parent depressive symptoms, and that parent depressive symptoms would moderate this association in that high levels of such symptoms would strengthen the negative relation between child hostility and family functioning. To address these hypotheses, hierarchical regression and moderation analyses were conducted in SPSS. Results indicated that higher levels of child hostility related to a more dysfunctional family environment. Furthermore, although speculative as the interaction of child hostility toward a parent and parent depressive symptoms only approached conventional levels of significance, low levels of both constructs may protect against family dysfunction.
Findings from this study may inform new methods of family intervention and prevention, as well as ways of identifying families most at risk for dysfunction. abstract_id: PUBMED:24963072 Health-related quality of life and psychosocial functioning in children with Tourette syndrome: parent-child agreement and comparison to healthy norms. This study aimed to evaluate the degree of agreement between parent proxy- and child self-report on measures of child psychosocial functioning and health-related quality of life in children with Tourette syndrome. Participants included 28 children with Tourette syndrome and their parents. All participants provided ratings of children's level of quality of life and psychosocial functioning. Results revealed strong, positive relationships between child self- and parent proxy-reports on all quality of life and psychosocial functioning domains. Parents perceived significantly higher levels of depression compared to their children, whereas children reported significantly lower Physical quality of life compared to their parents. Results suggest that assessment of quality of life and psychosocial functioning should include multiple reporters whenever feasible. Caution should be used when exclusively relying on parent proxy-reports of quality of life and psychosocial functioning, as these reports may not accurately reflect children's difficulties or perceptions of their functioning. abstract_id: PUBMED:34542778 Financial and Work Burden, Psychosocial Functioning, and Family Interactions During the COVID-19 Pandemic in Indonesia: Effects on Child Outcomes. The COVID-19 pandemic has profoundly affected all aspects of family life worldwide. This study aimed to examine the effects of several family factors on child outcomes during the pandemic in Indonesia, a country with approximately 260 million people. 
A range of child maladjustment and child competency variables were examined, along with a set of associated variables, including family income, psychosocial functioning, and family interactions, which were modelled via Structural Equation Modelling to understand the interrelationships between variables associated with child adjustment. Using an online survey, a total of 354 parents with at least one typically developing child between the age of 2 and 10 years participated in this study. Results showed that psychosocial functioning and family interactions appeared as successive mediators between family income, financial and work burden, and child adjustment. This study highlights some key concerns to support children's and families' wellbeing during the pandemic in Indonesia. abstract_id: PUBMED:34314993 Psychosocial functioning in integrated treatment of co-occurring posttraumatic stress disorder and alcohol use disorder. Co-occurring posttraumatic stress disorder and alcohol use disorder (PTSD/AUD) is associated with poorer psychosocial functioning than either disorder alone; however, it is unclear if psychosocial functioning improves in treatment for PTSD/AUD. This study examined if psychosocial functioning improved in integrated treatments for PTSD/AUD, and if changes in PTSD severity and percentage heavy drinking days (PHDD) during treatment were associated with functioning outcomes. 119 veterans with PTSD/AUD randomized to receive either Concurrent Treatment of PTSD and Substance Use Disorders using Prolonged Exposure or Seeking Safety completed measures of functioning (Medical Outcomes Survey SF-36), PTSD (Clinician Administered PTSD Scale for DSM-5), and alcohol use (Timeline Follow-Back) at baseline, posttreatment, 3- and 6-month follow-ups. Our findings suggest that psychosocial functioning improved to a statistically significant degree with no significant differences between conditions. 
Reductions in PTSD severity during treatment were associated with psychosocial functioning improvements, whereas reductions in PHDD were associated with improvement in role impairment at posttreatment. Although psychosocial functioning improves to a statistically significant degree in interventions designed to treat PTSD/AUD, these improvements do not represent clinically meaningful improvements in patients' abilities to navigate important roles. Findings underscore the need to study how to best treat psychosocial functioning impairment in PTSD/AUD. abstract_id: PUBMED:21513676 Family relationship quality and early alcohol use: evidence for gender-specific risk processes. Objective: Family characteristics (relationship quality, parental behaviors, and attitudes relating to alcohol use) are known to influence alcohol use in the mid-teen years, and there is evidence that family characteristics have different influences on mid-teen girls versus boys. This study examined child gender differences in the association of family relationship quality, parental disapproval of children's alcohol use, and parental alcohol use with early adolescent alcohol use. Method: Grade 6 and 8 students (modal age 11 and 13, respectively; N = 6,837; 52.6% female) were recruited from 231 schools across three Australian states. Hypotheses were tested using two-level ordinal logistic regression (individuals nested within schools). The main dependent measure was lifetime frequency of early adolescent alcohol consumption. Independent variables included mother's/father's alcohol use, closeness, conflict, and disapproval of adolescent alcohol use. Control variables included sensation seeking, peer alcohol use, and socioeconomic disadvantage. Results: The key findings were that for the young age group (Grade 6), emotional closeness to the parent of the opposite sex was protective. Family conflict was associated with females' drinking in both age groups but not males' drinking. 
Conclusions: There was evidence of gender differences in the epidemiology of family relationship quality and early alcohol use. Social developmental models may need revision to account for these child gender differences. Gender-specific family dynamics may be an important consideration for family-oriented prevention strategy. abstract_id: PUBMED:26762375 Effectiveness of Parent-Child Mediation in Improving Family Functioning and Reducing Adolescent Problem Behavior: Results from a Pilot Randomized Controlled Trial. Parent-child mediation programs are intended to resolve or manage disputes and improve family functioning, but rigorous evaluations of their effectiveness are lacking. Families referred to a community-based mediation program (N = 111) were randomized to an intervention or wait-list control group, and completed three surveys over a 12-week period. With the exception of parent-reported child delinquency (which decreased more in the intervention group), this evaluation provides little support for the short-term effectiveness of parent-child mediation for improving family functioning and reducing child problem behaviors in general. Given that this is the first randomized controlled trial of a parent-child mediation program, additional evaluations involving larger samples and longer follow-ups are needed before firm conclusions can be drawn about the effectiveness of this intervention. Answer: Yes, a child's psychosocial functioning, as well as parent and family characteristics, can predict early alcohol use. The TRAILS Study found that early alcohol use among adolescents was predicted by a limited set of factors including teacher-reported aggressive behavior, being a boy, being a non-immigrant, and having a father with a low or middle educational level (PUBMED:24935960). These findings suggest that certain aspects of a child's behavior and family background are associated with the likelihood of early alcohol consumption.
Instruction: Can intrasac pressure monitoring reliably predict failure of endovascular aneurysm repair? Abstracts: abstract_id: PUBMED:12932164 Can intrasac pressure monitoring reliably predict failure of endovascular aneurysm repair? Purpose: To determine if pressure measured at a single location within aneurysm sac thrombus accurately reflects the force applied to the aneurysm wall and the risk of rupture by examining (1) if pressure is distributed uniformly within aneurysm thrombus, (2) the pressure transmission through aneurysm thrombus, and (3) the microstructural basis for pressure transmission. Methods: Pressure within aneurysm thrombus was measured by direct puncture through the aneurysm wall at 121 sites in 26 patients during open abdominal aortic aneurysm repair. Measurements were taken prior to cross-clamping and compared with intrasac pressure measured at 30 sites in 6 patients without aneurysm thrombus (controls). Transmission of pressure through aneurysm thrombus was further examined ex vivo by subjecting fresh thrombus to a pressure gradient in a custom-made pressure cell. Pressure transmission was correlated with matrix density as determined by light microscopy and image analysis. Results: Mean pressure within aneurysm thrombus was higher than mean systemic pressure in 11 patients, lower in 1, and identical in 9. In 5 patients, the pressure was greater than systemic in some areas of the thrombus but less in others. Sac pressure was identical to systemic pressure at all sites in the controls. In 12 thrombus specimens (6 patients) examined in the pressure cell, pressure transmission varied significantly between specimens, correlating directly with matrix density (R² = 0.747, p = 0.001). Conclusions: Pressure transmission through aneurysm thrombus is variable and depends upon the microstructure of the thrombus. Pressure measured at a single location may not, therefore, accurately reflect the pressure acting on the aneurysm wall.
abstract_id: PUBMED:21276500 Are intrasac pressure measurements useful after endovascular repair of abdominal aortic aneurysms? Few would argue with the need for long-term follow-up after endovascular repair of abdominal aortic aneurysms. A small risk of reintervention persists and the challenge remains to identify those patients that will require additional procedures to prevent subsequent complications. The ideal follow-up regimen remains elusive. Up until this point, most regimens have consisted of radiologic imaging, with either computed tomography (CT) scans or ultrasonography to identify continued aneurysm perfusion (endoleaks) and document sac dynamics, either shrinkage, growth, or stability. However, aneurysm sac growth or shrinkage serves only as a surrogate measurement for pressurization, and although it is uniformly believed that attachment site endoleaks require treatment, it remains controversial as to how to determine which type II endoleaks pressurize an aneurysm sufficiently to require therapy. In response to these difficulties, several manufacturers have developed pressure sensors that can be implanted at the time of the initial repair. They have been shown capable of measuring intrasac pressures that have appropriately responded to reinterventions for endoleaks. However, are they the answer we are looking for? Are they ready for widespread use? Do they offer a reliable and consistent measure of intrasac pressure that can be trusted to determine the need, or lack of need, for further therapy? Our debaters will try to convince us one way or another. abstract_id: PUBMED:37067390 Significance of perioperative intrasac pressure in sac shrinkage after endovascular abdominal aneurysm repair. Background: Endovascular aneurysm repair (EVAR) results in a marked reduction of intrasac pressure, which is the likely cause of aneurysm sac shrinkage. 
We evaluated the change of intrasac pressure during operation and its association with aneurysm sac shrinkage one year after EVAR. Methods: This study included 113 patients undergoing EVAR using the Gore C3 Excluder at our university hospital between March 2016 and December 2020. A direct intrasac pressure was measured before and after stent deployment. The Pressure Index (PI) was defined as the ratio of intrasac pressure to systemic blood pressure. Results: Patients were divided into two groups: patients with aneurysm sac shrinkage (N.=33, 29%) and those without (N.=80, 71%). Systolic and diastolic PI after stent graft deployment were significantly higher in patients with aneurysm sac shrinkage than those without, respectively (systolic PI, 64.6±13.9% vs. 58.1±12.0%, P=0.014; diastolic PI, 103.8±24.7% vs. 96.4±12.9%, P=0.039). Multivariable analysis revealed post-deployment systolic PI as an independent risk factor predictive of aneurysm sac shrinkage (odds ratio 1.04; 95% CI 1.01-1.08; P=0.016). Conclusions: Although systolic intrasac pressure, described as systolic PI, was an independent risk factor for aneurysm sac shrinkage, contrary to our expectation, it was significantly higher in patients with aneurysm sac shrinkage than those without. This seemingly contradictory result may be explained by the properties of an aneurysm sac, which influence the change of intrasac pressure. abstract_id: PUBMED:21186901 Intrasac pressure changes and vascular remodeling after endovascular repair of abdominal aortic aneurysms: review and biomechanical model simulation. In this paper, we review existing clinical research data on post-endovascular repair (EVAR) intrasac pressure and relation with abdominal aortic aneurysm (AAA) size changes. Based on the review, we hypothesize that intrasac pressure has a significant impact on post-EVAR AAA size changes, and post-EVAR remodeling depends also on how the pressure has changed over a period of time. 
The previously developed model of an AAA based on a constrained mixture approach is extended to include vascular adaptation after EVAR using an idealized geometry. Computational simulation shows that the same mechanism of collagen stress-mediated remodeling in AAA expansion induces the aneurysm wall to shrink in a reduced sac-pressure after post-EVAR. Computational simulation suggests that the intrasac pressure of 60 mm Hg is a critical value. At this value, the AAA remains stable, while values above cause the AAA to expand and values below cause the AAA to shrink. There are, however, variations between individuals due to different cellular sensitivities in stress-mediated adaptation. Computer simulation also indicates that an initial decrease in intrasac pressure helps the AAA shrink even if the pressure increases after some time. The presented study suggests that biomechanics has a major effect on initial adaptation after EVAR and also illustrates the utility of a computational model of vascular growth and remodeling in predicting diameter changes during the progression and after the treatment of AAAs. abstract_id: PUBMED:15337865 First experience in human beings with a permanently implantable intrasac pressure transducer for monitoring endovascular repair of abdominal aortic aneurysms. Objectives: Endovascular stent graft repair of abdominal aortic aneurysms (AAAs) prevents rupture by excluding the aneurysm sac from systemic arterial pressure. Current surveillance protocols after endovascular aneurysm repair (EVAR) follow secondary markers of sac pressurization, namely, endoleak and sac enlargement. We report the first clinical experience with the use of a permanently implantable, ultrasound-activated remote pressure transducer to measure intrasac pressure after EVAR. 
Methods: Over 7 months, 14 patients underwent EVAR of an infrarenal abdominal aortic aneurysm with implantation of an ultrasound-activated remote pressure transducer fixed to the outside of the stent graft and exposed to the excluded aortic sac. Twelve patients received modular bifurcated stent grafts, and 2 patients received aortouniiliac devices. Intrasac pressures were measured directly with an intravascular catheter and by the remote sensor at stent-graft deployment. Follow-up sac pressures were measured with a remote sensor and correlated with systemic arterial pressure at every follow-up visit. Mean follow-up was 2.6 ± 1.9 months. Results: Excellent concordance was found between catheter-derived and transducer-derived intrasac pressure intraoperatively. Pulsatile waveforms were seen in all functioning transducers at each evaluation interval. One implant ceased to function at 2 months of follow-up. In 1 patient a type I endoleak was diagnosed on 1-month computed tomography (CT) scans; 3 type II endoleaks were observed. Those patients with complete exclusion of the aneurysm on CT scans had a significant difference in systemic and sac systolic pressures initially (P <.001) and at 1 month (P <.001). Initial sac diastolic pressures were higher than systemic diastolic pressures (P <.001). The ratio of systemic to sac systolic pressure increased over time in those patients with complete aneurysm exclusion (P <.001). Four of 6 patients with no endoleak and greater than 1-month follow-up had diminution of sac systolic pressure to 40 mm Hg or less by 3 months. Conclusion: This is the first report of a totally implantable chronic pressure transducer to monitor the results of EVAR in human beings. Aneurysm exclusion leads to gradual diminution of sac pressure over several months. Additional clinical follow-up will be necessary to determine whether aneurysm sac pressure monitoring can replace CT in the long-term surveillance of patients after EVAR.
abstract_id: PUBMED:18297208 Correlation between intrasac pressure measurements of a pressure sensor and an angiographic catheter during endovascular repair of abdominal aortic aneurysm. Purpose: To establish a correlation between intrasac pressure measurements of a pressure sensor and an angiographic catheter placed in the same aneurysm sac before and after its exclusion by an endoprosthesis. Methods: Patients who underwent endovascular abdominal aortic aneurysm repair and received an EndoSure wireless pressure sensor implant between March 19 and December 11, 2004 were enrolled in the study. Simultaneous readings of systolic, diastolic, mean, and pulse pressure within the aneurysm sac were obtained from the catheter and the sensor, both before and after sac exclusion by the endoprosthesis (Readings 1 and 2, respectively). Intrasac pressure measurements were compared using Pearson's correlation and Student's t test. Statistical significance was set at p<0.05. Results: Twenty-five patients had the pressure sensor implanted, with simultaneous readings (i.e., recorded by both devices) obtained in 19 patients for Reading 1 and in 10 patients for Reading 2. There was a statistically significant correlation for all pressure variables during both readings, with p<0.01 for all except the pulse pressure in Reading 1 (p<0.05). Statistical significance of pressure variations before and after abdominal aortic aneurysm exclusion was coincident between the sensor and catheter for diastolic (p>0.05), mean (p>0.05), and pulse (p<0.01) pressures; the sole disagreement was observed for systolic pressure, which varied, on average, 31.23 mmHg by the catheter (p<0.05) and 22 mmHg (p>0.05) by the sensor. Conclusion: The excellent agreement between intrasac pressure readings recorded by the catheter and the sensor justifies use of the latter for detection of post-exclusion abdominal aortic aneurysm pressurization. 
abstract_id: PUBMED:15350558 Intrasac pressure waveforms after endovascular aneurysm repair (EVAR) are a reliable marker of type I endoleaks, but not type II or combined types: an experimental study. Purpose: To ascertain the nature of the pressure wave transmitted through aneurysm thrombus and the changes produced after endovascular repair and the development of type I and II endoleaks. Methods: A 25 mm Talent endovascular graft was deployed in a latex model of an abdominal aortic aneurysm, which was incorporated in a pulsatile flow unit. The graft was surrounded by thrombus analogue to simulate conditions in vivo. Pressure waveforms in the sac were captured over 5s at 1000 Hz in these settings: (i) no endoleaks (baseline), after introduction of (ii) type I (iii) type II and (iv) combined type I and II endoleaks. The arterial blood pressure settings used were 140/100 and 130/90 mmHg, denoted the high and low settings, respectively. ANOVA in Minitab 13 was applied for statistical analysis. Results: Pulsatile waveforms were transmitted through the thrombus. Intrasac pressure after stent-grafting reduced to 110/107, 99/96 mmHg (p<0.001) (high, low settings, respectively). Introduction of a type I endoleak caused this to rise to 120/112, 115/107 mmHg (p<0.001, vs. baseline); after producing a type II endoleak these were 101/98, 91/88 mmHg (p<0.001, vs. baseline). A combined type I and II endoleak produced intrasac pressures identical to that of a type I endoleak. Conclusions: Intrasac pressure waveforms following EVAR are easily defined following a type I endoleak. Waveforms obtained following type II endoleak simulation resemble the baseline waveform in an attenuated form. Intrasac pressures are, therefore, a reliable marker for type I, but not a type II endoleak. In the case of a combined endoleak, the type I endoleak waveform effectively masks that of the type II. Intrasac thrombus faithfully transmits intrasac pressures. 
abstract_id: PUBMED:22402935 Aneurysm sac shrinkage after endovascular treatment of the aorta: beyond sac pressure and endoleaks. The isolation of the aneurysm sac from systemic pressure and its consequent shrinkage are considered criteria of success after endovascular repair (EVAR). However, the process of shrinkage does not solely depend on the intrasac pressure, the predictive role of which remains ambiguous. This brief review summarizes the additional pathophysiological mechanisms that regulate the biomechanical properties of the aneurysm wall and may interfere with the process of aneurysm sac shrinkage. abstract_id: PUBMED:35709862 Identifying high risk for proximal endograft failure after endovascular aneurysm repair in patients suitable for both open and endovascular elective aneurysm repair. Objective: Proximal endograft failure (type Ia endoleak or migration) after endovascular aneurysm repair (EVAR) is associated with hostile aneurysm neck morphology. Neck scoring systems were developed to predict proximal endograft failure but were studied in retrospective studies, which, due to selection bias, may have led to an overestimation of bad outcomes after EVAR. To predict patients who benefit from open repair, preoperative neck morphology and occurrence of long-term proximal endograft failure were investigated in patients enrolled in the endovascular arm of the Dutch Randomized Endovascular Aneurysm Management (DREAM) trial who were suitable for open repair by definition and have long-term follow-up. Methods: A post-hoc on-treatment analysis of patients after EVAR was performed in 171 patients. Aneurysm neck morphology was quantified using the aneurysm severity grading (ASG) neck score calculated on preoperative computed tomography angiography images. The ASG neck score was used to predict proximal endograft failure. 
Receiver operating characteristic analysis was performed to calculate a threshold to divide favorable and unfavorable aneurysm necks (low and high risk); positive and negative likelihood ratios were calculated accordingly. Freedom from proximal endograft failure was compared between groups using Kaplan-Meier analysis. Results: During a median follow-up of 7.6 years, 20 patients suffered proximal endograft failure. Receiver operating characteristic analysis showed an area under the curve of 0.77 (95% confidence interval [CI], 0.65-0.90; P < .001), indicating acceptable prediction. The threshold was determined at ASG neck score ≥5; 30 patients had unfavorable neck morphology, of whom 11 developed proximal endograft failure. The positive likelihood ratio was 4.4 (95% CI, 2.5-7.8), and the negative likelihood ratio was 0.51 (95% CI, 0.3-0.8). Twelve years postoperatively, freedom from proximal endograft failure was 91.7% in the favorable group and 53.2% in the unfavorable group, a difference of 38.5% (95% CI, 13.9-63.1; P < .001). Conclusions: In this study, the ASG neck score predicted proximal endograft failure during the entire follow-up. This highlights the persistent risk of proximal endograft failure long after EVAR and calls for ongoing surveillance, especially in patients with unfavorable aneurysm necks. abstract_id: PUBMED:22942038 Inductive coupling system for endovascular aneurysm repair monitoring. Endoleaks are one of the major concerns in the long-term follow-up of endovascular aneurysm repair (EVAR). Therefore, periodic monitoring is required to detect possible damage to an implanted stent-graft. A monitoring system for post-EVAR follow-up, based on inductive coupling, which avoids the need to resort to more complex biomedical imaging systems, is presented here. 
Endoleaks are detected with capacitive pressure sensors placed in the stent-graft and monitored externally by measuring the oscillation frequency of the LC circuit formed by the sensors and the inductive coupling. Answer: Intrasac pressure monitoring has been explored as a method to predict the success or failure of endovascular aneurysm repair (EVAR), but its reliability is subject to debate and appears to be influenced by various factors. One study found that pressure transmission through aneurysm thrombus is variable and depends on the microstructure of the thrombus, suggesting that pressure measured at a single location may not accurately reflect the pressure acting on the aneurysm wall (PUBMED:12932164). This variability could potentially limit the reliability of intrasac pressure monitoring as a predictor of EVAR failure. Another study highlighted the challenges in identifying patients who will require additional procedures to prevent complications after EVAR. While pressure sensors capable of measuring intrasac pressures have been developed, it remains controversial whether these sensors can provide a reliable and consistent measure of intrasac pressure that can be trusted to determine the need for further therapy (PUBMED:21276500). Further research indicated that while systolic intrasac pressure was an independent risk factor for aneurysm sac shrinkage, it was actually higher in patients with aneurysm sac shrinkage than those without, which is contrary to expectations. This suggests that the relationship between intrasac pressure and aneurysm behavior post-EVAR is complex (PUBMED:37067390). A review and biomechanical model simulation suggested that intrasac pressure has a significant impact on post-EVAR AAA size changes, and that vascular remodeling depends on how the pressure has changed over time. 
The model indicated that an intrasac pressure of 60 mm Hg is a critical value for stability, with higher pressures leading to expansion and lower pressures leading to shrinkage (PUBMED:21186901). Clinical experience with a permanently implantable intrasac pressure transducer showed that aneurysm exclusion leads to a gradual diminution of sac pressure over several months, but additional follow-up is necessary to determine whether pressure monitoring can replace CT in long-term surveillance (PUBMED:15337865). Another study established a significant correlation between intrasac pressure measurements from a pressure sensor and an angiographic catheter, supporting the use of the sensor for detecting post-exclusion pressurization (PUBMED:18297208).
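Several of the studies above compare intrasac with systemic systolic pressure, and that comparison is often expressed as a simple sac-to-systemic ratio. The sketch below illustrates the arithmetic using the bench-model pressures reported in the pulsatile flow study (systemic 140 mmHg; sac 110 mmHg after stent-grafting and 120 mmHg with a type I endoleak, PUBMED:15350558); the function name and framing are illustrative, not taken from any of the cited papers.

```python
def sac_pressure_index(sac_systolic_mmhg, systemic_systolic_mmhg):
    """Ratio of intrasac to systemic systolic pressure.

    Lower values indicate better exclusion of the sac from systemic
    pressure; a rise over follow-up can flag sac re-pressurization.
    """
    return sac_systolic_mmhg / systemic_systolic_mmhg

# Bench-model pressures from the pulsatile flow study (PUBMED:15350558)
excluded = sac_pressure_index(110, 140)     # after stent-grafting, ~0.79
type_1_leak = sac_pressure_index(120, 140)  # type I endoleak, ~0.86

# A type I endoleak re-pressurizes the sac, raising the index
assert type_1_leak > excluded
```

This is the same quantity, inverted, as the "ratio of systemic to sac systolic pressure" that the implantable-transducer study reported increasing over time in patients with complete aneurysm exclusion.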
Instruction: Is the musculocutaneous nerve really in the coracobrachialis muscle when performing an axillary block? Abstracts: abstract_id: PUBMED:20435937 Is the musculocutaneous nerve really in the coracobrachialis muscle when performing an axillary block? An ultrasound study. Background: In reference textbooks describing axillary block, the ulnar, radial, and median nerves are located in a common sheath surrounding the axillary artery. In contrast, the musculocutaneous nerve is described as lying outside this sheath in the coracobrachialis muscle. In a recent case report of ultrasound-guided axillary block, the musculocutaneous nerve was joined to the median nerve outside this muscle. Our study evaluated the prevalence of atypical musculocutaneous nerve localizations during axillary block. Methods: All patients undergoing ultrasound-guided axillary block were included from December 2006 to December 2008. Before needle insertion, musculocutaneous, median, ulnar, and radial nerves were localized using ultrasound. Nerve stimulation confirmed atypical nerve localization. After injection of local anesthetics, musculocutaneous and median nerve anatomical relationships were observed. Results: The musculocutaneous nerve was outside the coracobrachialis muscle in 83 of the 387 analyzed blocks (22%). It was near the axillary artery in 22 cases (6%). The musculocutaneous and median nerves appeared as a common neural structure in 61 cases (16%). After local anesthetic injection, a common trunk persisted in 16 of 61 cases (26%), musculocutaneous and median nerves separated in 37 cases (61%), and 2 roots of the median nerve appeared (with or without a separated musculocutaneous nerve) in 6 cases (10%). Two cases (3%) remained undefined. Ulnar nerve location of the 83 patients with atypical musculocutaneous nerve position differed from the ones with a classical musculocutaneous nerve localization. 
Conclusions: During axillary block, the musculocutaneous nerve is outside the coracobrachialis muscle in 1 of 5 patients. This atypical location should be considered during performance of axillary blockade to avoid repeated IM puncture. abstract_id: PUBMED:16032591 The musculocutaneous nerve: ultrasound appearance for peripheral nerve block. Background And Objectives: To gain complete anesthesia of the forearm, block of the musculocutaneous nerve is necessary. Variations in its course and position make localization of the musculocutaneous nerve problematic. The aim of the study is to describe the ultrasound appearance of the musculocutaneous nerve in the axilla and to suggest potential areas to target neural block. Methods: We scanned the axillary regions of 19 volunteers and assessed the size and shape of 34 musculocutaneous nerves at entry into, exit from, and in the center of the coracobrachialis muscle. Furthermore, we measured the depth of the musculocutaneous nerve under the skin surface and its distance from the axillary artery at those 3 measurement points. Results: As it travels through the coracobrachialis muscle, the musculocutaneous nerve changes in shape from oval to flat-oval to triangular. During this course, the musculocutaneous nerve also separates from the axillary artery and becomes more lateral while changing its depth from the surface. The musculocutaneous nerve increases its transverse area along this nerve path. In 2 subjects, the musculocutaneous nerve could not be visualized unilaterally within the course of the coracobrachialis muscle. Conclusions: Knowledge of its ultrasound appearance facilitates localization and successful block of the musculocutaneous nerve. Because the distance between the musculocutaneous nerve and brachial plexus varies, different locations of musculocutaneous nerve puncture during ultrasound-guided regional anesthesia can be chosen. 
abstract_id: PUBMED:37719489 A Bilateral Symmetric Accessory Coracobrachialis Muscle Combined With an Interconnection of the Musculocutaneous Nerve With the Median Nerve. This report describes a bilateral symmetric accessory coracobrachialis muscle variant coexisting with a unilateral interconnection of the musculocutaneous nerve and the median nerve. An 80-year-old female cadaver was dissected. The bilateral coracobrachialis muscle variant consisted of three heads: two superficial heads and one deep head. One superficial head arose from the tip of the coracoid process, while the other originated from the short head tendon of the biceps brachii. The deep head of the coracobrachialis muscle emerged from the base of the coracoid process. The musculocutaneous nerve bilaterally coursed between the superficial and deep heads. On the right side, the three-headed coracobrachialis muscle coexisted with an ipsilateral interconnection of the musculocutaneous nerve and the median nerve, located at the lower third of the arm. While the presence of a unilateral three-headed coracobrachialis muscle is not rare (with a prevalence range of 0-22.2%), as well as the distal interconnection between the musculocutaneous nerve and the median nerve at the lower third of the arm (with a prevalence range of 1.8-53.6%), the coexistence of the current bilateral three-headed coracobrachialis muscle variant with the distal interconnection of the musculocutaneous and median nerves is quite unusual. A similar report underscores the finding of the bilateral coracobrachialis muscle variant. abstract_id: PUBMED:26202484 A spiralling origin of the median nerve around an accessory coracobrachialis brevis muscle. 
During an ultrasound-guided axillary brachial plexus block, sonoanatomy demonstrated a delayed formation of the lateral root to the median nerve from the lateral cord, an accessory coracobrachialis brevis muscle, and a course of the lateral root to the median nerve 'spiralling' around the accessory muscle prior to locating lateral to the brachial artery. This rare variant is further discussed in the context of median, musculocutaneous and coracobrachialis variations. abstract_id: PUBMED:24297578 Ultrasound-guided anterior axilla musculocutaneous nerve block. Purpose: This paper presents a technique of ultrasound-guided localisation and block of the musculocutaneous nerve through the anterior wall of the axilla. Materials And Methods: Twenty patients (7 males and 13 females; mean age, 35 years) had axillary nerve block for upper extremity trauma. With the arm adducted, the ultrasound probe was positioned on the anterior axillary wall; the axillary artery, coracobrachialis and pectoralis major muscles and lateral cord of brachial plexus were visualised in cross section. With continuous imaging of the axillary artery in cross section, the ultrasound probe was slowly moved towards the biceps muscle until the musculocutaneous nerve appeared crossing the coracobrachialis muscle. After ultrasound localisation of the musculocutaneous nerve, the arm was abducted and externally rotated, and the nerve was identified with nerve stimulation and blocked. The quality of sensory and motor nerve block, as well as of ultrasound imaging were evaluated. Results: Ultrasound-guided block of the musculocutaneous nerve was excellent and complete in 18 of the 20 patients. In two patients, the musculocutaneous nerve was fused with the median nerve and the nerve block was repeated successfully with the same technique. The quality of ultrasound imaging was excellent in all patients. 
No patient experienced pain or tourniquet discomfort during surgery, or any other nerve block-related complication. Conclusion: The anterior axillary ultrasound view provides for complete nerve block and imaging of the entire course of the musculocutaneous nerve and its relations with adjacent structures with excellent quality. abstract_id: PUBMED:11552663 A study on the morphology of the coracobrachialis muscle and its relationship with the musculocutaneous nerve. Thirty-six arms from embalmed adult cadavers were utilised for this investigation. Coracobrachialis muscle was carefully examined to record variations in its attachments, morphology and its relationship with the musculocutaneous nerve. The results of the present work identified the presence of two heads of origin for the coracobrachialis muscle, which are situated superficial (anterior) and deep (posterior) to the musculocutaneous nerve. The superficial head arises mainly from most of the medial border of the tendon of the short head of the biceps brachii muscle. The deep head arises from the apex of the coracoid process of the scapula and the adjoining lateral border of the tendon of the short head of the biceps brachii muscle. The musculocutaneous nerve provides a separate branch for each head. In cases where the coracobrachialis muscle was not pierced by the musculocutaneous nerve, the muscle was formed of one head that has an origin analogous to that of the superficial head. One case showed a splitting of the deep head into two bellies shortly after its origin, where the muscle appeared as being formed of three heads. Variations in the insertion were present as an additional aponeurotic insertion above the usual insertion and an aponeurotic extension to the deep fascia on the medial aspect of the arm. Variations in the musculocutaneous nerve were in the form of lower origin from the lateral root of median nerve and a nerve with a short course after which it united with the median nerve. 
This investigation supplied evidence of the double heads of origin for the coracobrachialis muscle. The detected variability in insertion and association of the muscle with the musculocutaneous nerve further supports the idea that the coracobrachialis muscle is a complex muscle. abstract_id: PUBMED:37464221 Coracobrachialis muscle morphology and coexisted neural variants: a cadaveric case series. Purpose: The current cadaveric case series evaluates the coracobrachialis muscle morphology, the related musculocutaneous nerve origin, course, and branching pattern, as well as associated adjacent neuromuscular variants. Materials And Methods: Twenty-seven (24 paired and 3 unpaired) cadaveric arms were dissected to identify the coracobrachialis possible variants with emphasis on the musculocutaneous nerve course and coexisted neural variants. Results: Four morphological types of the coracobrachialis were identified: a two-headed muscle in 62.96% (17/27 arms), a three-headed in 22.2% (6/27), a one-headed in 11.1% (3/27), and a four-headed in 3.7% (1 arm). A coracobrachialis variant morphology was identified in 37.04% (10/27). A three-headed biceps brachii muscle coexisted in 23.53% (4/17). Two different courses of the musculocutaneous nerve were recorded: 1. a course between coracobrachialis superficial and deep heads (in cases of two or more heads) (100%, 24/24), and 2. a medial course in case of one-headed coracobrachialis (100%, 3/3). Three neural interconnections were found: 1. the lateral cord of the brachial plexus with the medial root of the median nerve in 18.52%, 2. the musculocutaneous with the median nerve in 7.41% and 3. the radial with the ulnar nerve in 3.71%. Duplication of the lateral root of the median nerve was identified in 11.1%. 
Conclusions: The knowledge of the morphology of the muscles of the anterior arm compartment, especially the coracobrachialis variant morphology and the related musculocutaneous nerve variable course, is of paramount importance for surgeons. Careful dissection and knowledge of relatively common variants play a significant role in reducing iatrogenic injury. abstract_id: PUBMED:15765462 Ultrasound-guided musculocutaneous nerve block: a description of a novel technique. Background And Objective: Localizing the musculocutaneous nerve for neural blockade is crucial to providing surgical anesthesia for the distal forearm. We present a novel approach for localizing and anesthetizing the musculocutaneous nerve. Case Reports: Ten patients underwent successful ultrasound-guided musculocutaneous nerve blocks. In this technique, either a 10-MHz or a 12-MHz linear probe was placed at the junction of the pectoralis major muscle and the biceps muscle such that the axillary artery was visualized in cross section. The probe was moved towards the biceps muscle until the musculocutaneous nerve was visualized lying between the coracobrachialis and biceps muscles. A 22-gauge, 50-mm b-bevel needle was inserted under direct vision until the needle was adjacent to the nerve. Local anesthetic was then injected, which generated surgical anesthetic conditions in all patients. Conclusion: Ultrasound can facilitate the localization and local anesthetic block of the musculocutaneous nerve. abstract_id: PUBMED:34608984 Six-headed coracobrachialis muscle. The coracobrachialis muscle is the smallest muscle of the anterior compartment of the arm. It is responsible for flexion and abduction in the glenohumeral joint. The coracobrachialis muscle is morphologically variable both in its insertion and origin. Moreover, some additional heads or structures may also occur. 
The present report describes a six-headed coracobrachialis muscle originating as a common junction with the short head of the biceps brachii muscle from the coracoid process. All of these heads insert into the medial surface of the humeral shaft. It is important to note that the musculocutaneous nerve pierced the fourth belly. Other heads were innervated by branches from the musculocutaneous nerve. Knowledge of the morphological variability of this muscle is essential not only for anatomists but for clinicians as well. abstract_id: PUBMED:27725357 Coracobrachialis muscle and the musculocutaneous nerve: a study using human embryonic sections. In comparative anatomy, the musculocutaneous nerve is hypothesized to pass between the superficial and deep muscle bellies of the coracobrachialis muscle. The superficial belly is supplied by nerve branches of the lateral cord of the brachial plexus, while the deep belly is supplied by the musculocutaneous nerve. Observations of longitudinal sections of ten human embryonic arms (7 weeks; crown-rump length 26-32 mm) demonstrated that the coracobrachialis muscle was always continuous with the short head of the biceps muscle. If the aforementioned hypothesis was applied, the deep belly behind the musculocutaneous nerve course was continuous with the biceps. However, such a close relation between the coracobrachialis and biceps was not known in supplying nerves in adults. A further study using embryos of some apes without the deep belly of the coracobrachialis would be necessary for the comparison between a pattern of the embryonic muscle division and the muscle classification in comparative anatomy. Answer: The musculocutaneous nerve is traditionally described as lying within the coracobrachialis muscle during an axillary block. However, studies have shown that there is variability in its location. According to an ultrasound study, the musculocutaneous nerve was found outside the coracobrachialis muscle in 22% of the 387 analyzed blocks. 
In some cases, it was near the axillary artery (6%) or appeared as a common neural structure with the median nerve (16%) (PUBMED:20435937). Another study on the ultrasound appearance of the musculocutaneous nerve in the axilla suggested that the nerve changes shape and becomes more lateral as it travels through the coracobrachialis muscle, indicating that its position can vary (PUBMED:16032591). Additionally, a cadaveric case series found that the musculocutaneous nerve can have different courses, such as between the superficial and deep heads of the coracobrachialis muscle or medially in the case of a one-headed coracobrachialis (PUBMED:37464221). These findings suggest that while the musculocutaneous nerve is often associated with the coracobrachialis muscle, its exact location can vary, and it may not always be within the muscle during an axillary block. Therefore, it is important for clinicians to be aware of these anatomical variations to avoid repeated intramuscular puncture and to ensure effective nerve blockade (PUBMED:20435937).
Instruction: Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? Abstracts: abstract_id: PUBMED:27113238 Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes. Objectives: To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Design: Retrospective observational study. Setting: A Norwegian 524-bed general hospital trust. Participants: 1920 medical records selected from 1 January to 31 December 2010. Primary Outcomes: Rate, type and severity of adverse events identified in two different sample sizes of records selected as 10 and 70 records, bi-weekly. Results: In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both the samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. Conclusions: The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude if the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. 
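The headline result in the abstract above is a comparison of adverse-event rates per 1000 patient-days between the two bi-weekly sample sizes. A minimal sketch of that arithmetic follows; the event counts and patient-day totals are illustrative values chosen to reproduce the reported rates, not the study's raw data.

```python
def ae_rate_per_1000_patient_days(adverse_events, patient_days):
    """Adverse events per 1000 patient-days, the rate the GTT reports."""
    return 1000 * adverse_events / patient_days

# Illustrative counts that reproduce the reported rates (39.3 vs 27.2)
large_sample_rate = ae_rate_per_1000_patient_days(393, 10_000)  # 70 records bi-weekly
small_sample_rate = ae_rate_per_1000_patient_days(272, 10_000)  # 10 records bi-weekly

# Crude rate ratio; the abstract reports 1.45 (95% CI 1.07 to 1.97)
rate_ratio = large_sample_rate / small_sample_rate  # ~1.44 with these rounded rates
```

The confidence interval around the ratio requires the underlying counts and follow-up time, which the abstract does not give, so only the point estimate is reproduced here.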
abstract_id: PUBMED:38271935 Detection of adverse events in patients admitted to a medical clinic using the Global Trigger Tool Introduction: The Global Trigger Tool (GTT) is a tool that accurately identifies adverse events that represent a significant problem in hospitals. Methods: Cross-sectional study based on retrospective review of randomized medical records using the GTT tool. Results: A total of 161 adverse events (AEs) were detected: 51 events per 100 admissions, 66 per 1000 patient-days, and 30% of admissions with AEs. The most frequent triggers were from the care module, with 25% complications associated with the use of procedures, 10% pressure ulcers, and 9% care-associated infections. The presence of AEs had a statistically significant association with a stay of more than 5 days, and a moderate association with age and number of triggers. Regarding the damage, 78% of the patients presented mild events and 4% fatal events. The ROC curve analysis showed that the triggers with the greatest area under the curve were: procedural complication (0.70), pressure ulcers (0.61) and rapid response code (0.60). Discussion: The number of events per 100 admissions was higher than that reported in the literature, but there were no differences in events per 1000 patient-days. Fatal cases were caused by respiratory infectious diseases in patients with comorbidities, nasogastric tube needs and cognitive decline. The study highlights the scarce use of the tool in public hospitals and the implementation of trigger analysis with ROC curves. Knowing the frequency and the most frequent type of event will allow the implementation of measures that improve patient safety. abstract_id: PUBMED:30295829 Is a modified Global Trigger Tool method using automatic trigger identification valid when measuring adverse events? Objectives: To evaluate a modified Global Trigger Tool (GTT) method with manual review of automatically triggered records to measure adverse events. 
Design: A cross-sectional study was performed using the original GTT method as the gold standard compared to a modified GTT method. Setting: Medium-size hospital trust in Northern Norway. Participants: One thousand two hundred thirty-three records selected between March and December 2013. Main Outcome Measure: Records with triggers, adverse events and number of adverse events identified. Recall (sensitivity), precision (positive predictive value), specificity and Cohen's kappa with 95% confidence interval were calculated. Results: Both methods identified 35 adverse events per 1000 patient days. The modified GTT method with manual review of 658 automatically triggered records identified adverse events (n = 214) in 189 records and the original GTT method identified adverse events (n = 216) in 186 records. One hundred and ten identical records were identified with adverse events by both methods. Recall, precision, specificity and reliability for records identified with adverse events were respectively 0.59, 0.58, 0.92 and 0.51 for the modified GTT method. The total manual review time in the modified GTT method was 23 h while the manual review time using the original GTT method was 411 h. Conclusions: The modified GTT method is as good as the original GTT method and complies with the GTT's aim of monitoring the rate of adverse events. Resources saved by using the modified GTT method make it possible to increase the sample size. The automatic trigger identification system may be developed to assess triggers in real time to mitigate the risk of adverse events. abstract_id: PUBMED:27265381 Compliance with the surgical safety checklist and surgical events detected by the Global Trigger Tool Introduction: The implementation of the WHO Surgical Safety Checklist (SSC) has helped to improve patient safety. The aim of this study was to assess the level of compliance with the SSC and to incorporate the non-compliances as «triggers» in the Global Trigger Tool (GTT). 
Material And Methods: Setting: Acute Geriatric Hospital (200 beds). Retrospective study, study period: 2011-2014. The SSC formulary and the methodology of the GTT were used for the analysis of electronic medical records and the compliance with the SSC. The NCC MERP categories were used to assess the severity of the harm. Results: Out of all the electronic medical records (EMR), a total of 227 (23.6%) discharged patients (1.7% of interventions in the four-year study period) were analysed. All (100%) of the EMR included the SSC, with 94.4% of the items being completed, and 28.2% of SSC had all items completed in the 3 phases of the process. Surgical adverse events decreased from 16.3% in 2011 to 9.4% in 2014 (P=.2838, not significant), and compliance with all items of the SSC increased from 18.6% to 39.1% (P=.0246, significant). Conclusions: The GTT systematises and evaluates, at low cost, the triggers and incidents/AEs found in the EMR in order to assess the compliance with the SSC and consider non-compliance with the SSC as «triggers» for further analysis. This strategy has never been referred to in the GTT or in the SSC formulary. abstract_id: PUBMED:36905428 Evaluation of Global trigger tool as a medication safety tool for adverse drug event detection-a cross-sectional study in a tertiary hospital. The objective of this study is to describe and analyze adverse drug events (ADE) identified using the Global trigger tool (GTT) in a Finnish tertiary hospital during a 5-year period and also to evaluate whether the medication module of the GTT is a useful tool for ADE detection and management or if modification of the medication module is needed. A cross-sectional study of retrospective record review in a 450-bed tertiary hospital in Finland. Ten randomly selected patients from electronic medical records were reviewed bimonthly from 2017 to 2021. 
The GTT team reviewed a total of 834 records with the modified GTT method, which includes the evaluation of possible polypharmacy, National Early Warning Score (NEWS), highest nursing intensity raw score (NI), and pain triggers. The data set contained 366 records with triggers in the medication module and 601 records with the polypharmacy trigger that were analyzed in this study. With the GTT, a total of 53 ADEs were detected in the 834 medical records, which corresponds to 13 ADEs/1000 patient-days and 6% of the patients. Altogether, 44% of the patients had at least one trigger found with the GTT medication module. As the number of medication module triggers increased per patient, it was more likely that the patient had also experienced an ADE. The number of triggers found with the GTT medication module in patients' records seems to correlate with the risk of ADEs. Modification of the GTT could provide even more reliable data for ADE prevention. abstract_id: PUBMED:37952101 Are adverse events related to the completeness of clinical records? Results from a retrospective records review using the Global Trigger Tool. The clinical record (CR) is a tool for recording details about the patient and the most commonly used source of information for detecting adverse events (AEs). Its completeness is an indicator of the quality of care provided and may provide clues for improving professional practice. The primary aim of this study was to estimate the prevalence of AEs. The secondary aims were to determine the completeness of CRs and to examine the relationship between the two variables. We retrospectively reviewed randomly selected CRs of patients discharged from the Academic Hospital of Udine (Italy) in the departments of general surgery, internal medicine, and obstetrics between July and September 2020. Evaluation was performed using the Global Trigger Tool and a checklist to evaluate the completeness of CRs. 
The relationship between the occurrence of AEs and the completeness of CRs was analyzed using nonparametric tests. A binomial logistic regression analysis was also performed. We reviewed 291 CRs and identified 368 triggers and 56 AEs. Overall, 16.2% of hospitalizations were affected by at least one AE, with a higher percentage in general surgery. The most common AEs were surgical injuries (42.6%; 24) and care-related events (26.8%; 15). A significant positive correlation was found between the length of hospital stay and the number of AEs. The average completeness of CRs was 72.9% and was lower in general surgery. A decrease in CR completeness correlated with an increase in the total number of AEs (R = -0.14; P = .017), although this was not confirmed by regression analysis within individual departments. Our results seem to suggest that completeness of CRs may benefit patient safety, so ongoing education and involvement of health professionals are needed to maintain professional adherence to CRs. abstract_id: PUBMED:32405540 Describing adverse events in Swiss hospitalized oncology patients using the Global Trigger Tool. Background And Aims: The occurrence rate of adverse events (AEs) related to care among hospitalized oncology patients in Switzerland remains unknown. The primary objective of this study was to describe, for the first time, the occurrence rate, type, severity of harm, and preventability of AEs related to care, reported in health records of hospitalized hematological and solid-tumor cancer patients in three Swiss hospitals. Methods: Using an adapted version of the validated Global Trigger Tool (GTT) from the Institute for Healthcare Improvement, we conducted a retrospective record review of patients discharged from oncology units over a 6-week period during 2018. Our convenience sample included all records from adult patients (≥18 years of age), diagnosed with cancer, and hospitalized (>24 hours).
Per the GTT method, two trained nurses independently assessed patient records to identify AEs using triggers, and physicians from the included units analyzed the consensus of the two nurses. Together, they assessed the severity and preventability of each AE. Results: From the sample of 224 reviewed records, we identified 661 triggers and 169 AEs in 94 of them (42%). Pain related to care was the most frequent AE (n = 29), followed by constipation (n = 17). AE rates were 75.4 per 100 admissions and 106.6 per 1000 patient days. Most of the identified AEs (78%) caused temporary harm to the patient and required an intervention. Among AEs during hospitalization (n = 125), 76 (61%) were considered not preventable, 28 (22%) preventable, and 21 (17%) undetermined. Conclusion: About half of the hospitalized oncology patients suffered from at least one AE related to care during their hospitalization. Pain, constipation, and nosocomial infections were the most frequent AEs. It is, therefore, essential to identify AEs to guide future clinical practice initiatives to ensure patient safety. abstract_id: PUBMED:33509727 Does Skilled Nursing Facility Trigger Tool used with Global Trigger Tool increase its value for adverse events evaluation? Objective: To assess the additional value in the evaluation of incidents and adverse events by adding the IHI Skilled Nursing Facility Trigger Tool (SNFTT) to the Institute for Healthcare Improvement's Global Trigger Tool (GTT) in an acute geriatric hospital. Material And Methods: A one-year retrospective study reviewing 240 electronic clinical records using the general GTT, either alone or combined with SNFTT. Main Outcome Measures: Number of triggers and identified adverse events (AEs), categories of severity and preventability of AEs, GTT incidence rates, and the number needed to alert (NNA). Results: One hundred and thirty-seven AEs were identified in 107 patients (57.1 AEs per 100 admissions).
Of these, 127 (92.7%) occurred 3 or more days after admission; 49.6% of the harm events were preventable. The NNA for GTT plus SNFTT was 8.6. No significant difference was found using the general GTT alone versus the general GTT plus SNFTT in terms of the main outcome measures. Eleven categories of triggers were better identified when using GTT plus SNFTT because with GTT alone they were allocated to a category of "Other": 9 from the care module (C15) and 2 from the medication module (M13). Conclusions: The study demonstrates that adding the SNFTT to the GTT did not increase its effectiveness as regards the evaluation of AEs. However, some triggers are better described in the SNFTT and have now been added into the general GTT method in our hospital. abstract_id: PUBMED:38225629 Establishing a trigger tool based on global trigger tools to identify adverse drug events in obstetric inpatients in China. Background: Pregnant women belong to a special population for drug therapy, and their physiological state, pharmacokinetics and pharmacodynamics are significantly different from the general population. Drug safety during pregnancy involves two generations and is an issue of broad social concern. The Global Trigger Tool (GTT) of the Institute for Healthcare Improvement (IHI) has been widely used as a patient safety measurement strategy by several institutions and national programs, and its effectiveness has been demonstrated. To date, however, only one study reports the use of the GTT in obstetric delivery. The aim of the study is to establish triggers detecting adverse drug events (ADEs) suitable for obstetric inpatients on the basis of the GTT, to examine the performance of the obstetric triggers in detecting ADEs experienced by obstetric units compared with the spontaneous reporting system and GTT, and to assess the utility and value of the obstetric trigger tool in identifying ADEs of obstetric inpatients.
Methods: Based on a literature review of PubMed and CNKI from January 1997 to October 2023, retrospective local obstetric ADE investigations, relevant obstetric guidelines, and the common adverse reactions of obstetric therapeutic drugs were used to establish the initial obstetric triggers. Following the Delphi method, two rounds of expert questionnaire surveys were conducted among 16 obstetric and neonatological physicians and pharmacists until an agreement was reached. A retrospective study was conducted to identify ADEs in 300 obstetric inpatient records at the Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital from June 1 to September 30, 2018. Two trained junior pharmacists analyzed the first eligible records independently, and the included records were reviewed by a trained pharmacist and physician to identify ADEs. Sensitivity and specificity of the established obstetric triggers were assessed by the number of ADEs/100 patients and the positive predictive value, compared with the spontaneous reporting system (SRS) and the GTT. Excel 2010 and SPSS 22 were used for data analysis. Results: Through two rounds of expert investigation, 39 preliminary triggers were established, comprising four modules (12 laboratory tests, 9 medications, 14 symptoms, and 4 outcomes). A total of 300 medical records were reviewed using the obstetric triggers, in which 48 cases of ADEs were detected, for an ADE incidence of 16%. Among the 39 obstetric triggers, 22 (56.41%) were positive and 11 of them detected ADEs. The positive predictive value (PPV) was 36.36%, and the number of ADEs/100 patients was 16.33 (95% CI, 4.19-17.81). The ADE detection rate, positive trigger rate, and PPV for the obstetric triggers were significantly augmented, confirming that the obstetric triggers were more specific and sensitive than the SRS and GTT.
Conclusion: The obstetric triggers were proven to be sensitive and specific in the active monitoring of ADEs for obstetric inpatients, and might serve as a reference for ADE detection of obstetric inpatients at medical institutions. abstract_id: PUBMED:28711893 Incidence of adverse medical events in the neonatal intensive care unit with the help of a global trigger tool Introduction: Identification and monitoring of the frequency of adverse medical events are important in the organization of the patient safety system in intensive care units (ICU). The consistent and accurate identification of adverse events remains a problem for most medical institutions, even in developed countries of the world. Aim: To determine the frequency of adverse medical events in the NICU by using Global and Pediatric Trigger Instruments. Materials And Methods: To achieve this goal, a cross-sectional study was conducted in which we retrospectively analyzed 160 records of newborns treated in the neonatal ICU of the perinatal center of the M.V. Sklifosovsky Poltava Regional Clinical Hospital in 2016 to identify certain neonatal triggers in them. Results: In 21.3% of newborns, after detection of the triggers of the "Care" module, a hospital-acquired infection was confirmed, which we regarded as a medical adverse event. The trigger "Infiltration / extravasation" of the same module was detected in 1.9% of medical records. The triggers "Glucose less than 3.0 mmol / L after 48 hours from birth", "Increase in creatinine" and "Deviation of electrolytes" of the "Laboratory" module were found in 21.9%, 6.3% and 11.3% of newborn medical records, respectively. Conclusion: The trigger tool is an effective way to identify adverse events that cause patient harm. Our results can form the basis for the development of a national neonatal trigger instrument that will effectively monitor the frequency of such events in the neonatal ICU.
Answer: Yes, increasing the size of bi-weekly samples of records does influence the results when using the Global Trigger Tool (GTT). A study conducted in a Norwegian 524-bed general hospital trust found that a larger sample size of 70 records bi-weekly identified 1.45 times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) compared to a smaller sample size of 10 records bi-weekly (27.2 adverse events/1000 patient days). Although the distribution of categories and severity of adverse events did not differ significantly between the two sample sizes, the rate of adverse events was affected by the sample size. This suggests that the sample size does have an impact on the rate of adverse events detected using the GTT, and further studies are needed to determine if the optimal sample size should be adjusted based on hospital size to detect a more accurate rate of adverse events (PUBMED:27113238).
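As a quick arithmetic check on the sample-size comparison quoted in the answer above (the two rates come from PUBMED:27113238; the helper function itself is illustrative and not part of any study):

```python
# Adverse-event rates per 1000 patient-days reported for the two
# bi-weekly sample sizes in the Norwegian GTT study (PUBMED:27113238).
RATE_70_RECORDS = 39.3  # 70 records reviewed bi-weekly
RATE_10_RECORDS = 27.2  # 10 records reviewed bi-weekly

def rate_ratio(larger: float, smaller: float) -> float:
    """Ratio of two adverse-event rates."""
    return larger / smaller

# Reproduces the reported ~1.45x difference (1.44 when recomputed from
# the published one-decimal rates).
print(round(rate_ratio(RATE_70_RECORDS, RATE_10_RECORDS), 2))  # → 1.44
```

The small gap between 1.44 and the published 1.45 is a rounding artifact of the one-decimal rates, not a discrepancy in the data.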
Instruction: Do patients with bone bridge amputations have improved gait compared with patients with traditional amputations? Abstracts: abstract_id: PUBMED:24818734 Do patients with bone bridge amputations have improved gait compared with patients with traditional amputations? Background: Two surgical techniques for performing a transtibial amputation include a traditional approach and a bone bridge approach. To date, there is no conclusive evidence of superiority of either technique in terms of temporal-spatial, kinetic, and mechanical work parameters. Questions/purposes: We sought to compare instrumented three-dimensional gait parameters and mechanical work measurements of patients who had undergone a traditional or bone bridge amputation at the transtibial level. Residual limb length and its effect on those functional outcomes was a secondary interest irrespective of amputation type. Methods: This retrospective comparative study included 14 active-duty military men with a mean age of 25 years (range, 20-28 years). Comparisons were made between seven patients with traditional and seven patients with bone bridge amputations at the transtibial level. The patients walked at self-selected and fast paces while three-dimensional gait analysis data were collected and comparisons were made between patients with the two amputation types as well as by length of the residual limb. Results: With the numbers available, we observed no differences between the two surgical groups at either speed for the temporal-spatial parameters or mechanical work metrics. However, the bone bridge group did demonstrate greater rolloff vertical ground reaction force during the fast walking condition with a median 1.02% of body weight compared with 0.94% (p = 0.046), which suggests a more stable platform in terminal stance. 
When the two groups were combined into one to test the effect of residual limb length, the linear regression resulted in an R(2) value of 0.419 (p = 0.012), in which patients with longer residual limbs had improved F3 force values during self-selected walking. Conclusions: Overall, limited functional differences were found between the two groups in this small pilot study, so a superior surgical technique could not be determined; whereas our limited sample size prevents a firm conclusion of no difference, our data can be considered hypothesis-generating for future, larger studies. Although some evidence indicated that patients with a bone bridge have improved loading at higher speeds, a regression of all patients walking at self-selected speed indicates that as residual limb length increases, loading increases regardless of amputation type. Thus, our data suggest it is important to preserve residual limb length to allow for improved loading in terminal stance. abstract_id: PUBMED:22253506 Puttur kattu (bandage) - A traditional bone setting practice in south India. Traditional bone setting practices are quite popular in India and nearly 6000 traditional bone setting Vaidyas (Practitioners) are practicing the same in our country. Puttur kattu is a traditional way of bone setting practice, invented accidentally by K. Kesava Raju in 1881. Now, the fourth generation of his family is carrying on this bone setting practice in hospitals at Puttur, Andhra Pradesh, with 200-300 patients per day. A prospective study was undertaken to analyze the techniques in diagnosis, way of management, medicine preparation, plants used and ways of application by traditional bone setter (TBS) Vaidyas, with special reference to Puttur. We also tried to understand the reasons that lead many people to go to Puttur for treatment, means of contact for treatment, pathology of fracture and outcome of some treated cases through this study.
54% of the studied patients came to the Puttur TBS on the advice of former patients. It was observed that more educated people are patronizing this therapy, and 23% of the observed cases voluntarily took discharge from a modern hospital to receive Puttur kattu treatment. 80% of patients believed that this therapy with home remedy would hasten the healing process. 44% of patients opted for this therapy due to fear of pain, heavy plaster of Paris bandage, prolonged period of immobilization, surgery and amputation. 71% of the followed-up cases were satisfied with the treatment of the TBS of Puttur, with minimum complications. The authors also attempted to put forth the legacy of the tradition, the way of management and the plants used for bone setting by the Puttur bone setting Vaidyas. abstract_id: PUBMED:27353989 Effectiveness of Proprioceptive Neuromuscular Facilitation Techniques as Compared to Traditional Strength Training in Gait Training Among Transtibial Amputees. Objective: To determine the effects of proprioceptive neuromuscular facilitation (PNF) techniques, as compared with traditional prosthetic strength training (TPT), in improving ambulatory function in subjects with transtibial amputation. Study Design: Randomized control trial. Place And Duration Of Study: Artificial Limb Centre of Fauji Foundation Hospital, Rawalpindi, from July to December 2014. Methodology: Patients with lower-limb amputation were selected through purposive sampling and randomly assigned into a PNF group (n=31) and a traditional group (n=32). A 4-week treatment session was provided, with baseline and follow-up measurements recorded using the locomotor capabilities index. Results: The locomotor capabilities index showed a significant difference in both groups. The mean index was 23.93 for PNF and 18.18 for TPT (p > 0.05), and the knee muscle strength was also significantly different (p > 0.05). There was no significant difference in gait parameters.
Conclusion: The proprioceptive neuromuscular facilitation technique is better at improving locomotor abilities and knee muscle strength than traditional training. The basic gait parameters showed the same effect in both groups. abstract_id: PUBMED:28756345 Gait alterations in the UAE population with and without diabetic complications using both traditional and entropy measures. Diabetic foot, one of the most common and debilitating manifestations of type 2 diabetes mellitus (T2DM), is the leading cause of non-traumatic lower extremity amputations worldwide. Diabetics who are at risk of ulceration are currently identified mainly by a thorough clinical examination of the feet, which typically does not show clear symptoms during the early stages of disease progression. In this study, we used a non-linear dynamics tool, gait entropy (GaitEN), in addition to traditional linear gait analysis methods, to investigate gait alterations amongst diabetic patients with combinations of three types of T2DM-related complications: retinopathy, diabetic peripheral neuropathy (DPN) and nephropathy. Peak plantar pressure (PPP) was not significantly different in the group with DPN as compared to the control group (diabetics with no complications, CONT) in the forefoot region (DPN: mean±SD: 396±69.4kPa, CONT: 409±68.9kPa), although it was significantly lower in the heel region (DPN: mean±SD: 285±43.1kPa, CONT: 295±61.8kPa). On the other hand, gait entropy was significantly lower for the DPN compared to the CONT group (DPN: 0.95±0.34, CONT: 1.03±0.28, p<0.05). The significantly lower entropy was maintained when neuropathy was combined with either retinopathy or nephropathy. For the group with all three complications (ALL-C), the entropy was higher than CONT (ALL-C: 1.07±0.26). This may indicate an intrinsic sensorimotor feedback mechanism by which DPN patients regulate their gait. However, this feedback gets weaker as patients develop multiple complications.
Further analysis with longer walking times and different speeds is needed to verify the entropy results. abstract_id: PUBMED:36061060 An improved bone transport surgical method for treating chronic ischemic ulcers (thromboangiitis obliterans). Introduction: Chronic ischemic injury of the upper/lower limbs caused by thromboangiitis obliterans (TAO, Buerger's disease) is difficult to heal, leading to high morbidity and amputation risk and seriously lowering the quality of life of patients. The pathogenesis of this disease remains unclear, and there are still no effective therapeutic approaches. Here, we are the first to use an improved bone transport technique to treat TAO-related foot ulcers, and we achieve good therapeutic effects. Materials And Methods: In this report, 22 patients met the inclusion criteria, and we describe an improved bone transport technique to repair TAO-related chronic lower limb wounds, which requires only a minimal surgical incision while providing a satisfactory surgical field. Results: The improved bone transport technique resulted in TAO-related chronic lower extremity wound healing in most patients (18, M:F 16:2) within the first treatment cycle. All wounds healed completely after two treatment cycles. After these cycles, the cold sensation in the patients' feet was significantly relieved, and the rest pain in the lower extremities was significantly reduced (Visual Analog Scale, P < 0.0001). Furthermore, Laser Doppler flowmetry showed that the blood perfusion and percutaneous oxygen pressure of the affected foot were higher than preoperative values (P < 0.0001). To conclude, bone transport technology is a viable option for refractory wounds of the extremity, and may promote healing by increasing blood circulation and tissue oxygen supply. Conclusions: In summary, the improved surgical method of the bone transport technique is worth considering in the treatment of thromboangiitis obliterans-related foot ulcers.
abstract_id: PUBMED:27843274 Reasons for Patronage of Traditional Bone Setters. Aims And Objectives: The objectives of this study were to analyze the different reasons why patients with fractures patronize traditional bone setters (TBS) and their impression of the outcome of the treatment by the TBS. Materials And Methods: A 24-month prospective observational study was conducted from February 2012 to January 2014. All the patients were recruited from the orthopedics outpatient clinic. Recorded data included the demographic data of each patient, the type of injury, presentation to hospital or not, reasons for leaving the hospital, reasons for patronage of the TBS and their impression of the outcome of TBS' treatment, the effect of educational background on patronage of TBS, and the reason for presenting to hospital for orthodox treatment. Data Analysis: Analysis was done with SPSS software Version 20. Results: A total of 79 patients were recruited for the study, and they had different reasons for patronizing TBS. These reasons include an external locus of decision making in 19 (24.1%) patients, and greater faith in TBS compared to orthodox medicine in 16 (20.3%). Twelve (15.2%) believed that TBS are more competent than orthodox medical practitioners, while another group, 11 (13.9%), considered the fees of TBS cheaper than those in the hospital. Delay in treatment in the hospital, forceful removal of patients from hospital against their will, and dissatisfaction with hospital treatment accounted for 5 (6.3%) each. Poor attitude of hospital staff, fear of amputation, and patients being unconscious during the injury accounted for 2 (2.5%) each. Their ages ranged from 17 to 83 years, with a mean age of 36.8 ± 11.8 years. The male:female ratio was 1.5:1. Conclusions And Recommendations: Despite recent advancements in the practice of orthopedics and trauma, there is still very high patronage of the TBS by most of our patients.
This is largely due to the dependence of the patients on their sponsors for treatment, while the influence of cultural and religious beliefs continues to play a major role in these decisions. abstract_id: PUBMED:12562068 A comparison of traditional prosthetic training versus proprioceptive neuromuscular facilitation resistive gait training with trans-femoral amputees. Reports in the rehabilitation literature suggest that patients with trans-femoral amputation ambulate well after suitable prosthetic treatment. The effects of exercise protocols on function, however, have not been documented in this population. This study was conducted to compare the outcome of traditional and proprioceptive neuromuscular facilitation (PNF) techniques on weight bearing and gait. Fifty unilateral trans-femoral amputees who were attending for their first prosthesis participated in this study. Amputees were randomly assigned into groups receiving the traditional training or PNF. Traditional treatment consisted of weight-shifting, balancing, stool-stepping and gait exercises. In the other group, the same activities were delivered using PNF. Amputees were trained 30 minutes daily, for a total of 10 treatments. Pre- and post-training assessment included weight bearing measurements using two bathroom scales and time-distance characteristics of gait from footprints. A statistically significant difference was found in all parameters within the groups between pre- and post-training evaluations (p<0.05), but a more marked improvement was observed in the group who received PNF (p<0.05). The results of the study suggest that prosthetic training based on proprioceptive feedback was more effective in improving weight bearing and gait when compared with a traditional programme. abstract_id: PUBMED:21857859 Review of the practice of traditional bone setting in Nigeria.
Background: The practice of traditional bone setting (TBS) is extensive in Nigeria and it enjoys enormous patronage by the populace. However, the outcome of the intervention of TBS treatment is usually poor, with profound effects on the patient. There are many publications highlighting different aspects of this subject, but none has summarized the entire practice and its problems in a single publication. Objective: This work aims at reviewing the entire subject of traditional bone setting in Nigeria in a single article to enable easy understanding and appreciation of the practice and problems of traditional bone setting by orthodox practitioners. Method: A total of thirty-one relevant published original scientific research papers involving all aspects of the subject were reviewed and the practices and problems were documented. Results: The results showed that the origin of the practice is shrouded in mystery but passed on by practitioners from one generation to another. There is no formal training of bonesetters. Though the methods of treatment vary, the problems caused by them are usually similar, with extremity gangrene being the worst. When attempts have been made to train the bone setters, improvements have been noted in their performance. Conclusion: In order to prevent some of the most debilitating outcomes, like amputation, it is suggested that TBS practitioners undergo some training from orthopaedic practitioners. abstract_id: PUBMED:37350084 Walking ability of individuals fitted with transfemoral bone-anchored prostheses: A comparative study of gait parameters. Objective: This study presents the walking abilities of participants fitted with transfemoral bone-anchored prostheses using a total of 14 gait parameters. Design: Two-centre retrospective cross-sectional comparative study. Setting: Research facilities equipped with tridimensional motion capture systems.
Participants: The two control arms comprised eight able-bodied participants (54 ± 9 years, 1.75 ± 0.07 m, 76 ± 7 kg) and nine participants fitted with transfemoral socket-suspended prostheses (59 ± 9 years, 1.73 ± 0.07 m, 80 ± 16 kg). The intervention arm comprised nine participants fitted with transfemoral bone-anchored prostheses (51 ± 13 years, 1.78 ± 0.09 m, 87.3 ± 16.1 kg). Intervention: Fitting of transfemoral bone-anchored prostheses. Main Measures: Comparisons were performed for two spatio-temporal, three spatial and nine temporal gait parameters. Results: The cadence and speed of walking were 107 ± 6 steps/min and 1.23 ± 0.19 m/s for the able-bodied arm, 88 ± 7 steps/min and 0.87 ± 0.17 m/s for the socket-suspended prosthesis arm, and 96 ± 6 steps/min and 1.03 ± 0.17 m/s for the bone-anchored prosthesis arm, respectively. The able-bodied and bone-anchored prosthesis arms were comparable in age, height, and body mass index, as well as cadence and speed of walking, but the able-bodied arm showed a swing phase 31% shorter. The bone-anchored and socket-suspended prosthesis arms were comparable for age, height, mass, and body mass index, as well as cadence and speed of walking, but the bone-anchored prosthesis arm showed a step width 65% shorter and a duration of double support 41% shorter. Conclusions: Bone-anchored and socket-suspended prostheses restored gait parameters equally well at a self-selected speed. These benchmark data provide new insights into the walking ability of individuals using transfemoral bionic bone-anchored prostheses.
However, no data are available on gait analysis and quality of life after this procedure. In the present case-control study (6 amputee diabetics, 6 nonamputee diabetics, and 6 healthy non-diabetic subjects), a sample of amputee diabetic patients was evaluated and compared with a sample of nonamputee diabetic patients and a group of age-matched healthy subjects. Gait biomechanics, quality of life, and pain were evaluated. Compared with the other 2 groups, amputee patients displayed a lower walking speed, greater variability, and lower ankle, knee, and hip range of motion values. They also tended to have a more flexed hip profile. Pain and lower quality of life were related to worsening biomechanical data. Our study results have shown that gait biomechanics in diabetic patients with first ray amputation are abnormal, probably owing to the severity of diabetes and the absence of the push-off phase provided by the hallux. Tailored orthotics and rehabilitation programs and a specific pain management program should be considered to improve the gait and quality of life of diabetic patients with first ray amputation. Answer: Based on the available abstracts, it is not conclusively determined whether patients with bone bridge amputations have improved gait compared to those with traditional amputations. A retrospective comparative study included active-duty military men with traditional and bone bridge amputations at the transtibial level and found limited functional differences between the two groups. The study observed no differences in temporal-spatial parameters or mechanical work metrics at either self-selected or fast walking speeds. However, the bone bridge group demonstrated greater rolloff vertical ground reaction force during the fast walking condition, which suggests a more stable platform in terminal stance.
The study also noted that as residual limb length increases, loading increases regardless of amputation type, suggesting the importance of preserving residual limb length for improved loading in terminal stance (PUBMED:24818734). In contrast, other studies focused on different aspects of gait and amputation. For example, one study compared proprioceptive neuromuscular facilitation (PNF) techniques with traditional prosthetic strength training in transtibial amputees and found that PNF was better in improving locomotor abilities and knee muscle strength, but there was no significant difference in basic gait parameters between the two groups (PUBMED:27353989). Another study on transfemoral bone-anchored prostheses found that bone-anchored and socket-suspended prostheses restored gait parameters equally well at a self-selected speed (PUBMED:37350084). Therefore, while some evidence suggests that bone bridge amputations may provide a more stable platform during fast walking, the overall conclusion is that there is no clear superiority of bone bridge amputations over traditional amputations in terms of gait improvement. Further research with larger sample sizes is needed to draw more definitive conclusions (PUBMED:24818734).
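The percentage comparisons quoted in these gait abstracts (for example, the median rolloff ground reaction force of 1.02% vs 0.94% body weight in PUBMED:24818734) are ordinary relative differences; a minimal sketch, where the generic helper is an illustration rather than anything taken from the studies:

```python
def percent_change(reference: float, value: float) -> float:
    """Percent change of `value` relative to `reference`."""
    return (value - reference) / reference * 100.0

# Median rolloff vertical ground reaction force (% body weight) at fast
# walking speed, traditional vs bone bridge amputation (PUBMED:24818734).
traditional = 0.94
bone_bridge = 1.02

# The bone bridge group loaded roughly 8.5% more in terminal stance.
print(f"{percent_change(traditional, bone_bridge):.1f}%")  # → 8.5%
```

The same helper reproduces comparisons like "step width 65% shorter" when the reference and comparison values are swapped accordingly.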
Instruction: The evolution of chromosomal instability in Chinese hamster cells: a changing picture? Abstracts: abstract_id: PUBMED:9881722 The evolution of chromosomal instability in Chinese hamster cells: a changing picture? Purpose: To investigate the kinetics of chromosomal instability induced in clones of Chinese hamster cells following X-irradiation. Materials And Methods: X-irradiated clones of GM10115, human-hamster hybrid cells containing a single human chromosome 4 (HC4), have been previously established. These clones were defined as unstable if they contained ≥ three subpopulations of cells with unique rearrangements of HC4 as detected by FISH. Stable and unstable clones were analysed by FISH and Giemsa staining at various times post-irradiation. Results: While most of the stable clones continued to show chromosomal stability of HC4 over time, one became marginally unstable at approximately 45 population doublings post-irradiation. Clones exhibiting chromosomal instability had one of several fates. Many of the unstable clones showed similar levels of instability over time. However, one unstable clone became stable with time in culture, while another became even more unstable over time. Cytogenetic analyses of all clones after Giemsa staining indicated that in some clones the hamster chromosomes were rearranged independent of HC4, demonstrating increased frequencies of chromatid breaks and dicentric chromosomes. The majority of the unstable clones also had higher yields of chromatid gaps. Conclusions: These data demonstrate the dynamic nature of chromosomal instability as measured by two different cytogenetic assays. abstract_id: PUBMED:29987698 Genome sequence comparison between Chinese hamster ovary (CHO) DG44 cells and mouse using end sequences of CHO BAC clones based on BAC-FISH results. Chinese hamster ovary (CHO) cells have frequently been used in biotechnology as a mammalian host cell platform for expressing genes of interest.
Previously, we constructed a detailed physical chromosomal map of the CHO DG44 cell line by fluorescence in situ hybridization (FISH) imaging using 303 bacterial artificial chromosome (BAC) clones as hybridization probes (BAC-FISH). BAC-FISH results revealed that the two longest chromosomes were completely paired. However, other chromosomes featured partial deletions or rearrangements. In this study, we determined the end sequences of 303 BAC clones (BAC end sequences), which were used for BAC-FISH probes. Among 606 BAC-end sequences (BESs) (forward and reverse ends), 558 could be determined. We performed a comparison between all determined BESs and mouse genome sequences using NCBI BLAST. Among these 558 BESs, 465 showed high homology to mouse chromosomal sequences. We analyzed the locations of these BACs in chromosomes of the CHO DG44 cell line using a physical chromosomal map. Based on these results, we investigated the regional similarities among CHO chromosomes (A-T) and mouse chromosomes (1-19 and sex) using 217 BESs (46.7% of the 465 highly homologous BESs). Twenty-three specific narrow regions in 13 chromosomes of the CHO DG44 cell line showed high homology to mouse chromosomes, but most of the other regions did not show significant correlations with the mouse genome. These results contribute to accurate alignments of chromosomes of Chinese hamster and its genome sequence, analysis of chromosomal instability in CHO cells, and the development of target locations for gene and/or genome editing techniques. abstract_id: PUBMED:33596973 Chinese hamster ovary cell line DXB-11: chromosomal instability and karyotype heterogeneity. Background: Chinese hamster ovary cell lines, also known as CHO cells, represent a large family of related, yet quite different, cell lines which are metabolic mutants derived from the original cell line, CHO-ori.
Dihydrofolate reductase-deficient DXB-11 cell line, one of the first CHO derivatives, serves as the host cell line for the production of therapeutic proteins. It is generally assumed that DXB-11 is identical to DUKX or CHO-DUK cell lines, but, to our knowledge, DXB-11 karyotype has not been described yet. Results: Using differential staining approaches (G-, C-banding and Ag-staining), we presented DXB-11 karyotype and revealed that karyotypes of DXB-11 and CHO-DUK cells have a number of differences. Although the number of chromosomes is equal (20 in each cell line), DXB-11 has normal chromosomes of the 1st and 5th pairs as well as an intact chromosome 8. Besides, in DXB-11 line, chromosome der(Z9) includes the material of chromosomes X and 6, whereas in CHO-DUK it results from the translocation of chromosomes 1 and 6. Ag-positive nucleolar organizer regions were revealed in the long arms of chromosome del(4)(q11q12) and both chromosome 5 homologues, as well as in the short arms of chromosomes 8 and add(8)(q11). Only 19 of 112 (16.96%) DXB-11 cells display the identical chromosome complement accepted as the main structural variant of karyotype. The karyotype heterogeneity of all remaining cells (93; 83.04%) occurs due to clonal and nonclonal additional structural rearrangements of chromosomes. Estimation of the frequency of chromosome involvement in these rearrangements allowed us to reveal that chromosomes 9, der(X)t(X;3;4), del(2)(p21p23), del(2)(q11q22) /Z2, der(4) /Z7, add(6)(p11) /Z8 are the most stable, whereas mar2, probably der(10), is the most unstable chromosome. A comparative analysis of our own and literary data on CHO karyotypes allowed us to designate conservative chromosomes, both normal and rearranged, that remain unchanged in different CHO cell lines, as well as variable chromosomes that determine the individuality of karyotypes of CHO derivatives. Conclusion: DXB-11 and CHO-DUK cell lines differ in karyotypes.
The differential instability of DXB-11 chromosomes revealed here is likely not incidental and results in the karyotype heterogeneity of the cell population. abstract_id: PUBMED:33809068 DNA Double-Strand Breaks Affect Chromosomal Rearrangements during Methotrexate-Mediated Gene Amplification in Chinese Hamster Ovary Cells. Methotrexate (MTX)-mediated gene amplification has been widely used in Chinese hamster ovary (CHO) cells for the biomanufacturing of therapeutic proteins. Although many studies have reported chromosomal instability and extensive chromosomal rearrangements in MTX-mediated gene-amplified cells, which may be associated with cell line instability issues, the mechanisms of chromosomal rearrangement formation remain poorly understood. We tested the impact of DNA double-strand breaks (DSBs) on chromosomal rearrangements using bleomycin, a DSB-inducing reagent. Bleomycin-treated CHO-DUK cells, a host cell line deficient in dihydrofolate reductase (Dhfr) activity, exhibited a substantial number of cells containing radial formations or non-radial formations with chromosomal rearrangements, suggesting that DSBs may be associated with chromosomal rearrangements. To confirm the causes of DSBs during gene amplification, we tested the effects of MTX treatment and the removal of nucleotide base precursors on DSB formation in Dhfr-deficient (i.e., CHO-DUK) and Dhfr-expressing (i.e., CHO-K1) cells. Immunocytochemistry demonstrated that MTX treatment did not induce DSBs per se, but a nucleotide shortage caused by the MTX-mediated inhibition of Dhfr activity resulted in DSBs. Our data suggest that a nucleotide shortage caused by MTX-mediated Dhfr inhibition in production cell lines is the primary cause of a marked increase in DSBs, resulting in extensive chromosomal rearrangements after gene amplification processes.
abstract_id: PUBMED:26850366 Increased recombinant protein production owing to expanded opportunities for vector integration in high chromosome number Chinese hamster ovary cells. Chromosomal instability is a characteristic of Chinese hamster ovary (CHO) cells. Cultures of these cells gradually develop heterogeneity even if established from a single cell clone. We isolated cells containing different numbers of chromosomes from a CHO-DG44-based human granulocyte-macrophage colony stimulating factor (hGM-CSF)-producing cell line and found that high chromosome number cells showed higher hGM-CSF productivity. Therefore, we focused on the relationship between chromosome aneuploidy of CHO cells and high recombinant protein-producing cell lines. Distribution and stability of chromosomes were examined in CHO-DG44 cells, and two cell lines expressing different numbers of chromosomes were isolated from the original CHO-DG44 cell line to investigate the effect of aneuploid cells on recombinant protein production. Both cell lines were stably transfected with a vector that expresses immunoglobulin G3 (IgG3), and specific antibody production rates were compared. Cells containing more than 30 chromosomes had higher specific antibody production rates than those with normal chromosome number. Single cell analysis of enhanced green fluorescent protein (Egfp)-gene transfected cells revealed that increased GFP expression was relative to the number of gene integration sites rather than the difference in chromosome numbers or vector locations. Our results suggest that CHO cells with high numbers of chromosomes contain more sites for vector integration, a characteristic that could be advantageous in biopharmaceutical production. abstract_id: PUBMED:9798685 Molecular cloning and chromosomal localization of Chinese hamster telomeric protein chTRF1. Its potential role in chromosomal instability. Chinese hamster cells frequently have altered karyotypes. 
To investigate the basis of recent observations that karyotypic alterations are related to telomeric fusions, we asked whether these alterations are due to a lack of telomere repeat binding factor(s). Further, Chinese hamster chromosomes contain large blocks of interstitial telomeric repeats, which are preferentially involved in chromosome breakage and exchange, rendering them an interesting model for such studies. Here, we report on the cloning and the chromosomal localization of the Chinese hamster telomere repeat binding factor, chTRF1. The sequence analysis revealed, similar to human TRF1 (hTRF1), an N-terminal acidic domain, a TRF1-specific DNA binding motif and a C-terminal Myb-type domain. Unlike mouse TRF1 (mTRF1), chTRF1 shows 97.5% identity to hTRF1. The chTRF1 gene was localized on the long arm of chromosome 5. In vitro translation of chTRF1 resulted in a protein product similar in molecular weight to hTRF1. Immunostaining of Chinese hamster ovary cells (CHO) with anti-TRF1 antibody revealed punctate nuclear staining. At metaphase, antibodies failed to detect TRF1 on most of the chromosome ends and the interstitial telomeric repeat bands. These studies suggest that chTRF1 does not bind the interstitial telomeric repeats, and its presence at the metaphase chromosome ends is limited. The latter could be a factor contributing to frequent karyotypic alterations observed in Chinese hamster cells. abstract_id: PUBMED:8291021 Genetic diversity of mitomycin C-hypersensitive Chinese hamster cell mutants: a new complementation group with chromosomal instability. A Chinese hamster cell mutant (V-C8) isolated previously, which is approximately 100-fold more sensitive to mitomycin C (MMC) than its parental wild-type V79 cells (judged by D10 values), was further characterized.
V-C8 cells exhibit an increased sensitivity towards other cross-linking agents, such as cis-DDP (approximately 40-fold) and DEB (approximately 30-fold), as well as to adriamycin (approximately 5-fold) and the monofunctional alkylating agents MMS (approximately 5-fold) and EMS (approximately 6-fold). V-C8 cells show a higher level of induction of chromosomal aberrations by cross-linking agents (MMC, cis-DDP, and DEB) and an increased level of spontaneous chromosomal aberrations in comparison to the wild-type V79 cells. To determine whether the V-C8 mutant represents a new complementation group among Chinese hamster cell mutants that also display extreme sensitivity to MMC, V-C8 cells were fused with irs1, irs1SF, UV20, UV41, and V-H4 cells. In all cases, the derived hybrids regained MMC sensitivity similar to that of wild-type cells, indicating that the V-C8 mutant belongs to a new sixth complementation group. abstract_id: PUBMED:10738308 Chromosome-specific telomeric associations in Chinese hamster embryonic cells. Telomeric associations (TAs) represent an important cytogenetic marker of human tumor cells. It has been thought that the primary cause of TAs is telomere shortening. However, we report here a surprising aspect of telomere maintenance in primary Chinese hamster embryonic (CHE) cells: relatively high frequencies of TAs in spite of normal telomere length. These TAs are present in both interphase and metaphase cells, suggesting that metaphase TAs may be relics of interphase chromosome organization. In addition, some TAs observed here are chromosome-specific and recurrent in at least three consecutive cell cycles in two different CHE cell strains. In spite of relatively high frequencies of TAs, none of the CHE strains show chromosome instability resulting from breakage-fusion-bridge cycles, as would be expected from tumor cell studies. It appears that TAs in CHE cells may be reversible events.
These results are discussed in light of current understanding of telomere biology. abstract_id: PUBMED:16643964 Chinese hamster cell mutant, V-C8, a model for analysis of Brca2 function. The previously described Chinese hamster cell mutant V-C8 that is defective in Brca2 shows a very complex phenotype, including increased sensitivity towards a wide variety of DNA damaging agents, chromosomal instability, abnormal centrosomes and impaired formation of Rad51 foci in response to DNA damage. Here, we demonstrate that V-C8 cells display biallelic nonsense mutations in Brca2, one in exon 15 and the other in exon 16, both resulting in truncated Brca2 proteins. We generated several independent mitomycin C (MMC)-resistant clones from V-C8 cells that had acquired an additional mutation leading to the restoration of the open reading frame of one of the Brca2 alleles. In two of these revertants, V-C8-Rev 1 and V-C8-Rev 6, the reversions lead to the wild-type Brca2 sequence. The V-C8 revertants did not gain the entire wild-type phenotype and still show a 2.5-fold increased sensitivity to mitomycin C (MMC), higher levels of spontaneous and MMC-induced chromosomal aberrations, as well as abnormal centrosomes when compared to wild-type cells. Our results suggest that Brca2 heterozygosity in hamster cells primarily gives rise to sensitivity to DNA cross-linking agents, especially chromosomal instability, a feature that might also be displayed in BRCA2 heterozygous mutation carriers. abstract_id: PUBMED:8989365 Genomic instability in Chinese hamster cells after exposure to X rays or alpha particles of different mean linear energy transfer. Evidence has accumulated that radiation induces a transmissible persistent destabilization of the genome, which may result in effects arising in the progeny of irradiated but surviving cells. An enhanced death rate among the progeny of cells surviving irradiation persists for many generations in the form of a reduced plating efficiency. 
Such delayed reproductive death is correlated with an increased occurrence of micronuclei. Since it has been suggested that radiation-induced chromosomal instability might depend on the radiation quality, we investigated the effects of alpha particles of different LET by looking at the frequency of delayed micronuclei in Chinese hamster V79 cells after cytochalasin-induced block of cell division. A dose-dependent increase in the frequency of micronuclei was found in cells assayed 1 week postirradiation or later. Also, there was a persistent increase in the frequency of dicentrics in surviving irradiated cells. Moreover, we found an increased micronucleus frequency in all of the 30 clones isolated from individual cells which had been irradiated with doses equivalent to either one, two or three alpha-particle traversals per cell nucleus. We conclude that the target for genomic instability in Chinese hamster cells must be larger than the cell nucleus.

Answer: The evolution of chromosomal instability in Chinese hamster cells indeed presents a changing picture, as evidenced by various studies. Research has shown that chromosomal instability in Chinese hamster cells is a dynamic process, with clones exhibiting different fates over time. Some clones that were initially stable post-X-irradiation became unstable, while others that were unstable either remained so, became stable, or even increased in instability (PUBMED:9881722). This indicates that chromosomal instability can evolve and is not a static characteristic. Further studies have revealed that Chinese hamster ovary (CHO) cells, commonly used in biotechnology, exhibit chromosomal rearrangements and partial deletions, contributing to their chromosomal instability (PUBMED:29987698). The DXB-11 cell line, a derivative of CHO cells, also demonstrates karyotype heterogeneity and chromosomal instability, with a significant proportion of cells displaying nonclonal additional structural rearrangements (PUBMED:33596973).
Methotrexate (MTX)-mediated gene amplification in CHO cells has been associated with chromosomal instability and rearrangements, which are believed to be caused by DNA double-strand breaks (DSBs) resulting from a nucleotide shortage due to MTX-mediated inhibition of dihydrofolate reductase (DHFR) activity (PUBMED:33809068). Additionally, CHO cells with a higher number of chromosomes have been found to have more sites for vector integration, which could be advantageous for biopharmaceutical production, suggesting a link between chromosomal instability and increased recombinant protein production (PUBMED:26850366). The molecular aspects of chromosomal instability have also been explored, such as the role of the Chinese hamster telomeric protein chTRF1. The limited presence of chTRF1 at metaphase chromosome ends and its failure to bind interstitial telomeric repeats may contribute to the frequent karyotypic alterations observed in Chinese hamster cells (PUBMED:9798685). Furthermore, the V-C8 Chinese hamster cell mutant, which is hypersensitive to mitomycin C and exhibits chromosomal instability, represents a new complementation group among Chinese hamster cell mutants (PUBMED:8291021). In summary, the evolution of chromosomal instability in Chinese hamster cells is indeed a changing and complex picture, influenced by various factors including irradiation, gene amplification processes, telomere maintenance, and genetic mutations. This instability has implications for both the fundamental understanding of cellular processes and practical applications in biotechnology.
Instruction: Asleep versus awake: does it matter? Abstracts: abstract_id: PUBMED:29351243 Awake versus Asleep Deep Brain Stimulation Surgery: Technical Considerations and Critical Review of the Literature. Advancements in neuroimaging have led to a trend toward direct, image-based targeting under general anesthesia without the use of microelectrode recording (MER) or intraoperative test stimulation, also referred to as "asleep" deep brain stimulation (DBS) surgery. Asleep DBS, utilizing imaging in the form of intraoperative computed tomography (iCT) or magnetic resonance imaging (iMRI), has demonstrated reliable targeting accuracy of DBS leads implanted within the globus pallidus and subthalamic nucleus while also improving clinical outcomes in patients with Parkinson's disease. In lieu of randomized controlled trials, retrospective comparisons between asleep and awake DBS with MER have shown similar short-term efficacy with the potential for decreased complications in asleep cohorts. In lieu of long-term outcome data, awake DBS using MER must demonstrate more durable outcomes with fewer stimulation-induced side effects and lead revisions in order for its use to remain justifiable, although patient-specific factors may also be used to guide the decision regarding which technique may be most appropriate and tolerable to the patient. abstract_id: PUBMED:26613177 "Asleep" deep brain stimulation for essential tremor. OBJECT Deep brain stimulation (DBS) performed under general anesthesia ("asleep" DBS) has not been previously reported for essential tremor. This is in part due to the inability to visualize the target (the ventral intermediate nucleus [VIM]) on MRI. The authors evaluate the efficacy of this asleep technique in treating essential tremor by indirect VIM targeting. METHODS The authors retrospectively reviewed consecutive cases of initial DBS for essential tremor performed by a single surgeon.
DBS was performed with patients awake (n = 40, intraoperative test stimulation without microelectrode recording) or asleep (n = 17, under general anesthesia). Targeting proceeded with standardized anatomical coordinates on preoperative MRI. Intraoperative CT was used for stereotactic registration and lead position confirmation. Functional outcomes were evaluated with pre- and postoperative Bain and Findley Tremor Activities of Daily Living scores. RESULTS A total of 29 leads were placed in asleep patients, and 60 were placed in awake patients. Bain and Findley Tremor Activities of Daily Living Questionnaire scores were not significantly different preoperatively for awake versus asleep cohorts (p = 0.2). The percentage of postoperative improvement was not significantly different between asleep (48.6%) and awake (45.5%) cohorts (p = 0.35). Euclidean error (mm) was higher for awake versus asleep patients (1.7 ± 0.8 vs 1.2 ± 0.4, p = 0.01), and radial error (mm) trended higher for awake versus asleep patients (1.3 ± 0.8 vs 0.9 ± 0.5, p = 0.06). There were no perioperative complications. CONCLUSIONS In the authors' initial experience, asleep VIM DBS for essential tremor without intraoperative test stimulation can be performed safely and effectively. abstract_id: PUBMED:30654750 Nasotracheal intubation-extubation-intubation and asleep-awake-asleep anesthesia technique for deep brain stimulation. Background: The asleep-awake-asleep (AAA) technique and laryngeal mask airway (LMA) is a common general anesthesia technique for deep brain stimulation (DBS) surgery. However, the LMA is not always the ideal artificial airway. In this report, we presented our experiences with nasotracheal intubation-extubation-intubation (IEI) and AAA techniques in DBS surgery for Parkinson's disease (PD) patients to meet the needs of surgery and ensure patients' safety and comfort.
Case Presentation: Three PD patients scheduled for DBS surgery had to receive general anesthesia for various reasons. For the first asleep stage, general anesthesia and nasotracheal intubation were completed with routine methods. During the awake stage, we pulled the nasotracheal tube back right above the epiglottis under fiberoptic bronchoscope (FOB) guidance for microelectrode recording (MER), macrostimulation testing and verbal communication. Once monitoring was completed, we induced anesthesia with rapid sequence induction and utilized the FOB to advance the nasotracheal tube into the trachea again. To minimize airway irritation during the process, we sprayed the airway with lidocaine before any manipulation. The neurophysiologists completed neuromonitoring successfully and all three patients were satisfied with the anesthesia provided at follow-up. Conclusion: Nasotracheal IEI and AAA anesthetic techniques should be considered a viable option during DBS surgery. abstract_id: PUBMED:35754293 Gender Distribution in Deep Brain Stimulation for Parkinson's Disease: The Effect of Awake versus Asleep Surgery. There is evidence that men are more likely to undergo deep brain stimulation (DBS) for Parkinson's disease (PD), suggesting that women are relatively undertreated. A total of 121 consecutive PD patients undergoing awake DBS with microelectrode recording and intraoperative clinical testing (30 patients, 5 women) or asleep MRI-guided and CT-verified (91 patients, 38 women) bilateral subthalamic nucleus DBS were included in this study. The results showed an increase in the proportion of female patients from 16.7% to 41.8% after changing our operative technique from awake to asleep (OR = 5.61; 95% CI: 1.52-20.78; p = 0.010), suggesting that women are more likely to undergo DBS when operated on asleep. abstract_id: PUBMED:34262518 Functional Neuroimaging During Asleep DBS Surgery: A Proof of Concept Study.
Object: A real-time functional magnetic resonance imaging (fMRI) feedback during ventral intermediate nucleus (VIM) deep brain stimulation (DBS) under general anesthesia (or "asleep" DBS) does not exist. We hypothesized that it was feasible to acquire a reliable and responsive fMRI during asleep VIM DBS surgery. Methods: We prospectively enrolled 10 consecutive patients who underwent asleep DBS for the treatment of medication-refractory essential tremor. Under general anesthesia, we acquired resting-state functional MRI immediately before and after the cannula insertion. Reliability was determined by a temporal signal-to-noise-ratio >100. Responsiveness was determined based on the fMRI signal change upon insertion of the cannula to the VIM. Results: It was feasible to acquire reliable fMRI during asleep DBS surgery. The fMRI signal was responsive to the brain cannula insertion, revealing a reduction in the tremor network's functional connectivity, which did not reach statistical significance in the group analysis. Conclusions: It is feasible to acquire a reliable and responsive fMRI signal during asleep DBS. The acquisition steps and the preprocessing pipeline developed in these experiments will be useful for future investigations to develop fMRI-based feedback for asleep DBS surgery. abstract_id: PUBMED:37808488 The association between serum total folic acid concentration and severe difficulty falling asleep in US adults: NHANES 2005-2008. Objective: The present study aimed to explore the correlation of serum total folic acid concentration with severe difficulty falling asleep among US adults. Methods: Cross-sectional data were collected from the National Health and Nutrition Examination Survey (NHANES) from 2005 to 2006 and 2007 to 2008 cycles. Participants were divided into the severe difficulty falling asleep (SDFA) group and the control group according to the monthly frequency of having difficulty falling asleep. 
Serum total folic acid concentration and severe difficulty falling asleep were taken as the independent and dependent variables, respectively. The correlation of serum total folic acid concentration with severe difficulty falling asleep was examined using multivariable logistic regression models, where the adjusted odds ratio (OR) and 95% confidence intervals (CIs) were calculated. Results: Overall, 8,926 individuals from the NHANES 2005 to 2006 and 2007 to 2008 waves were included in the analysis, of whom 683 participants had severe difficulty falling asleep. Higher serum total folic acid concentration (ng/ml) was associated with lower odds of severe difficulty falling asleep after adjusting for potentially confounding factors (OR = 0.98; 95% CI: 0.97-1.00), while participants in the highest quartile had the lowest odds of severe difficulty falling asleep (OR = 0.55; 95% CI: 0.40-0.74). The subgroup analysis based on gender, smoking history, and diabetes showed that this negative correlation was more significant in males, smokers, and the nondiabetic population after adjusting for confounding factors. Conclusion: High levels of serum folic acid were significantly associated with lower odds of severe difficulty falling asleep among US adults, suggesting that folic acid supplementation may be beneficial to the prevention and even treatment of severe difficulty falling asleep. abstract_id: PUBMED:35932263 Association of clinical outcomes and connectivity in awake versus asleep deep brain stimulation for Parkinson disease. Objective: Deep brain stimulation (DBS) for Parkinson disease (PD) is traditionally performed with awake intraoperative testing and/or microelectrode recording. Recently, however, the procedure has been increasingly performed under general anesthesia with image-based verification. The authors sought to compare structural and functional networks engaged by awake and asleep PD-DBS of the subthalamic nucleus (STN) and correlate them with clinical outcomes.
Methods: Levodopa equivalent daily dose (LEDD), pre- and postoperative motor scores on the Movement Disorders Society-Unified Parkinson's Disease Rating Scale part III (MDS-UPDRS III), and total electrical energy delivered (TEED) at 6 months were retroactively assessed in patients with PD who received implants of bilateral DBS leads. In subset analysis, implanted electrodes were reconstructed using the Lead-DBS toolbox. Volumes of tissue activated (VTAs) were used as seed points in group volumetric and connectivity analysis. Results: The clinical courses of 122 patients (52 asleep, 70 awake) were reviewed. Operating room and procedure times were significantly shorter in asleep cases. LEDD reduction, MDS-UPDRS III score improvement, and TEED at the 6-month follow-up did not differ between groups. In subset analysis (n = 40), proximity of active contact, VTA overlap, and desired network fiber counts with motor STN correlated with lower DBS energy requirement and improved motor scores. Discriminative structural fiber tracts involving supplementary motor area, thalamus, and brainstem were associated with optimal clinical improvement. Areas of highest structural and functional connectivity with VTAs did not significantly differ between the two groups. Conclusions: Compared to awake STN DBS, asleep procedures can achieve similarly optimal targeting-based on clinical outcomes, electrode placement, and connectivity estimates-in more efficient procedures and shorter operating room times. abstract_id: PUBMED:27290954 Asleep-awake-asleep regimen for epilepsy surgery: a prospective study of target-controlled infusion versus manually controlled infusion technique. Background: Asleep-awake-asleep (AAA) protocol for epilepsy surgery is a unique opportunity to accurately map epilepsy foci involved in motor and eloquent areas, allowing the operator to optimize the resection. 
Two different application modes of intravenous anesthesia for AAA craniotomies are widely used: target-controlled infusion (TCI) and traditional manually controlled infusion (MCI). We conducted this study to examine whether intravenous anesthesia using the TCI system with propofol and remifentanil would be a more effective method than MCI in AAA epilepsy surgery. Methods: This prospective, single-center study compared patients undergoing either TCI or MCI techniques for functional AAA epilepsy surgery. Thirty-five cases used TCI, including TCI-E (resection of epileptogenic foci in an eloquent area, n = 18) and TCI-M (resection of epileptogenic foci in a motor area, n = 17). Thirty-six cases used MCI, including MCI-E (epileptogenic foci in an eloquent area, n = 16) and MCI-M (epileptogenic foci in a motor area, n = 20). Bispectral index value and hemodynamic profiles at different time points during the awake phase were recorded along with time for awakening and the occurrences of adverse events. Results: The TCI technique significantly shortened intraoperative awakening times during the third phase: TCI-E vs MCI-E 12.82 ± 6.93 min vs 29.9 ± 9.04 min (P < .001) and TCI-M vs MCI-M 16.8 ± 5.19 min vs 30.91 ± 15.32 min (P = .010). During the awake phase, the highest bispectral index score values appeared in the TCI-E group at all time points. Mean arterial pressure and heart rate were more stable in the TCI-E group compared with the MCI-E group during the awake phase. Tachycardia and hypertension were most common in the MCI-E group (52.9% and 29.4%, P = .001 and P = .064). Conclusion: We found TCI superior, providing faster intraoperative awakening and more stable hemodynamics along with secure airway management. The TCI technique may therefore be feasible and effective and might be a viable replacement for the MCI technique in AAA epilepsy surgery.
abstract_id: PUBMED:33254172 A Phase 2 Randomized Trial of Asleep versus Awake Subthalamic Nucleus Deep Brain Stimulation for Parkinson's Disease. Objective: Asleep deep brain stimulation (DBS) for Parkinson's disease (PD) is being performed more frequently; however, motor outcomes and safety of asleep DBS have never been assessed in a prospective randomized trial. Methods: We conducted a prospective, randomized, noncomparative trial to assess the motor outcomes of asleep DBS. Leads were implanted in the subthalamic nucleus (STN) according to probabilistic stereotactic coordinates with a surgical robot under O-arm© imaging guidance under either general anesthesia without microelectrode recordings (MER) (20 patients, asleep group) or local anesthesia with MER and clinical testing (9 patients, awake group). Results: The mean motor improvement rates on the Unified Parkinson's Disease Rating Scale Part III (UPDRS-3) between OFF and ON stimulation without medication were 52.3% (95% CI: 45.4-59.2%) in the asleep group and 47.0% (95% CI: 23.8-70.2%) in the awake group, 6 months after surgery. Except for a subcutaneous hematoma, we did not observe any complications related to the surgery. Three patients (33%) in the awake group and 8 in the asleep group (40%) had at least one side effect potentially linked with neurostimulation. Conclusions: Owing to its randomized design, our study supports the hypothesis that motor outcomes after asleep STN-DBS in PD may be noninferior to the standard awake procedure. abstract_id: PUBMED:29923006 Dynamics of Alpha Rhythm Peak Frequency during Falling Asleep. Changes in the frequency characteristics of EEG alpha rhythm in during falling asleep were studied in three healthy individuals under conditions of long-term isolation (MARS-500 project). Falling asleep was preceded by enhanced alpha rhythm frequency. 
An inverse correlation between the duration of falling asleep and the prevailing alpha rhythm frequency during active and relaxed wakefulness was revealed in the left hemisphere. These results demonstrate that it is possible, in principle, to predict the duration of falling asleep using spectral analysis of the alpha rhythm. It is assumed that the frequency of the alpha-range spectral peak can serve as a marker of drowsiness and reflect the current need for sleep. Answer: The question of whether deep brain stimulation (DBS) surgery should be performed while the patient is awake or asleep has been a topic of discussion and research in the medical community. The literature presents various considerations and findings on this matter. Asleep DBS, which utilizes imaging such as intraoperative computed tomography (iCT) or magnetic resonance imaging (iMRI), has demonstrated reliable targeting accuracy and improved clinical outcomes in patients with Parkinson's disease (PD) when leads are implanted within the globus pallidus and subthalamic nucleus. Retrospective comparisons between asleep and awake DBS with microelectrode recording (MER) have shown similar short-term efficacy, with the potential for decreased complications in asleep cohorts. However, long-term outcome data are lacking, and awake DBS using MER may need to demonstrate more durable outcomes with fewer stimulation-induced side effects and lead revisions to remain justifiable. Patient-specific factors may also guide the decision on which technique is most appropriate and tolerable for the patient (PUBMED:29351243). In the case of essential tremor, asleep DBS without intraoperative test stimulation has been performed safely and effectively, even though the ventral intermediate nucleus (VIM) target is not visible on MRI. The study found no significant difference in postoperative improvement between asleep and awake cohorts, suggesting that asleep VIM DBS can be a viable option (PUBMED:26613177).
The asleep-awake-asleep (AAA) anesthesia technique, which includes nasotracheal intubation-extubation-intubation (IEI), has been used in DBS surgery for PD patients. This approach allows for general anesthesia during the initial and final stages of surgery, with an awake phase for MER, macrostimulation testing, and verbal communication. This technique has been reported to meet the needs of surgery while ensuring patient safety and comfort (PUBMED:30654750). Gender distribution in DBS for PD has also been influenced by the choice of awake versus asleep surgery. A study showed an increase in the proportion of female patients undergoing DBS after changing the operative technique from awake to asleep, suggesting that women may be more likely to undergo DBS when operated on asleep (PUBMED:35754293). Functional neuroimaging during asleep DBS surgery has been explored as a proof of concept, with the feasibility of acquiring reliable and responsive functional MRI (fMRI) signals during surgery being demonstrated.
Instruction: Does chronic obstructive pulmonary disease with or without type 2 diabetes mellitus influence the risk of lung cancer? Abstracts: abstract_id: PUBMED:24854189 Does chronic obstructive pulmonary disease with or without type 2 diabetes mellitus influence the risk of lung cancer? Result from a population-based cohort study. Background: Previous studies have suggested that chronic obstructive pulmonary disease (COPD) is an independent risk factor for lung cancer. There is some evidence that people with diabetes are at risk of developing many forms of cancer, but the evidence is inconclusive with regard to lung cancer. The aim of this study was to evaluate whether COPD with or without type 2 diabetes mellitus (T2DM) influences the risk of developing lung cancer. Methods: This is a retrospective cohort study consisting of 20,730 subjects newly diagnosed with COPD ("cases"). Their data were collected from the National Health Insurance system of Taiwan from 1998 to 2010. Among these patients, 5,820 had T2DM and 14,910 did not. The retrospective matched control group consisted of 20,729 subjects without either COPD or T2DM. The control group was matched with the cases for sex, age, and index year (the year that the patient was diagnosed with COPD). The subjects were followed until the end of 2011. Results: The risk of lung cancer was higher in the COPD group than in the non-COPD group, with an adjusted hazard ratio (HR) of 5.02 [95% confidence interval (CI) = 4.23-5.94] in the total case group; the adjusted HR was 5.38 (95% CI = 4.52-6.40) in the cohort without T2DM and 4.05 (95% CI = 3.26-5.03) in the cohort with T2DM. We observed a significant protective effect against lung cancer (adjusted HR = 0.75, 95% CI = 0.63-0.90) in the diabetic cohort compared with the non-diabetic cohort among patients with COPD. Conclusion: Patients with COPD had a significantly higher risk of developing lung cancer than healthy people.
However, there was a protective effect of T2DM against lung cancer among patients with COPD. Further investigation is needed to elucidate the mechanism underlying this apparent protective effect. abstract_id: PUBMED:24991802 Diabetes but not insulin increases the risk of lung cancer: a Taiwanese population-based study. Background: The trend of lung cancer incidence in Taiwan is unknown, and the association between type 2 diabetes/insulin use and lung cancer has rarely been studied. Methods: The trends of lung cancer incidence in 1979-2007 in the Taiwanese general population were calculated. A random sample of 1,000,000 subjects covered by the National Health Insurance in 2005 was recruited. A total of 494,002 men and 502,948 women without lung cancer were followed for the annual cumulative incidence of lung cancer in 2005, with calculation of the risk ratios between diabetic and non-diabetic subjects. Logistic regression estimated the adjusted odds ratios for risk factors. Results: The trends increased significantly in both sexes (P<0.0001). The sex-specific annual cumulative incidence increased with age in both the diabetic and non-diabetic subjects, but the risk ratios attenuated with age. In logistic regressions, diabetes was associated with a significantly higher risk, with odds ratios (95% confidence interval) for diabetes duration <1, 1-3, 3-5 and ≥5 years versus non-diabetes of 2.189 (1.498-3.200), 1.420 (1.014-1.988), 1.545 (1.132-2.109), and 1.329 (1.063-1.660), respectively. Such an association was not related to a higher detection rate with chest X-ray examination. Insulin use and medications including oral anti-diabetic drugs, statins, fibrates, and anti-hypertensive agents were not significantly associated with lung cancer. Age, male sex, and chronic obstructive pulmonary disease were positively associated with lung cancer, whereas dyslipidemia, stroke, and higher socioeconomic status were negatively associated.
Conclusions: Diabetes is significantly associated with a higher risk of lung cancer, but insulin use does not increase the risk. abstract_id: PUBMED:34298709 Individual 5-Year Lung Cancer Risk Prediction Model in Korea Using a Nationwide Representative Database. Early detection of lung cancer by screening has contributed to reducing lung cancer mortality. Identifying subjects at high risk for lung cancer is necessary to maximize the benefits and minimize the harms of lung cancer screening. In the present study, individual lung cancer risk in Korea was presented using a risk prediction model. Participants who completed health examinations in 2009 based on the Korean National Health Insurance (KNHI) database (DB) were eligible for the present study. Risk scores were assigned based on the adjusted hazard ratio (HR), and the standardized points for each risk factor were calculated to be proportional to the b coefficients. Model discrimination was assessed using the concordance statistic (c-statistic), and calibration was assessed by plotting the mean predicted probability against the mean observed probability of lung cancer. Among candidate predictors, age, sex, smoking intensity, body mass index (BMI), presence of chronic obstructive pulmonary disease (COPD), pulmonary tuberculosis (TB), and type 2 diabetes mellitus (DM) were finally included. Our risk prediction model showed good discrimination (c-statistic, 0.810; 95% CI: 0.801-0.819). The relationship between model-predicted and actual lung cancer development correlated well in the calibration plot. Because it uses easily accessible and modifiable risk factors, this model can help individuals make decisions regarding lung cancer screening or lifestyle modification, including smoking cessation. abstract_id: PUBMED:19782552 Risk factors for idiopathic pulmonary fibrosis in a Mexican population. A case-control study.
The etiology of idiopathic pulmonary fibrosis (IPF) remains poorly understood, but some studies have suggested that cigarette smoking or other occupational or environmental exposures, diabetes mellitus, or gastroesophageal reflux may play a role. In this study we evaluated the clinical records of 97 consecutive patients with IPF and 560 control patients with 5 different respiratory disorders: asthma (n=111), chronic obstructive pulmonary disease (n=132), squamous cell lung carcinoma (n=118), lung adenocarcinoma (n=101) and patients with otorhinolaryngology problems but without lung disease (n=98). In bivariate analyses, male sex, diabetes mellitus and being a former cigarette smoker were associated with IPF. After adjusting for these variables, multivariate analysis revealed that type 2 diabetes mellitus [11.3% in IPF patients vs 2.9% in controls, OR=4.3 (95% CI: 1.9-9.8), p<0.0001] was an independent risk factor associated with IPF. Our results provide additional evidence of a putative relationship between DM2 and idiopathic pulmonary fibrosis. Experimental research is necessary for thorough assessment of the pathogenic mechanisms involved in this association. abstract_id: PUBMED:26385563 Socioeconomic inequalities in non-communicable diseases and their risk factors: an overview of systematic reviews. Background: Non-communicable diseases (NCDs) are the largest cause of premature death worldwide. Socioeconomic inequalities contribute to a disparity in the burden of NCDs among disadvantaged and advantaged populations in low (LIC), middle (MIC), and high income countries (HIC). We conducted an overview of systematic reviews to systematically and objectively assess the available evidence on socioeconomic inequalities in relation to morbidity and mortality of NCDs and their risk factors.
Methods: We searched PubMed, The Cochrane Library, EMBASE, SCOPUS, Global Health, and Business Source Complete for relevant systematic reviews published between 2003 and December 2013. Two authors independently screened abstracts and full-text publications and determined the risk of bias of the included systematic reviews. Results: We screened 3302 abstracts, 173 full-text publications and ultimately included 22 systematic reviews. Most reviews had major methodological shortcomings; however, our synthesis showed that having low socioeconomic status (SES) and/or living in low and middle income countries (LMIC) increased the risk of developing cardiovascular diseases (CVD), lung and gastric cancer, type 2 diabetes, and chronic obstructive pulmonary disease (COPD). Furthermore, low SES increased the risk of mortality from lung cancer, COPD, and reduced breast cancer survival in HIC. Reviews included here indicated that lower SES is a risk factor for obesity in HIC, but this association varied by SES measure. Early case fatalities of stroke were lower and survival of retinoblastoma was higher in MIC compared to LIC. Conclusions: The current evidence supports an association between socioeconomic inequalities and NCDs and risk factors for NCDs. However, this evidence is incomplete and limited by the fairly low methodological quality of the systematic reviews, including shortcomings in the study selection and quality assessment process. abstract_id: PUBMED:30306761 Adherence to the 2015 Dutch dietary guidelines and risk of ten non-communicable diseases and mortality in the Rotterdam Study Objective: We aimed to evaluate the criterion validity of the 2015 food-based Dutch dietary guidelines, which were formulated based on evidence on the relation between diet and major chronic diseases. Design: We studied 9,701 participants of the Rotterdam Study, a population-based prospective cohort in individuals aged 45 years and over. 
Method: Dietary intake was assessed at baseline with a food-frequency questionnaire. For all participants, we examined adherence (yes/no) to fourteen items of the guidelines: vegetables (≥200g/d), fruit (≥200g/d), whole-grains (≥90g/d), legumes (≥135g/wk), nuts (≥15g/d), dairy (≥350g/d), fish (≥100g/wk), tea (≥450mL/d), ratio whole-grains:total grains (≥50%), ratio unsaturated fats & oils:total fats (≥50%), red and processed meat (<300g/wk), sugar-containing beverages (<150mL/d), alcohol (<10 g/d) and salt (≤6g/d). Total adherence was calculated as sum-score of the adherence to the individual items (0-14). Information on disease incidence and all-cause mortality was collected during a median follow-up period of 13.5 years (range 0-27.0). Results: Using Cox proportional-hazards models adjusted for confounders, we observed that every additional component adhered to was associated with a 3% lower mortality risk (HR=0.97,95% CI=0.95 - 0.98), lower risk of stroke (HR=0.95,95%CI 0.92;0.99), chronic obstructive pulmonary disease (HR=0.94,95%CI=0.91-0.98), colorectal cancer (HR=0.90,95%CI=0.84;0.96), and depression (HR=0.97,95%CI=0.95-0.999), but not with incidence of coronary heart disease, type 2 diabetes, heart failure, lung cancer, breast cancer, or dementia. Conclusion: Adherence to the Dutch dietary guidelines was associated with a lower mortality risk and a lower risk of developing some but not all of the chronic diseases on which the guidelines were based. abstract_id: PUBMED:31354621 Human Insulin Therapy Is Associated With an Increased Risk of Lung Cancer: A Population-Based Retrospective Cohort Study. Background: Whether human insulin may affect lung cancer risk requires investigation. Methods: All patients with a diagnosis of diabetes mellitus from 1996 to 2009 were enrolled from Taiwan's National Health Insurance. 
An entry date was set on January 1, 2004, and 1,007,617 patients with type 2 diabetes mellitus diagnosed before 2004 were followed up for new-onset lung cancer until December 31, 2009. Incidence rates of lung cancer for never-users, ever-users, and tertiles of three dose-response exposure parameters (i.e., time since starting insulin, cumulative dose, and cumulative duration) were calculated. Adjusted hazard ratios were estimated by Cox proportional hazards models. The joint effect of insulin and chronic obstructive pulmonary disease was also evaluated. Results: There were 156,720 ever-users and 850,897 never-users. The respective case numbers of incident lung cancer were 3,007 (1.92%) and 13,677 (1.61%), and the respective incidence rates were 424.45 and 313.60 per 100,000 person-years. The adjusted hazard ratio comparing ever-users vs. never-users was 1.545 (95% confidence interval: 1.478-1.614). The hazard ratios for the different subgroups of the three dose-response parameters all suggested a significantly higher risk of lung cancer associated with insulin use (P trend < 0.0001). Compared to patients without insulin use and without chronic obstructive pulmonary disease, insulin users who also had chronic obstructive pulmonary disease had the highest risk of lung cancer (adjusted hazard ratio: 1.891, 95% confidence interval: 1.767-2.024). Conclusions: This study suggests a significant association between human insulin use and lung cancer risk in patients with type 2 diabetes mellitus. abstract_id: PUBMED:31329872 Smoking, drinking, diet and physical activity-modifiable lifestyle risk factors and their associations with age to first chronic disease. Background: This study examined the incidence of a person's first diagnosis of a selected chronic disease, and the relationships between modifiable lifestyle risk factors and age to first of six chronic diseases. 
Methods: Ontario respondents to the Canadian Community Health Survey from 2001 to 2010 were followed up with administrative data until 2014 for congestive heart failure, chronic obstructive respiratory disease, diabetes, lung cancer, myocardial infarction and stroke. By sex, the cumulative incidence function of age to first chronic disease was calculated for the six chronic diseases individually and compositely. The associations between modifiable lifestyle risk factors (alcohol, body mass index, smoking, diet, physical inactivity) and age to first chronic disease were estimated using cause-specific Cox proportional hazards models and Fine-Gray competing risk models. Results: Diabetes was the most common disease. By age 70.5 years (2015 world life expectancy), 50.9% of females and 58.1% of males had at least one disease, and few had died free of the selected diseases (3.4% of females; 5.4% of males). Of the lifestyle factors, heavy smoking had the strongest association with the risk of experiencing at least one chronic disease (cause-specific hazard ratio = 3.86; 95% confidence interval = 3.46, 4.31). The lifestyle factors were modelled for each disease separately, and the associations varied by chronic disease and sex. Conclusions: We found that most individuals will have at least one of the six chronic diseases before dying. This study provides a novel approach using competing risk methods to examine the incidence of chronic diseases relative to the life course and how their incidences are associated with lifestyle behaviours. abstract_id: PUBMED:28825166 Adherence to the 2015 Dutch dietary guidelines and risk of non-communicable diseases and mortality in the Rotterdam Study. We aimed to evaluate the criterion validity of the 2015 food-based Dutch dietary guidelines, which were formulated based on evidence on the relation between diet and major chronic diseases.
We studied 9701 participants of the Rotterdam Study, a population-based prospective cohort in individuals aged 45 years and over [median 64.1 years (95%-range 49.0-82.8)]. Dietary intake was assessed at baseline with a food-frequency questionnaire. For all participants, we examined adherence (yes/no) to fourteen items of the guidelines: vegetables (≥200 g/day), fruit (≥200 g/day), whole-grains (≥90 g/day), legumes (≥135 g/week), nuts (≥15 g/day), dairy (≥350 g/day), fish (≥100 g/week), tea (≥450 mL/day), ratio whole-grains:total grains (≥50%), ratio unsaturated fats and oils:total fats (≥50%), red and processed meat (<300 g/week), sugar-containing beverages (≤150 mL/day), alcohol (≤10 g/day) and salt (≤6 g/day). Total adherence was calculated as sum-score of the adherence to the individual items (0-14). Information on disease incidence and all-cause mortality during a median follow-up period of 13.5 years (range 0-27.0) was obtained from data collected at our research center and from medical records. Using Cox proportional-hazards models adjusted for confounders, we observed every additional component adhered to was associated with a 3% lower mortality risk (HR 0.97, 95% CI 0.95; 0.98), lower risk of stroke (HR 0.95, 95% CI 0.92; 0.99), chronic obstructive pulmonary disease (HR 0.94, 95% CI 0.91; 0.98), colorectal cancer (HR 0.90, 95% CI 0.84; 0.96), and depression (HR 0.97, 95% CI 0.95; 0.999), but not with incidence of coronary heart disease, type 2 diabetes, heart failure, lung cancer, breast cancer, or dementia. These associations were not driven by any of the individual dietary components. To conclude, adherence to the Dutch dietary guidelines was associated with a lower mortality risk and a lower risk of developing some but not all of the chronic diseases on which the guidelines were based. abstract_id: PUBMED:32877797 Antireflux surgery and risk of lung cancer by histological type in a multinational cohort study. 
Introduction: Airway micro-aspiration might contribute to the proposed associations between gastroesophageal reflux disease (GERD) and some lung diseases, including lung cancer. This study aimed to examine the hypothesis that antireflux surgery decreases the risk of small cell carcinoma, squamous cell carcinoma and adenocarcinoma of the lung differently depending on their location in relation to micro-aspiration. Methods: Population-based cohort study including patients having undergone antireflux surgery during 1980-2014 in Denmark, Finland, Iceland, Norway or Sweden. Patients having undergone antireflux surgery were compared with two groups: 1) the corresponding background population, by calculating standardised incidence ratios (SIRs) with 95% confidence intervals (CIs) and 2) non-operated GERD-patients, by calculating hazard ratios (HRs) with 95% CIs using multivariable Cox regression with adjustment for sex, age, calendar period, country, chronic obstructive pulmonary disease and obesity diagnosis or type 2 diabetes. Results: Among all 812,617 GERD-patients, 46,996 (5.8%) had undergone antireflux surgery. The SIRs were statistically significantly decreased for small cell carcinoma (SIR = 0.57, 95% CI 0.41-0.77) and squamous cell carcinoma (SIR = 0.75, 95% CI 0.60-0.92), but not for adenocarcinoma of the lung (SIR = 0.90, 95% CI 0.76-1.06). The HRs were also below unity for small cell carcinoma (HR = 0.63, 95% CI 0.44-0.90) and squamous cell carcinoma (HR = 0.80, 95% CI 0.62-1.03), but not for adenocarcinoma of the lung (HR = 1.03, 95% CI 0.84-1.26). Analyses restricted to patients with objective GERD (reflux oesophagitis or Barrett's oesophagus) showed similar results. Conclusions: This all-Nordic study indicates that patients who undergo antireflux surgery are at decreased risk of small cell carcinoma and squamous cell carcinoma of the lung, but not of adenocarcinoma of the lung. 
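As a side note on the statistic reported above, a standardised incidence ratio (SIR) is simply the observed case count divided by the count expected from background population rates, with a confidence interval derived from the Poisson distribution of the observed count. A minimal sketch (the counts below are hypothetical, not taken from the study; the CI uses Byar's approximation to the exact Poisson limits):

```python
# Illustrative SIR calculation with hypothetical counts (not study data).
# Byar's approximation gives near-exact Poisson confidence limits for the
# observed count, which are then divided by the expected count.
from math import sqrt

def sir_with_ci(observed, expected, z=1.96):
    """Return (SIR, lower, upper) using Byar's approximation for the 95% CI."""
    lower_count = observed * (1 - 1 / (9 * observed) - z / (3 * sqrt(observed))) ** 3
    upper_count = (observed + 1) * (
        1 - 1 / (9 * (observed + 1)) + z / (3 * sqrt(observed + 1))
    ) ** 3
    return observed / expected, lower_count / expected, upper_count / expected

sir, lo, hi = sir_with_ci(observed=40, expected=70.0)
print(f"SIR = {sir:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An SIR below 1 with an upper confidence limit below 1, as for small cell carcinoma in the abstract, indicates significantly fewer cases in the operated cohort than expected from the background population.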
Answer: Chronic obstructive pulmonary disease (COPD) is an independent risk factor for lung cancer. A population-based cohort study in Taiwan found that the risk of lung cancer was higher in the COPD group than in the non-COPD group, with an adjusted hazard ratio (HR) of 5.02 among the total case group. The risk was even higher in the cohort without type 2 diabetes mellitus (T2DM), with an adjusted HR of 5.38, while the cohort with T2DM had a lower adjusted HR of 4.05. Interestingly, the study observed a significantly protective effect from lung cancer (adjusted HR = 0.75) of the diabetic cohort compared to the non-diabetic cohort among patients with COPD (PUBMED:24854189). Another study from Taiwan reported that diabetes is significantly associated with a higher risk of lung cancer, but insulin use does not increase the risk. This study also found that age, male sex, and COPD were positively associated with lung cancer (PUBMED:24991802). A Korean study developed a 5-year lung cancer risk prediction model that included COPD as one of the risk factors, along with age, sex, smoking intensity, body mass index (BMI), pulmonary tuberculosis (TB), and T2DM. The model showed good discrimination for predicting lung cancer risk (PUBMED:34298709). In summary, COPD is a significant risk factor for lung cancer, and the presence of T2DM may influence this risk. While COPD without T2DM is associated with a higher risk of lung cancer, the presence of T2DM seems to have a protective effect among patients with COPD. However, diabetes alone is associated with an increased risk of lung cancer, independent of insulin use. These findings suggest that the interplay between COPD, T2DM, and lung cancer risk is complex and may involve multiple mechanisms.
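The point-based scoring used by the Korean risk model above (points proportional to the Cox b coefficients, i.e. the log hazard ratios) can be sketched as follows. The coefficients here are invented for illustration only; they are not the published model's values:

```python
# Hypothetical sketch of a Framingham-style point system: each risk factor's
# points are proportional to its Cox coefficient (log hazard ratio), scaled
# so the smallest coefficient maps to 1 point. The hazard ratios below are
# made up for illustration; they are NOT the published Korean model.
from math import log

betas = {                      # beta = ln(assumed adjusted hazard ratio)
    "current_heavy_smoker": log(4.0),
    "copd": log(1.8),
    "pulmonary_tb": log(1.5),
    "type2_diabetes": log(1.3),
}

scale = min(betas.values())    # smallest effect defines 1 point
points = {factor: round(beta / scale) for factor, beta in betas.items()}

def risk_score(factors):
    """Sum the points for the risk factors a given individual has."""
    return sum(points[f] for f in factors)

print(points)
print("score:", risk_score(["copd", "type2_diabetes"]))
```

The resulting integer score can then be mapped to an absolute 5-year risk via the model's baseline survival, which is the step the published calibration plot evaluates.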
Instruction: Can patient and pain characteristics predict manometric sphincter of Oddi dysfunction in patients with clinically suspected sphincter of Oddi dysfunction? Abstracts: abstract_id: PUBMED:24472759 Can patient and pain characteristics predict manometric sphincter of Oddi dysfunction in patients with clinically suspected sphincter of Oddi dysfunction? Background: Biliopancreatic-type postcholecystectomy pain, without significant abnormalities on imaging and laboratory test results, has been categorized as "suspected" sphincter of Oddi dysfunction (SOD) type III. Clinical predictors of "manometric" SOD are important to avoid unnecessary ERCP, but are unknown. Objective: To assess which clinical factors are associated with abnormal sphincter of Oddi manometry (SOM). Design: Prospective, cross-sectional. Setting: Tertiary. Patients: A total of 214 patients with suspected SOD type III underwent ERCP and pancreatic SOM (pSOM; 85% dual SOM) at 7 U.S. centers (from August 2008 to March 2012) as part of a randomized trial. Interventions: Pain and gallbladder descriptors, psychosocial/functional disorder questionnaires. Main Outcome Measurements: Abnormal SOM findings. Univariate and multivariate analyses assessed associations between clinical characteristics and outcome. Results: The cohort was 92% female with a mean age of 38 years. Baseline pancreatic enzymes were increased in 5%; 9% had minor liver enzyme abnormalities. Pain was in the right upper quadrant (RUQ) in 90% (48% also epigastric); 51% reported daily abdominal discomfort. Fifty-six took narcotics an average of 33 days (of the past 90 days). Less than 10% experienced depression or anxiety. Functional disorders were common. At ERCP, 64% had abnormal pSOM findings (34% both sphincters, 21% biliary normal), 36% had normal pSOM findings, and 75% had at least 1 abnormal sphincter.
Demographic factors, gallbladder pathology, increased pancreatobiliary enzymes, functional disorders, and pain patterns did not predict abnormal SOM findings. Anxiety, depression, and poorer coping were more common in patients with normal SOM findings (not significant on multivariate analysis). Limitations: Generalizability. Conclusions: Patient and pain factors and psychological comorbidity do not predict SOM results at ERCP in suspected type III SOD. (Clinical Trial Registration Number: NCT00688662.) abstract_id: PUBMED:3979750 Manometric disorders in patients with suspected sphincter of Oddi dysfunction. Sphincter of Oddi (SO) manometry was conducted in 48 subjects who had previously undergone cholecystectomy. Ten of these subjects had no detectable biliary or pancreatic disease and served as controls. The other 38 patients were presumed to have SO dysfunction and had presented with episodes of severe biliary-type pain with either dilatation of the bile duct on a retrograde cholangiogram, transient changes in liver function tests, or both of these abnormalities. A triple-lumen low-compliance manometric system was used to record the SO basal pressure, SO phasic contraction amplitude, SO wave frequency, direction of wave propagation, and SO response to intravenously administered cholecystokinin-octapeptide 20 ng/kg. Satisfactory manometric recordings were obtained from 32 of 38 patients and of these, 25 patients showed one or more abnormalities when compared with data from the 10 controls. The abnormalities included an excess of retrograde contractions (12), a high frequency of SO phasic contractions (11), elevation of the SO basal pressure (8), and a paradoxical cholecystokinin-octapeptide response (10). The study has demonstrated a spectrum of manometric abnormalities in the SO of patients with suspected SO dysfunction.
abstract_id: PUBMED:23012668 Low but significant yield of endosonography in patients with suspected Sphincter of Oddi Dysfunction Type III with normal imaging studies. Aim: To determine the yield of endosonography (EUS) in patients with suspected Sphincter of Oddi Dysfunction (SOD) Type III in detecting abdominal abnormalities in those with normal upper endoscopy and imaging studies. Methods: A retrospective review of patients with suspected SOD who underwent EUS prior to undergoing an ERCP was performed. The inclusion criteria were: 1. patients with right upper quadrant abdominal pain for ≥3 months; 2. no evidence of abnormalities on upper endoscopy, imaging studies and laboratory tests. Exclusion criteria were: history of acute or chronic pancreatitis, pancreas divisum, jaundice or any significant abnormality detected on prior studies. The main outcome measurements were to determine the diagnostic yield of EUS in diagnosing the etiology of abdominal pain and detecting clinically significant lesions. Results: We identified 143 patients with suspected SOD type III who had undergone EUS. A diagnosis of the etiology of abdominal pain by EUS was made in 12 of 143 patients (8%) with previously normal endoscopy and imaging studies. EUS identified the following: changes consistent with chronic pancreatitis (n=5), biliary stone (n=1), side-branch intraductal papillary mucinous neoplasms of the pancreas (n=3), benign bile duct stricture (n=1). Papillary stenosis was diagnosed in two patients with common bile duct dilation detected by EUS. Conclusions: EUS may detect significant abnormalities in a small subset of patients with suspected SOD and previously normal endoscopies and imaging studies. Although the yield of EUS is low, it should be considered in patients with persistent pain and those with a high clinical suspicion for pancreatic pathology.
abstract_id: PUBMED:27165434 Duloxetine for the Treatment of Patients with Suspected Sphincter of Oddi Dysfunction: A Pilot Study. Objective: To examine the tolerability and efficacy of duloxetine in patients with suspected sphincter of Oddi dysfunction (SOD). Methods: An open-label, single-center, 12-week trial of duloxetine 60 mg once daily was conducted in 20 patients with suspected SOD. All patients were evaluated by expert pancreato-biliary specialists. The primary outcome measure was a Patient Global Impression of Change (PGIC) scale. Secondary measures included the pain burden, assessed by the Recurrent Abdominal Pain Intensity and Disability scale, the Short-Form Health Survey, and the Hospital Anxiety and Depression Scale. A positive clinical response was defined as a PGIC score of much or very much improved at 3 months and was estimated using a two-sided 90 % confidence interval. The primary outcome was analyzed using a one-sample binomial test at a significance level of 0.10. Results: Of the 20 screened patients, 18 were enrolled; 14 completed at least one post-baseline evaluation; and 10 patients completed the third month endpoint visit. Patients missing the third month visit were considered non-responders for the primary outcome. Response rates were 90 % for study completers (n = 10; 90 % CI 74-100; p = 0.02) and 64 % for patients who completed at least one post-baseline evaluation (n = 14; 90 % CI 43-85; p = 0.42). Seven patients did not complete the study due to adverse events (mostly fatigue and nausea). Conclusions: Duloxetine showed an indication of efficacy in the treatment of pain in patients with suspected SOD, but adverse events limited its use. These preliminary, open-label results justify definitive placebo-controlled trials. abstract_id: PUBMED:24445573 Psychosocial characteristics and pain burden of patients with suspected sphincter of Oddi dysfunction in the EPISOD multicenter trial. 
Objectives: Patients with several painful functional gastrointestinal disorders (FGIDs) are reported to have a high prevalence of psychosocial disturbance. These aspects have not been studied extensively in patients with suspected Sphincter of Oddi dysfunction (SOD). Methods: A total of 214 patients with post-cholecystectomy pain and suspected SOD were enrolled at seven US centers in a multicenter randomized trial (Evaluating Predictors and Interventions in Sphincter of Oddi Dysfunction). Baseline assessments included pain descriptors and burden, structured psychosocial assessments of anxiety/depression, coping, trauma, and health-related quality of life. Patients with high levels of depression, suicidal ideation, or psychosis were excluded. Results: The study population (92% female, mean age 38) reported anxiety (9%), depression (8%), past sexual trauma (18%), and physical abuse (10%). Of the total screened population (n=1460), 3.9% of the patients were excluded because of the presence of defined severe psychological problems. The mean Medical Outcomes Study Short Form-36 (SF-36) physical and mental composite scores were 38.70 (s.d.=7.89) and 48.74 (s.d.=9.60), respectively. Most subjects reported symptoms of other FGIDs. There were no correlations between the extent of the pain burden in the 3 months before enrollment and the baseline anxiety scores or victimization history. However, those with greater pain burden were significantly more depressed. There were no meaningful differences in the psychosocial parameters between subjects with or without irritable bowel, or between those who had cholecystectomy for stones and those who had it for functional gallbladder disease. Those declining randomization were comparable to those randomized. Conclusions: Psychosocial comorbidity in SOD is high. However, it does not appear to differ significantly from that reported in surveys of age- and gender-matched general populations, and may be lower than reported with other FGIDs.
abstract_id: PUBMED:1597742 A noninvasive test of sphincter of Oddi dysfunction in postcholecystectomy patients: the scintigraphic score. The ideal noninvasive test of sphincter of Oddi dysfunction (SOD) does not exist and the diagnosis of patients with postcholecystectomy pain often relies on invasive procedures. In this paper we describe a scintigraphic test for SOD: the scintigraphic score. This score combines quantitative and visual criteria for interpretation of hepatobiliary scans. Twenty-six consecutive postcholecystectomy patients underwent hepatobiliary imaging, ERCP, and sphincter manometry. Twelve patients had SOD and 14 had normal sphincters determined by clinical findings, ERCP, and manometric studies. All patients with a normal sphincter had scores of 0-4, while patients with SOD had values of 5-12, yielding a sensitivity and specificity of 100%. Hepatobiliary scans scored in this fashion may become the noninvasive test of choice to screen postcholecystectomy patients with suspected SOD. abstract_id: PUBMED:2920642 Endoscopic biliary manometry in patients with suspected sphincter of Oddi dysfunction and in patients with cystic dilatation of the bile ducts. We studied the motility of the sphincter of Oddi in 12 patients with suspected sphincter of Oddi dysfunction, in four patients with cystic dilatation of the bile ducts (two Caroli's cases and two fusiform choledochal cyst cases), and in 33 patients with retained common duct stones. In these last 33 patients, the motor activity of the sphincter of Oddi was similar to that recorded in nine control subjects without pancreatic or biliary diseases. In the suspected Oddi dysfunction cases, both the basal sphincteric pressure and the frequency of the phasic contractions were significantly elevated (P < 0.001).
Patients with biliary cystic dilatation showed an increased basal pressure, but the frequency of the contractions was elevated only in those with choledochal cysts, and the amplitude only in one of the two patients with Caroli's disease. Motor disorders of the sphincter of Oddi provide a basis for an alternative etiopathogenesis of cystic disease of the biliary system and a possible explanation for pain and dilatation of the bile duct in patients with suspected sphincter of Oddi dysfunction. abstract_id: PUBMED:12584215 Scintigraphy versus manometry in patients with suspected biliary sphincter of Oddi dysfunction. Introduction: Sphincter of Oddi (SO) manometry is at present the "gold standard" investigation for patients with suspected biliary SO dysfunction. Non-invasive scintigraphy in cholecystectomised patients, using either a complex scoring system or the transit time from the hepatic hilum to the duodenum (HDTT), has been promoted as a sensitive and specific alternative. Aim: To evaluate the scintigraphic scoring system and HDTT in patients with suspected biliary SO dysfunction undergoing SO manometry. Methods: Cholecystectomised patients undergoing SO manometry for persistent biliary-type pain, as defined by the Rome II criteria, for which all other causes had been excluded, were prospectively studied. Scintigraphy with cholecystokinin octapeptide infusion was performed within a month prior to manometry. Scoring of the scans and measurement of HDTT was performed by independent blinded observers. Manometry of the biliary sphincter was performed per-endoscopically and defined as abnormal if basal pressure was > or = 40 mm Hg. Results: Thirty-two patients were enrolled (30 females, mean age 45.1 years). Three patients were excluded from analysis because manometry from the bile duct was not technically possible. Eight patients had abnormal manometry.
Scintigraphic scoring had a sensitivity of 25-38%, a specificity of 86-89%, positive predictive value (PPV) of 40-60%, and a negative predictive value (NPV) of 75-79%. The coefficient of variation for interobserver variation in scores was 0.72. HDTT sensitivity was 13%, specificity 95%, PPV 50%, and NPV 74%. Conclusions: Our findings indicate that scintigraphy using these methods of analysis correlates poorly with manometry in post cholecystectomy patients with suspected biliary SO dysfunction. abstract_id: PUBMED:9187879 Endoscopic bile duct stent placement as a predictor of outcome following endoscopic sphincterotomy in patients with suspected sphincter of Oddi dysfunction. Objectives: To determine whether symptomatic improvement following placement of endoscopic stent across the biliary sphincter could predict the longer-term clinical outcome after endoscopic sphincterotomy (ES). Methods: Twenty-three post-cholecystectomy patients with suspected sphincter of Oddi dysfunction underwent, sequentially, sphincter of Oddi manometry, endoscopic stent placement, ES, and follow-up for a further 6-12 months. Results: Eight (35%) patients either did not respond (n = 5), did not tolerate the stent (n = 1) or relapsed during stenting (n = 2). Only the patient who did not tolerate the stent from the outset (12%) improved after ES. Of five patients who responded to stenting and had ES within 8 weeks, only two (40%) remained asymptomatic. In contrast, of 10 patients who were pain-free during 12-14 weeks of stenting, nine (90%) continued to be asymptomatic after ES. All seven patients with an elevated sphincter of Oddi pressure responded to stenting and six benefited from sphincter ablation. Five (31%) of 16 patients who had normal sphincter pressure and had improvement after 12-14 weeks of stenting remained free from pain following ES. 
ES resulted in long-term freedom from pain in 12 of the 23 patients: six of the seven patients with elevated sphincter of Oddi pressure and six of the 16 subjects with normal manometry (P < 0.05). Conclusion: Freedom from symptoms during at least 12 weeks of stenting predicted a favourable outcome after ES, irrespective of sphincter of Oddi pressure. Patients who failed to improve or showed improvement only with short-term stenting were less likely to benefit. abstract_id: PUBMED:16518275 The underestimated role of opiates in patients with suspected sphincter of Oddi dysfunction after cholecystectomy. Aims: Pain recurrence after cholecystectomy is often attributed to sphincter of Oddi dysfunction, whose diagnostic criteria and treatments remain uncertain. We performed a retrospective study to assess the possible precipitating role of opiate ingestion in this setting. Methods: The retrospective study of the files of 147 consecutive patients investigated for post-cholecystectomy syndrome by endoscopic ultrasonography and/or endoscopic retrograde cholangiography yielded 37 cases of suspected biliary-type sphincter of Oddi dysfunction. Results: Thirteen patients (30%) with suspected sphincter of Oddi dysfunction had taken opiate-containing drugs 15 minutes to two hours (median 1 hr) before the onset of pain ("Opiate Group"). When compared with the 23 patients who had not taken opiates ("Non-Opiate Group"), they were significantly younger (47 vs. 60 yrs), had a narrower common bile duct (5.0 vs. 7.7 mm), but had similar biochemical abnormalities and belonged to the same Milwaukee's classes, mainly class II. None of the patients in the "Opiate Group" underwent retrograde cholangiography or endoscopic sphincterotomy, vs. 52% and 39%, respectively, of the patients in the "Non-Opiate Group".
After a mean follow-up of 3.5 years, there were three recurrences of biliary-type pain (1 choledochal stone, and 2 suspected sphincter of Oddi dysfunction) in the "Opiate Group", and 2 (1 choledochal stone, 1 after codeine intake) in the "Non-Opiate Group". Conclusions: Opiate intake is a frequent cause of suspicion of sphincter of Oddi dysfunction after cholecystectomy, especially in young patients with a narrow common bile duct. A careful history taking is essential to avoid unnecessary and potentially harmful procedures. Answer: No, patient and pain characteristics do not predict manometric sphincter of Oddi dysfunction (SOD) in patients with clinically suspected SOD. A study involving 214 patients with suspected SOD type III who underwent endoscopic retrograde cholangiopancreatography (ERCP) and pancreatic sphincter of Oddi manometry (pSOM) found that demographic factors, gallbladder pathology, increased pancreatobiliary enzymes, functional disorders, and pain patterns did not predict abnormal SOM findings. Additionally, anxiety, depression, and poorer coping were more common in patients with normal SOM findings, but these were not significant predictors on multivariate analysis (PUBMED:24472759). Other studies have also explored various methods to diagnose SOD and predict outcomes. For instance, a scintigraphic score combining quantitative and visual criteria for interpretation of hepatobiliary scans showed perfect sensitivity and specificity in a small study, suggesting it could be a noninvasive test to screen for SOD (PUBMED:1597742). However, another study comparing scintigraphy with manometry found that scintigraphy had poor correlation with manometry results, indicating it may not be a reliable alternative (PUBMED:12584215). Furthermore, endoscopic bile duct stent placement was studied as a predictor of outcome following endoscopic sphincterotomy in patients with suspected SOD. 
The study found that symptomatic improvement following stent placement could predict a favorable outcome after sphincterotomy, especially if patients were symptom-free during 12-14 weeks of stenting (PUBMED:9187879). Additionally, the role of opiates in patients with suspected SOD after cholecystectomy was highlighted in a study that found opiate intake to be a frequent cause of suspicion for SOD, particularly in younger patients with a narrow common bile duct. This suggests that a careful patient history is crucial to avoid unnecessary procedures (PUBMED:16518275). In summary, while various diagnostic methods and predictors have been explored, patient and pain characteristics alone do not reliably predict manometric SOD in patients with clinically suspected SOD.
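The diagnostic-accuracy figures quoted in the scintigraphy abstracts (sensitivity, specificity, PPV, NPV) all derive from a 2x2 confusion matrix against the manometry gold standard. A minimal sketch, using hypothetical counts chosen only to be consistent with the lower-bound figures reported in PUBMED:12584215 (the individual cell counts were not published):

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix.
# Counts below are hypothetical: 8 patients with abnormal manometry and
# 21 with normal manometry, split to match the reported lower-bound figures.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return (sensitivity, specificity, PPV, NPV) for a 2x2 table."""
    sensitivity = tp / (tp + fn)   # true positives among the diseased
    specificity = tn / (tn + fp)   # true negatives among the healthy
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = diagnostic_metrics(tp=2, fp=3, fn=6, tn=18)
print(round(sens, 2), round(spec, 2), round(ppv, 2), round(npv, 2))
# sensitivity 0.25, specificity ~0.86, PPV 0.40, NPV 0.75
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of abnormal manometry in the study sample.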
Instruction: Left-molar approach for direct laryngoscopy: is it easy? Abstracts: abstract_id: PUBMED:19234820 Left-molar approach for direct laryngoscopy: is it easy? Purpose: For direct laryngoscopy, we compared midline and left-molar approaches with respect to ease of intubation, using a Macintosh blade. We investigated the relationship between failure of the left-molar approach and preoperative risk factors for difficult intubation. Methods: With local ethics committee approval, 200 consecutive adult, nonpregnant patients were included in the study. The demographic data, body mass index, modified Mallampati score, interincisor gap, and mentohyoid and thyromental distances were measured preoperatively. First, the Macintosh blade was inserted using the midline approach, and then optimal external laryngeal manipulation (OELM) was applied. Second, the blade was inserted using the left-molar approach. The glottic views were assessed according to the Cormack-Lehane classification before and after OELM in both approaches. In cases where tracheal intubation failed with the left-molar approach, the midline approach was applied again and endotracheal intubation took place. Results: The grade I glottic view obtained using the midline approach without OELM did not change in 94.3% of the patients with the left-molar approach without OELM; in addition, the grade II glottic view improved to grade I in 52.8% of the patients with the same technique (P < 0.001). Although the number of patients with a grade I or II glottic view in the left-molar approach was 197, only 37 patients could be intubated using the left-molar approach. In addition, 59.5% of them were intubated at the second attempt with the left-molar approach, while the incidence of a second attempt was 1.2% with the midline approach (P < 0.001). There was no correlation between the preoperative risk factors for difficult intubation and failure of the left-molar approach.
Conclusion: Difficulty in the insertion of the endotracheal tube limits the efficacy of the left-molar approach. It is not possible to predict the failure of intubation with the left-molar approach by considering the preoperative risk factors. abstract_id: PUBMED:18405410 Does left molar approach to laryngoscopy make difficult intubation easier than the conventional midline approach? Background And Objective: It has been reported that the left molar approach of laryngoscopy can make difficult intubation easier. The aim of this study was to investigate whether the left molar approach to laryngoscopy provided a better laryngeal view in cases of unexpected difficult intubation. Methods: Following the approval of the local Ethics Committee and written informed consent from the patients, out of 1386 patients who underwent general anaesthesia for surgery, 20 patients who could be ventilated by face mask but could not be intubated with the conventional midline approach on the first attempt were included in the study. Those 20 patients, who had Grade III-IV laryngeal views on laryngoscopy by the conventional midline approach, were subjected to left molar laryngoscopy, and their laryngeal views were evaluated. The external laryngeal compression was routinely used to improve the laryngeal view. When endotracheal intubation failed by left molar laryngoscopy, we performed the conventional midline approach again. All data were recorded. Results: Of the 20 patients studied, 18 had a Grade III laryngeal view and two had a Grade IV laryngeal view. Eighteen of them had a better laryngeal view with left molar laryngoscopy. Eleven of the 20 patients underwent successful intubation with the left molar laryngoscopy, which provided a significantly better laryngeal view and success rate of tracheal intubation than did the conventional midline approach (P < 0.01 and P < 0.01, respectively).
Conclusions: Left molar laryngoscopy can make unexpected difficult intubation easier and should be attempted in cases of difficult intubation. abstract_id: PUBMED:25584338 Extra-oral approach for removal of ectopic impacted lower third molar: a case report. An extra-oral approach for removal of the lower third molar is uncommon. This case report illustrates an example of removal of a lower third molar by an extra-oral approach, preserving the inferior dental nerve. abstract_id: PUBMED:32538672 Endoscopic Trans-Lateral Molar Approach to Infratemporal Fossa Lesions. Background: To study the endoscopic trans-lateral molar (ETLM) approach to infratemporal fossa (ITF) lesions and analyze the advantages and disadvantages of this method. Methods: Four cases of ITF lesions were analyzed retrospectively. The clinical features, diagnosis and treatments, the operative process, and clinical applications of this surgical approach were discussed. Results: Postoperative pathologies were 2 pleomorphic adenomas, 1 schwannoma, and 1 inflammatory lesion. All patients had self-resolving cheek swelling and pharyngalgia in the short term, but 2 patients had numbness in the long term. There was no infection or bleeding in the postoperative period, and no difficulty in chewing after disease recovery. There was no tumor recurrence during the follow-up period. Conclusion: The ETLM approach is convenient, minimally invasive, and allows complete excision of benign ITF lesions, posterior to the lateral pterygoid muscle and mainly below the level of the hard palate. It is a simple and direct access to the ITF, but it is a narrow access because of the limitations of bones and soft tissues. Appropriate patient selection is mandatory for successful surgery. abstract_id: PUBMED:30944984 Endoscopic anatomical study of the trans-lateral molar approach to the infratemporal fossa. Background: The infratemporal fossa (ITF) is located deep in the skull base.
Recently, the endoscopic transoral approach has enabled maxillofacial surgeons to access the ITF using a less invasive approach compared to the traditional transfacial and endonasal endoscopic approaches. Objective: The present study aims to provide maxillofacial surgeons with new data concerning direct endoscopic measurement and precise anatomical topography features of the endoscopic trans-lateral molar approach to the ITF by comparing the endoscopic and regional anatomy of the ITF. A clinical case receiving the proposed surgical approach is used to determine the feasibility of this technique. Method: The anatomical data were obtained by measuring the bone anatomical landmarks and analyzing the CT imaging data using GE's Advance Windows 4.1 software on 25 subjects (50 sides). Morphological pictures of the regional anatomy and endoscopic anatomy were obtained from 6 (12 sides) adult cadaver heads, and the anatomical features were described. The present study reports the management of one case using the proposed surgical approach. Results: The proposed surgical approach clearly revealed neurovascular, muscular, and surgical landmarks in the ITF. The surgical case supports the minimally invasive treatment approach, which could rapidly access the ITF and completely excise benign tumors. Conclusion: The anatomical studies and surgical case presentation help us understand the spatial relationship of surgical landmarks of the surgical approach to the ITF for the treatment of benign lesions in the deep cranial base area.
Methods: A series of 1,015 adult patients who underwent general anesthesia and tracheal intubation was studied. Laryngoscopy was carried out using a Macintosh no. 3 or 4 standard blade. Three consecutive trials of direct laryngoscopy using the midline and left- and right-molar approaches were carried out under full muscle relaxation with optimal head and neck positioning. The best glottic views were recorded for each approach with and without OELM. Results: Difficult laryngoscopy with a midline approach accounted for 6.5% (66 cases) before OELM and 1.97% (20 cases) after OELM. A left-molar approach with OELM further reduced difficult laryngoscopy to seven cases (P < 0.001 vs. midline approach with OELM); a right-molar approach with OELM reduced difficult laryngoscopy to 18 cases (P = 0.48). Conclusions: The left-molar approach with OELM improves the laryngeal view in patients with difficult laryngoscopy. abstract_id: PUBMED:37491236 Palatal approach for surgical removal of mesioangularly impacted maxillary third molar - a pilot study. Objective: This study aimed to investigate the application of the palatal approach for surgical removal of impacted maxillary third molars (IMTM), and to evaluate its success rate, surgical duration, postoperative outcomes, and incidence of complications. Method: Patients with mesioangular IMTM (Archer Classification Class B) in the non-buccal position relative to the adjacent second molar, which were indicated for surgical removal, were enrolled in this study. The patients were assigned into two groups according to the surgical approach: the buccal or palatal approach. The impacted tooth positions, diagnosis, past dental and medical history, and radiographic examination were recorded pre-operatively. The duration, surgery details, and surgical complications were documented during the surgery. Result: 40 teeth were enrolled in our study. All teeth were removed completely.
The operation time was significantly shorter in the palatal approach group compared to the buccal approach group (13.3 ± 2.8 min vs. 22.3 ± 5.5 min, P < 0.001). The incidence of traumatic ulcers of the lips was significantly higher in the buccal approach group than in the palatal approach group (7/20 vs. 0/20, P = 0.008). Conclusion: It is more efficient to perform surgery with a palatal approach if a Class B mesioangular IMTM is located in the non-buccal aspect of the adjacent second molar. Clinical Trial Registration Number: ChiCTR2000040063. abstract_id: PUBMED:23130026 Direct pulp-capping with calcium enriched mixture in primary molar teeth: a randomized clinical trial. Introduction: The aim of this trial was to compare clinical and radiographic success rates of direct pulp capping (DPC) using a novel biomaterial called Calcium Enriched Mixture (CEM) cement versus Mineral Trioxide Aggregate (MTA) in primary molar teeth. Materials And Methods: In this randomized clinical trial, 42 deciduous molars were treated in 21 patients who had at least two teeth requiring DPC. The enrolled patients were between 5 and 8 years of age. The molar teeth were randomly divided into two experimental groups. Patients and operators were blinded. The teeth were anaesthetized, caries were removed and after pinpoint exposure of the dental pulp, haemostasis was achieved. The exposure points were capped with MTA or CEM cement. All teeth were restored with amalgam. Patients were recalled for the 6-month follow-up. Statistical analysis was carried out using the McNemar test. Results: Thirty-eight teeth were available for follow-up (19 in each group). The radiographic evaluations did not show failure in the experimental groups; however, in clinical examinations one sinus tract was found in the CEM cement group. Clinical success rates in the CEM cement and MTA groups were 94.8% and 100%, respectively. Dentinal bridge formation was not observed in the two experimental groups.
Conclusion: There is no significant difference between treatment outcomes of direct pulp capping with either CEM cement or MTA; therefore, both biomaterials can be used successfully for DPC in primary molar teeth. abstract_id: PUBMED:32089337 Clinical Outcomes after Apical Surgery on the Palatal Root of the Maxillary First Molar Using a Palatal Approach. Introduction: Clinicians are often reluctant to use the palatal approach in apical surgery for the maxillary first molar. This study aimed to investigate the outcomes and complications associated with apical surgery on the palatal root of the maxillary first molar by using a palatal approach. Methods: We searched for patients who underwent apical surgery with a palatal approach on the palatal root of the maxillary first molar between March 2010 and September 2017 by a single operator. A total of 46 teeth from 46 subjects were included, and they were examined at 6 and 12 months after surgery and annually thereafter. To evaluate whether there was nerve damage on the surgical side, a pinprick test and cotton swab test were used with a 0-10 numerical rating scale, and the results were statistically compared by using the Wilcoxon signed-rank test. Results: Of the total 35 recall cases (recall rate, 76%; average follow-up time, 3.5 years), success was considered achieved in 32; 30 cases showed complete healing, and 2 had incomplete healing. Three cases resulted in failure, all occurring within 1 year after surgery. The pinprick test results showed that all the numerical rating scale scores were higher than 7, and the responses on the surgical site were not statistically different from those on the contralateral site (P = .109). All subjects showed normosensitivity to the cotton swab test. Conclusions: Apical surgery on the palatal root of the maxillary first molar using a palatal approach was predictable and successful, and the number of complications resulting from artery and nerve damage was small. 
abstract_id: PUBMED:32621999 Surgical approach of ectopic maxillary third molar avulsion: Systematic review and meta-analysis. Ectopic maxillary third molars (EMTM) are extracted mainly by the Caldwell-Luc technique but also by nasal endoscopy. There is currently no consensus on the treatment of this eruption and its management is heterogeneous and multidisciplinary. Two literature searches were performed with no time restrictions via PubMed. In the first, we used the keywords "ectopic AND third molar" and in the second the keywords "dentigerous cyst AND ectopic third molar". For each included article, epidemiological, symptomatic, radiological and surgical data were recorded. Overall, 33 eligible articles were identified involving 39 cases of EMTM. 79% of patients were symptomatic. 87% of the teeth were associated with a dental cyst. In only 13% of cases was the location of the tooth in the sinus specified in the three planes of the space. Surgery was performed in 77% of patients by the Caldwell-Luc technique, by nasal endoscopy in 10% and by the Le Fort I approach in 3%. The indications for avulsion of EMTM are symptomatic patients or asymptomatic patients with an associated cyst. The intra-sinusal location of the tooth is not a factor in the choice of technique used, which depends rather on the individual skills of the surgeon. Although for a trained operator the Le Fort I osteotomy is an easy procedure, its value in the treatment of EMTM is limited owing to the rare but potentially severe complications involved. Answer: The left-molar approach for direct laryngoscopy has been investigated in several studies to determine its ease and effectiveness compared to the conventional midline approach. The findings suggest that while the left-molar approach can improve the glottic view and make difficult intubations easier in some cases, it is not without challenges.
One study found that although the left-molar approach without optimal external laryngeal manipulation (OELM) improved the grade II glottic view to grade I in 52.8% of patients, only 37 out of 197 patients could be intubated using this approach. Moreover, a significant number of patients required a second attempt at intubation with the left-molar approach, indicating that it may not be as easy as the midline approach (PUBMED:19234820). Another study reported that the left-molar approach provided a significantly better laryngeal view and success rate of tracheal intubation compared to the conventional midline approach in cases of unexpected difficult intubation. Out of 20 patients with Grade III-IV laryngeal views on conventional midline laryngoscopy, 11 were successfully intubated using the left-molar approach (PUBMED:18405410). Similarly, another study highlighted that the left-molar approach with OELM significantly reduced the number of difficult laryngoscopies compared to the midline approach with OELM, suggesting that the left-molar approach can be beneficial in improving the laryngeal view in patients with difficult laryngoscopy (PUBMED:10638901). In summary, while the left-molar approach can improve the laryngeal view and facilitate intubation in certain difficult cases, it may not be universally easy to perform and can be associated with a higher incidence of second attempts at intubation. It is not possible to predict the failure of intubation with the left-molar approach by considering preoperative risk factors (PUBMED:19234820), and its efficacy may be limited by the difficulty in inserting the endotracheal tube. Therefore, the ease of the left-molar approach for direct laryngoscopy may vary depending on the individual patient's anatomy and the clinician's experience with the technique.
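The key comparison in PUBMED:18405410 is paired: the same 20 patients underwent both approaches, so the within-patient discordant counts drive the significance test. A minimal sketch of a continuity-corrected McNemar test, using counts consistent with that abstract (11 patients intubated only with the left-molar approach, 0 only with the midline approach, which had already failed in all 20); the choice of test here is our illustration, not necessarily the analysis the authors performed:

```python
import math

def mcnemar(b, c):
    """Continuity-corrected McNemar chi-square for paired binary outcomes.
    b: successes only with method A; c: successes only with method B."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # survival function of chi-square with 1 df via the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(math.sqrt(chi2) / math.sqrt(2.0))))
    return chi2, p

# 11 patients intubated only by the left-molar approach, 0 only by midline
chi2, p = mcnemar(11, 0)
print(round(chi2, 2), p < 0.01)  # significant, consistent with P < 0.01
```

With only discordant pairs in one direction, an exact binomial test on the 11 discordant pairs would give a similar conclusion.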
Instruction: The relationship between continuity of care and the health behaviors of patients: does having a usual physician make a difference? Abstracts: abstract_id: PUBMED:10386567 The relationship between continuity of care and the health behaviors of patients: does having a usual physician make a difference? Background: Implicit in "any willing provider" and "freedom of choice" legislation is the assumption that ongoing provider relationships lead to better patient outcomes on average. Although previous studies have identified associations of usual source of care with medical utilization, its relationship to patient lifestyle has not been examined. Objective: To determine the effect of having a usual physician on health behaviors. Methods: Data on 3,140 adults from the 1995 Mid-Life in the US study were used to estimate logistic regressions of the effect of having a usual physician on exercise, obesity, vitamin-taking, smoking quits, substance abuse behaviors, preventive medical visits, and respondent assessments of the ability to affect one's own health and risk of heart attacks and cancer. Results: Respondents with a usual physician were 3 times as likely to have had a preventive medical visit during the past year. Among lower-income respondents, those with usual physicians were one-half as likely to report substance abuse behaviors. Instrumenting reduced the magnitude of the former but not the latter effect. No other significant differences were found. Conclusions: Strategies designed to foster regular patient-provider relationships may affect certain health behaviors, such as preventive care visits and substance abuse. Yet in the absence of interventions to improve the effectiveness of these relationships, they are unlikely to be a powerful policy instrument for achieving widespread improvements in patient lifestyle choices. abstract_id: PUBMED:15043375 Provider continuity in family medicine: does it make a difference for total health care costs?
Background: International comparisons of health care systems have shown a relationship at the macro level between a well-structured primary health care plan and lower total health care costs. The objective of this study was to assess whether provider continuity with a family physician is related to lower health care costs using the individual patient as the unit of analysis. Methods: We undertook a study of a stratified sample of patients (age, sex, region, insurance company) for which 2 cohorts were constructed based on the patients' utilization pattern of family medicine (provider continuity or not). Patient utilization patterns were observed for 2 years. The setting was the Belgian health care system. The participants were 4,134 members of the 2 largest health insurance companies in 2 regions (Aalst and Liège). The main outcome measures were the total health care costs of patients with and without provider continuity with a family physician, controlling for variables known to influence health care utilization (need factors, predisposing factors, enabling factors). Results: Bivariate analyses showed that patients who were visiting the same family physician had a lower total cost for medical care. A multivariate linear regression showed that provider continuity with a family physician was one of the most important explanatory variables related to the total health care cost. Conclusions: Provider continuity with a family physician is related to lower total health care costs. This finding brings evidence to the debate on the importance of structured primary health care (with high continuity for family practice) for a cost-effective health policy. abstract_id: PUBMED:11093565 Continuity of care and the physician-patient relationship: the importance of continuity for adult patients with asthma. Background: We assessed the role and importance of continuity of care in predicting the perceptions of the physician-patient relationship held by patients with asthma. 
Methods: We analyzed the 1997 statewide probability survey of adult Kentucky Medicaid recipients. The participants included 1726 respondents with 2 or more visits to a physician's office, clinic, or emergency department in the previous 12 months. Of these, 404 reported having asthma. The respondents used 5-point single-item scales to rate continuity of care, provider communication, and patient influence over treatment. Results: Multivariate linear regression analyses were used to assess the contribution of continuity of care to provider communication and patient influence in the presence of control variables. Those variables included age, sex, education, race, number of visits, general health, health improvement, and life satisfaction. For persons with asthma, continuity of care was the only variable that significantly contributed to the provider communication model (P = .01) and the only variable other than life satisfaction that contributed to the patient influence model (P < .05 for each). For patients who did not have asthma, continuity of care was one of several variables contributing significantly (P < .05) to the provider communication and patient influence models. Conclusions: Particularly for patients with asthma, continuity of care was linked to patient evaluations of their interaction with the physician. Because of this, changes in health care systems that promote discontinuity with individual physicians may be particularly disruptive for patients with chronic diseases. abstract_id: PUBMED:21808749 Effects of continuity of care and patient dispositional factors on the physician-patient relationship. Background: We developed a questionnaire to examine the influence of physician and patient variables on the quality of the physician-patient relationship. Methods: More than 300 family medicine patients completed self-report measures of the physician-patient relationship and variables likely to influence it. 
Results: The quality of relationship was related to continuity of physician care (having a primary physician, duration of that relationship, and frequency of visits) and to patient dispositional variables (neuroticism, positive and negative affectivity) but not to demographic variables. The regression model included having a primary physician, duration of relationship with that physician, and positive affectivity. Relationship quality was, in turn, associated with outcomes (adherence to care, treatment response, satisfaction with care, and commitment to physician). Conclusions: The quality of physician-patient relationship is influenced by physician continuity and patient dispositional variables. Better understanding of these may contribute to the therapeutic potential of this important relationship. abstract_id: PUBMED:17431698 The effects of primary care physician visit continuity on patients' experiences with care. Background: Visit continuity is important to patients and valued by physicians. However, it is virtually impossible for primary care physicians (PCPs) to provide care during every paneled patient visit. It remains unclear whether PCP visit discontinuity can be planned in a way that is least disruptive to patients' experiences with care. Objective: This study aims to clarify whether visit continuity affects patients' experiences with primary care equally for all patients. Design: From January 2004 through March 2005, a large multispecialty practice in Massachusetts administered the Ambulatory Care Experience Survey (ACES) monthly to a random sample of patients visiting each of 145 PCPs. The analytic sample includes 14,835 patients with 2 or more primary care visits over the 6 months before being surveyed. Usual Provider Continuity (UPC), an administratively based measure of PCP visit continuity, was calculated for all respondents. 
Multilevel regression models that accounted for the clustering of patients within physicians modeled the relationship between UPC and each ACES measure. Interaction effects between UPC and gender, education, self-rated health, and PCP-patient relationship duration were tested. Results: Physician-patient interaction quality, including physician communication, knowledge of the patient, health promotion support, and organizational access were more strongly influenced by visit continuity among respondents in early stages of a PCP-patient relationship (P < 0.01) and with worse self-rated health (P < 0.01). Conclusions: Improvements in physician-patient relationship quality can be achieved by targeting visit continuity improvement efforts to patients who benefit most, particularly those in early stages of a PCP-patient relationship and/or perceive their health as poor. abstract_id: PUBMED:7804470 The effects of psychological research on the continuity of the patient-physician relationship in a primary care health maintenance organization setting. This project was designed to explore the impact of sensitive psychological research on the continuity of the relationship between the patient and the physician researcher in a primary care health maintenance organization setting. Subjects were 134 women who participated in a sensitive psychological research project that was conducted by a female family physician researcher. Subjects were matched by height and weight, length of the study interval, and type of medical service with 127 female controls who were seen by a male family physician during the same period in the same health maintenance facility. Participants were compared for patient-initiated changes in primary care physicians during equalized follow-up periods. No significant differences between subjects and controls were found. 
These results suggest that sensitive psychological research does not appear to deter patients who are research participants from continuing their professional relationship with the physician researcher. abstract_id: PUBMED:36596694 Relational Continuity, Physician Payment, and Team-Based Primary Care in the Canadian Health Care System. Purpose: Continuity is a core component of primary care and known to differ by patient characteristics. It is unclear how primary care physician payment and organization are associated with continuity. Methods: We analyzed administrative data from 7,110,036 individuals aged 16+ in Ontario, Canada who were enrolled to a physician and made at least 2 visits between October 1, 2017 and September 30, 2019. Continuity with physician and practice group was quantified using the usual provider of care index. We used log-binomial regression to assess the relationship between enrollment model and continuity adjusting for patient characteristics. Results: Mean physician and group continuity were 67.3% and 73.8%, respectively, for patients enrolled in enhanced fee-for-service, 70.7% and 76.2% for nonteam capitation, and 70.6% and 78.7% for team-based capitation. These differences were attenuated in regression models for physician-level continuity and group-level continuity. Older age was the most notable factor associated with continuity. Compared with those 16 to 34, those 80 and older had 1.45 times higher continuity with their physician. Conclusion: Our results suggest that continuity does not differ substantially by physician payment or organizational model among primary care patients who are formally enrolled with a physician in a setting with universal health insurance. abstract_id: PUBMED:11199905 Continuity of care and trust in one's physician: evidence from primary care in the United States and the United Kingdom. 
Background And Objectives: Patients' trust in their physician to act in their best interest contributes to the effectiveness of medical care and may be related to the structure of the health care system. This study explored the relationship between continuity of care and trust in one's physician, particularly in terms of differences between the United States and the United Kingdom (UK). Methods: We conducted a cross-sectional survey of adult patients (n = 418 in the United States and n = 650 in the UK) who presented in outpatient primary care settings in the United States (Charleston, SC, and Lexington, Ky) and in the UK (Leicester and Exeter). Results: A high percentage of both groups of patients reported having a usual place of care and doctor. A total of 69.8% of UK patients and 8.0% of US patients have had their regular physician for ≥ 6 years. US patients (92.4%) are more likely than UK patients (70.8%) to value continuity with a doctor. Both groups had high levels of trust in their regular doctor. Trust was related to one continuity measure (length of time for the relationship) but not to another (usual provider continuity index more than 1 year). In a multivariate model, country of residence had no independent relationship with trust, but continuity of care was significantly related. Conclusions: Higher continuity is associated with a higher level of trust between a patient and a physician. Efforts to improve the relationship between patients and physicians may improve the quality and outcomes of care. abstract_id: PUBMED:33761874 Influence of organization and demographic characteristics of primary care practices on continuity of care: analysis of a retrospective cohort from 287 primary care practices covering about 6 million people in Catalonia. Background: There is evidence that an ongoing patient-physician relationship is associated with improved health outcomes and more efficient health systems.
The main objective of this study is to describe the continuity of care in primary healthcare in Catalonia (Spain) and to analyze whether the organization of primary care practices (PCP) or their patients' sociodemographic characteristics play a role in continuity of care. Methods: Four indices were used to measure continuity of care: Usual Provider Index (UPC), Modified Modified Continuity Index (MMCI), Continuity of Care Index (COC), and Sequential Continuity Index (SECON). The study was conducted on 287 PCP of the Catalan Institute of Health (Institut Català de la Salut-ICS). Each continuity of care index was calculated at the patient level (3.2 million patients and 35.5 million visits) and then aggregated at the PCP level. We adjusted linear regression models for each continuity index studied, considering the result of the index as the dependent variable and demographic and organizational characteristics of the PCP as explanatory variables. Pearson correlation tests were used to compare the four continuity of care indices. Results: Indices' results were: UPC: 70.5%; MMCI: 73%; COC: 53.7%; SECON: 60.5%. The continuity of care indices had the highest bivariate correlation with the percentage of appointments booked with an assigned health provider (VISUBA variable: the lower the value, the higher the visits without an assigned health provider, and thus an organization favoring immediate consultation). Its R² ranged between 56 and 63%, depending on the index. The multivariate model that best explained the variability of the continuity of care indices (from 49 to 56%) included the variables VISUBA and rurality, with a direct relationship, while the variables primary care physician leave days and training practices showed an inverse relationship. Conclusion: Study results suggest that an organization of primary care favoring immediate consultation is related to a lower continuity of patient care.
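The visit-based indices named in this abstract have standard definitions: UPC is the share of a patient's visits made to the most-seen provider, the (Bice-Boxerman) COC is (Σnⱼ² − N) / (N(N − 1)) over per-provider visit counts nⱼ, and SECON is the fraction of consecutive visit pairs seen by the same provider. A minimal sketch, using an invented visit sequence rather than the study's data:

```python
from collections import Counter

def upc(visits):
    """Usual Provider of Care: share of visits made to the most-seen provider."""
    if not visits:
        raise ValueError("need at least one visit")
    return max(Counter(visits).values()) / len(visits)

def coc(visits):
    """Bice-Boxerman Continuity of Care: (sum of n_j^2 - N) / (N * (N - 1))."""
    n = len(visits)
    if n < 2:
        raise ValueError("COC needs at least two visits")
    return (sum(c * c for c in Counter(visits).values()) - n) / (n * (n - 1))

def secon(visits):
    """Sequential continuity: fraction of consecutive visit pairs with the same provider."""
    if len(visits) < 2:
        raise ValueError("SECON needs at least two visits")
    same = sum(a == b for a, b in zip(visits, visits[1:]))
    return same / (len(visits) - 1)

# Hypothetical one-patient visit sequence: three visits to provider A, one to B.
visits = ["A", "A", "A", "B"]
print(upc(visits))             # 0.75
print(round(coc(visits), 3))   # 0.5
print(round(secon(visits), 3)) # 0.667
```

Aggregating these patient-level scores to the practice level, as the study does, would then be a simple mean over patients.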
abstract_id: PUBMED:15506580 Patient-physician shared experiences and value patients place on continuity of care. Purpose: We undertook a study to examine the impact of experiences shared between patient and physician and the value patients place on continuity of care. Methods: Data on 4,454 patients collected in The Direct Observation of Primary Care (DOPC) study conducted between October 1994 and August 1995 were analyzed to assess the value patients place on continuity, length of patient-physician relationship, and experiences shared between patient and physician. Results: A significant interaction was found between duration of relationship and experiences shared between patient and physician (P = .03). For all lengths of relationship with the physician, the value that patients place on continuity increased when patients indicated experiences shared with the physician. For patients who did not report experiences shared with the physician, the longer the relationship, the greater the value placed on continuity. Conclusions: The results of this study point to the importance of the experiences shared between patients and physicians and the value that patients place on continuity with their regular physician. Answer: The relationship between continuity of care and health behaviors of patients has been explored in various studies, and evidence suggests that having a usual physician can indeed make a difference in certain health behaviors and outcomes. One study found that respondents with a usual physician were three times as likely to have had a preventive medical visit during the past year, and among lower-income respondents, those with usual physicians were half as likely to report substance abuse behaviors. However, no other significant differences were found in behaviors such as exercise, obesity, vitamin-taking, and smoking quits (PUBMED:10386567).
Another study indicated that provider continuity with a family physician was related to lower total health care costs, suggesting that a regular patient-provider relationship may contribute to more cost-effective health care utilization (PUBMED:15043375). For adult patients with asthma, continuity of care was the only variable that significantly contributed to the provider communication model and was also significantly related to patient influence over treatment, highlighting the importance of continuity for patients with chronic diseases (PUBMED:11093565). The quality of the physician-patient relationship was found to be influenced by continuity of physician care, including having a primary physician and the duration of that relationship, as well as patient dispositional variables like neuroticism and positive affectivity. This quality of relationship was associated with outcomes such as adherence to care, treatment response, satisfaction with care, and commitment to the physician (PUBMED:21808749). In terms of patients' experiences with care, visit continuity was more strongly influential for respondents in the early stages of a PCP-patient relationship and those with worse self-rated health (PUBMED:17431698). Overall, these studies suggest that continuity of care, particularly having a usual physician, can have a positive impact on preventive care visits, substance abuse behaviors, health care costs, the quality of the physician-patient relationship, and patient experiences with care. However, the effectiveness of continuity of care as a policy instrument for achieving widespread improvements in patient lifestyle choices may be limited without additional interventions to improve the effectiveness of patient-provider relationships (PUBMED:10386567).
Instruction: Does psychological characteristic influence physicians' communication styles? Abstracts: abstract_id: PUBMED:16052317 Does psychological characteristic influence physicians' communication styles? Impact of physicians' locus of control on interviews with a cancer patient and a relative. Context: Physicians' psychological characteristics may influence their communication styles and may thus interfere with patient-centred communication. Objective: Our aim was to test the hypothesis that, in interviews with a cancer patient and a relative, physicians with an "external" locus of control (LOC; who believe that life outcomes are controlled by external forces such as luck, fate or others) have a communication style different from that of physicians with an "internal" LOC (who believe that life outcomes are controlled by their own characteristics or actions). Design, Setting, Participants And Intervention: Eighty-one voluntary physicians practising in the field of oncology were recorded while performing an actual and a simulated interview with a cancer patient and a relative. Main Outcome Measures: Physicians' communication skills were assessed using the Cancer Research Campaign Workshop Evaluation Manual. Physicians' LOC was assessed using the Rotter I-E scale. The communication skills of the upper and lower quartiles of physicians in respect of their scores on this scale were compared using Student's t test. Results: In actual interviews, physicians with an "external" LOC talked more to the relative (P=0.017) and used more utterances with an assessment function (P=0.010) than physicians with an "internal" LOC. In simulated interviews, physicians with an "external" LOC used fewer utterances that give premature information (P=0.031) and used more utterances with a supportive function, such as empathy and reassurance (P=0.029), than physicians with an "internal" LOC.
Conclusion: These results provide evidence that physicians' LOC can influence their communication styles. Physicians' awareness of this influence constitutes a step towards tailoring their communication skills to every patient's and relative's concerns and needs, and thus towards patient-centred communication. abstract_id: PUBMED:33854302 The Psychological Effects of Physicians' Communication Skills on COVID-19 Patients. Purpose: The COVID-19 pandemic has affected physician-patient communication (PPC) in multiple ways. This study aims to report on the impact of physician-patient communication (PPC) skills on COVID-19 patients' psychology in Jordan. Sample And Methods: In this study, two questionnaires were designed. The first questionnaire targeted physicians, with 72 responses, and the second questionnaire targeted patients, with 248 responses. Both questionnaires contained common sections covering nine aspects of communication, such as empathy, honesty, optimism, simplicity, and deliberateness. Results: This study found that the psychological effect of physicians' positive communication skills on COVID-19 patients is significant. There was close statistical agreement between physicians' and patients' questionnaire responses that the physicians' communication skills have positively affected the patients' psychological status; all patients' responses confirmed this finding. Conclusion: Based on this study's findings, appropriate and continuous training will advance physicians' communication skills in the form of exercises that could be as simple as class- or lecture-based activities, or using technology-based learning. Using a protocol or handbook to guide such communication is another essential strategy to enhance physician-patient communication (PPC). The study recommends that physicians must be aware that PPC skills required may vary depending on whether they are dealing with a pandemic or non-pandemic situation.
Generally, appropriate or positive communication skills are considered one of the main factors affecting patients' psychological responses to their diagnosis. abstract_id: PUBMED:26502916 Physicians' communication styles as correlates of elderly cancer patients' satisfaction with their doctors. Physician-patient communication style is of utmost importance to patients with life-threatening diseases. This study identifies the physician communication style most desired by older cancer patients and examines which of the studied communication styles significantly explains cancer patients' satisfaction with family physicians. A total of 200 older cancer patients, with average age of 75 years, participated in the study, yielding a response rate of 42%. Prospective respondents were randomly selected from the list of cancer patients in the central geographical district of Israel's second largest Health Maintenance Organization fund. Respondents rated their satisfaction with physicians as relatively high. All three communication styles studied were found to be associated with patient's satisfaction. Associations were found between self-rated health, time since the diagnosis of cancer and satisfaction. Women were less satisfied than men with their physicians. Two variables emerged as significant predictors of satisfaction: the physician's caring communication style and patient's gender. Intervention programmes should focus on elevating physicians' awareness of the importance of their communication with cancer patients in general, and of the caring communication style in particular. abstract_id: PUBMED:32158296 The Role of Attachment Styles and Communication Skills in Predicting Nursing and Medical Students' Perception of Elder Abuse. Background: The knowledge and abilities of nurses and physicians in perceiving and dealing with abuse are necessary for the improvement of older people's health.
Therefore, the aim of this study was to investigate the role of attachment styles and communication skills in predicting nursing and medical students' perception of elder abuse in Yazd, Iran. Methods: The present study had a descriptive cross-sectional design and used multistage sampling to recruit 397 nursing and medical students at Shahid Sadoughi University of Medical Sciences in Yazd, Iran. The Elderly Caregiving Questionnaire (ECQ), Adults' Attachment Styles Inventory (AAI), and the revised version of the Communication Skills Questionnaire were used for data collection. Data were analyzed using SPSS version 22. Results: The findings revealed that the participants had an appropriate understanding of elder abuse. The highest level of perception of elder abuse was in the dimension of psychological abuse (24.5 ± 5.22) and the lowest level was related to the dimension of physical abuse (21.7 ± 4.74). Additionally, a positive significant relationship was found between the score of students' perception of abuse and secure and avoidant attachment styles, as well as students' communication skills (p<0.01). According to regression analysis, these predictors explained 8.6% of the observed variance in the students' perception of elder abuse. Conclusion: These findings indicate that attachment styles influenced the individual's perception of elder abuse. Therefore, in future planning and research, this should be given more attention.
Two experimental studies demonstrate that the attachment styles of both senders and receivers can influence the communicative styles of the sender, the emotions that both senders and receivers feel or attribute to their interlocutor, and the reactions of the receivers. In Study 1, pairs of participants played a bogus computer game, ostensibly to test their abilities. Subsequently, "the spokesperson" was given the task to communicate to "the receiver" a bogus low score of the other and a high score of oneself. Finally, all participants completed an adult Attachment Style Questionnaire (ASQ). A content analysis of the verbal messages of the spokespersons showed two main communication styles: frankness and mitigation. The results suggest that the spokespersons' attachment style influences these communication styles. Using a similar procedure, Study 2 showed that spokespersons with a high avoidant attachment more frequently used frankness when communicating low scores to the receivers. Furthermore, the emotions and impressions reported by anxious and avoidant spokespersons and receivers, respectively, confirm the negative model of the self or the other, typical, respectively, in anxious and avoidant attachment. The detection of communicative strategies stemming from different attachment styles might be of use in user modeling and the planning of personalized systems.
Parenting styles were assessed using the Parental Bonding Instrument, and psychological resilience was measured using the Connor-Davidson Resilience Scale. Error monitoring was investigated in the Flanker task using event-related potentials (ERPs), and two error-related components of ERPs were measured: error-related negativity (ERN) and error positivity. Mediation analyses showed that the ERN partially mediated the relationship between parenting styles and psychological resilience. Specifically, a higher level of self-reported parental overprotection was related to larger ERN amplitude, which in turn was associated with lower psychological resilience. Additionally, a higher level of self-reported parental allowance of autonomy was related to lower ERN amplitude, which in turn was linked to higher psychological resilience. These results suggest that shaping children's sensitivity in early automatic error detection is one possible mechanism through which parental styles influence their psychological resilience. abstract_id: PUBMED:34940107 Communication Styles and Attention Performance in Primary School Children. Communication styles are the three communication patterns that result from merging the verbal elements of communication, the non-verbal elements and the paraverbal elements. The objective of this study was to test what effect different communication styles have on attention performance in primary school children. We administered the assertive behavior scale for children (CABS), the symbol digit modalities test (SDMT) and the d2 attention test to a sample of 77 primary school students. A statistically significant and proportional correlation was found between the assertive communication style and the total number of correct answers of the SDMT. We also found positive correlations between the assertive communication style and d2 attention test performance.
Children with an assertive communication style have better attention performance than children with passive or aggressive communication styles. abstract_id: PUBMED:30924516 Interpersonal Communication Competence and Conflict Handling Styles of Nurses in Intensive Care Units. Background: Nurses in intensive care units (ICUs) must be capable of employing appropriate conflict handling styles in order to communicate with medical teams, patients, and patient family members effectively. There is a current lack of research on the interpersonal communication competence and conflict handling styles of ICU nursing staff in Taiwan. Purpose: This study aimed to: (1) explore the interpersonal communication competence and conflict handling styles of ICU nurses and related factors and (2) examine the relationship between the interpersonal communication competence and conflict handling styles of ICU nurses. Methods: A cross-sectional correlational research approach was adopted and convenience sampling was used to recruit 351 ICU nurses from three hospitals in southern Taiwan. Data were collected using a demographic questionnaire, interpersonal communication competence scale, and conflict handling styles scale. Results: (1) ICU nurses demonstrated a moderate or higher level of overall interpersonal communication competence, with "integrating" identified as the most commonly used conflict handling style. (2) Higher educational attainment and job position were both associated with better interpersonal communication competence. (3) The adoption of "compromising" and "avoiding" styles was significantly affected by age; the adoption of "obliging", "compromising", "avoiding", and "integrating" styles was significantly affected by education level; the adoption of "obliging", "compromising", and "integrating" styles was significantly affected by job position; and the adoption of "dominating", "compromising", and "integrating" styles was significantly affected by seniority.
(4) Interpersonal communication competence correlated significantly and positively with the adoption of "dominating", "obliging", "compromising", and "integrating" styles. Conclusions / Implications For Practice: Future study of the views of ICU nurses toward their own styles in handling interpersonal conflicts is needed. Furthermore, holding training courses annually to familiarize ICU nurses with the advantages and disadvantages of interpersonal conflict handling styles is recommended. abstract_id: PUBMED:38459474 The relationship between Empathy and listening styles is complex: implications for doctors in training. Background: Effective communication is the key to a successful relationship between doctors and their patients. Empathy facilitates effective communication, but physicians vary in their ability to empathize with patients. Listening styles are a potential source of this difference. We aimed to assess empathy and listening styles among medical students and whether students with certain listening styles are more empathetic. Methods: In this cross-sectional study, 97 medical students completed the Jefferson scale of Empathy (JSE) and the revised version of the Listening Styles Profile (LSP-R). The relationship between empathy and listening styles was assessed by comparing JSE scores across different listening styles using ANOVA in SPSS software. A p-value less than 0.05 was considered significant. Results: Overall, the students showed a mean empathy score of 103 ± 14 on JSE. Empathy scores were lower among clinical students compared to preclinical students. Most of the medical students preferred the analytical listening style. The proportion of students who preferred the relational listening style was lower among clinical students compared to preclinical students. There was no significant relationship between any of the listening styles and empathy.
Conclusion: Our results do not support an association between any particular listening style and medical students' empathic ability. We propose that students who have better empathetic skills might shift between listening styles flexibly rather than sticking to a specific listening style. abstract_id: PUBMED:34445939 Breaking bad news to cancer patients and their families: Attitudes toward death among Turkish physicians and their communication styles. To investigate how Turkish oncologists' attitudes toward death influence their emotional states, outlooks, and communication styles when breaking bad news to cancer patients and/or their families. Cross-sectional study using self-completed questionnaires. The study sample consisted of 35 physicians working at an oncology department. Physicians completed a quantitative one-time survey developed by the authors and the Death Attitude Profile-Revised (DAP-R). Thirty-one physicians completed the survey and the DAP-R. A mean of 13.39 ± 8.82 minutes was allocated for breaking bad news; 87.1% of the participants avoided using the word "cancer" and 42% avoided using the word "death". The attitudes characterized by "death avoidance" and "fear of death" were found to be related to the emotional difficulty experienced by the physicians, and were also associated with less eye contact with the patient, and less attention paid to the language used while breaking bad news. It is important for physicians to be aware of how their attitudes toward death affect their communication with patients when delivering bad news. They should be provided with in-service professional education and therapeutic support. Answer: Yes, psychological characteristics do influence physicians' communication styles. Studies have shown that physicians' locus of control (LOC), a psychological characteristic, can affect their communication style during interviews with cancer patients and their relatives.
Physicians with an "external" LOC, who believe that life outcomes are controlled by external forces, tend to talk more to the relative, use more utterances with an assessment function, and in simulated interviews, use less premature information and more supportive utterances like empathy and reassurance, compared to physicians with an "internal" LOC, who believe that life outcomes are controlled by their own actions (PUBMED:16052317). Furthermore, physicians' communication skills have been found to significantly impact COVID-19 patients' psychological status, with positive communication skills being crucial for patients' psychological responses to their diagnosis (PUBMED:33854302). The communication style of physicians also correlates with elderly cancer patients' satisfaction, with a caring communication style being a significant predictor of satisfaction (PUBMED:26502916). Attachment styles, another psychological characteristic, have been shown to influence communication strategies, emotions, and reactions in the context of communicating displeasing truths (PUBMED:32581936). Additionally, parenting styles, which shape psychological resilience, have been linked to error monitoring, a process that is influenced by how individuals respond to self-inflicted errors (PUBMED:37224937). Communication styles also affect attention performance in children, with assertive communication styles correlating with better attention performance (PUBMED:34940107). In the context of nursing, interpersonal communication competence and conflict handling styles are important for effective communication in intensive care units, and these are influenced by factors such as educational attainment and job position (PUBMED:30924516). Lastly, empathy, a key component of effective communication between doctors and patients, does not appear to be associated with any particular listening style among medical students. 
Instead, students with better empathetic skills may flexibly shift between listening styles (PUBMED:38459474). Turkish physicians' attitudes toward death have also been found to influence their emotional states and communication styles when breaking bad news to cancer patients and their families (PUBMED:34445939).
Instruction: Can the short index of problems (SIP) be improved? Abstracts: abstract_id: PUBMED:16047535 Can the short index of problems (SIP) be improved? Validity and reliability of the three-month SIP in an emergency department sample. Objective: Although the Short Index of Problems (SIP) is often used, little is known about the psychometric properties of the SIP in special populations. The present study seeks to determine the following: (1) whether it is possible to substitute items to enhance the psychometric properties of the SIP and (2) whether the SIP, or improved scale, is as sensitive as the Drinker Inventory of Consequences (DrInC) to assess intervention effectiveness. Method: The sample consisted of 404 injured patients who were treated in the Emergency Department (ED) of a major teaching hospital that serves southern New England. Three approaches were used to guide development of the 3-month SIP-R, the potential alternative to the SIP. Cronbach's alpha assessed intrascale reliability; hierarchical multiple regression assessed construct validity; performance of the scales assessing intervention change were compared to the total 3-month DrInC as a function of intervention using analysis of covariance (ANCOVA). Results: There was no evidence that changing the current SIP items will significantly improve performance. The 3-month SIP performed as well as the 3-month DrInC-2R in predicting 12-month DrInC scores and in determining intervention change at 12 months. Of the 45 DrInC items, 31 also predicted a difference across intervention groups. Conclusions: These results suggest there is no advantage to changing the current SIP items. The 3-month SIP is a psychometrically sound measure for assessing consequences of alcohol consumption in an ED sample and is almost as sensitive to intervention change as the full DrInC. 
abstract_id: PUBMED:19261243 Reliability and validity of the alcohol short index of problems and a newly constructed drug short index of problems. Objective: This study evaluated the psychometric properties of the 15-item alcohol Short Index of Problems (SIP) instrument and those of a newly constructed 15-item drug Short Index of Problems (SIP-D) instrument in 277 newly entered substance-abuse patients. Method: The SIP is derived from the longer, 50-item Drinker Inventory of Consequences (DrInC), which was designed to assess adverse consequences of alcohol use. The SIP-D was constructed by substituting the term "drug use" for the term "drinking" in each SIP item. A 3-month recall interval was employed. Results: Factor analyses of each of the instruments revealed similar solutions, with only one main factor accounting for the majority of variance. Nonparametric item response theory methods produced the same finding. Internal consistency reliability estimates for the SIP and SIP-D total scores were .98 and .97, respectively. Concurrent validity was demonstrated by examining the correlations of the total scores for each of the instruments with the recent summary indexes of the newly revised Addiction Severity Index (ASI-Version 6): alcohol, drug, medical, economic, legal, family/social, and psychiatric problems. Conclusions: This study is the first to confirm the psychometric validity of the SIP when used as an independent instrument unembedded within the DrInC. The study also supports the use of the SIP-D as a brief measure of adverse consequences of drug use. The findings strongly support the unidimensional structure of both measures. abstract_id: PUBMED:14506404 Psychometric properties of the short index of problems as a measure of recent alcohol-related problems. Background: The Drinker Inventory of Consequences (DrInC) measures overall consequences of drinking and yields five subscale scores. 
A short form of the DrInC, the Short Index of Problems (SIP), was developed for use when time does not permit completion of the DrInC. This study investigates the psychometric properties of the SIP. Methods: The study sample consisted of 153 problem drinkers who were participants in a placebo-controlled trial of naltrexone and brief counseling. Results: The SIP showed good internal consistency, good concurrent validity, and adequate stability. Four of the five SIP subscales contributed unique variance beyond general drinking consequences. Conclusions: The SIP is useful for measuring drinking consequences in a sample of problem drinkers, and the subscale scores can be interpreted clinically. Further research on the SIP in other populations is warranted. abstract_id: PUBMED:29643843 SIPSim: A Modeling Toolkit to Predict Accuracy and Aid Design of DNA-SIP Experiments. DNA Stable isotope probing (DNA-SIP) is a powerful method that links identity to function within microbial communities. The combination of DNA-SIP with multiplexed high throughput DNA sequencing enables simultaneous mapping of in situ assimilation dynamics for thousands of microbial taxonomic units. Hence, high throughput sequencing enabled SIP has enormous potential to reveal patterns of carbon and nitrogen exchange within microbial food webs. There are several different methods for analyzing DNA-SIP data and despite the power of SIP experiments, it remains difficult to comprehensively evaluate method accuracy across a wide range of experimental parameters. We have developed a toolset (SIPSim) that simulates DNA-SIP data, and we use this toolset to systematically evaluate different methods for analyzing DNA-SIP data. 
Specifically, we employ SIPSim to evaluate the effects that key experimental parameters (e.g., level of isotopic enrichment, number of labeled taxa, relative abundance of labeled taxa, community richness, community evenness, and beta-diversity) have on the specificity, sensitivity, and balanced accuracy (defined as the product of specificity and sensitivity) of DNA-SIP analyses. Furthermore, SIPSim can predict analytical accuracy and power as a function of experimental design and community characteristics, and thus should be of great use in the design and interpretation of DNA-SIP experiments. abstract_id: PUBMED:26137007 Role of the CacyBP/SIP protein in gastric cancer. Various reports indicate that calcyclin binding protein/Siah-1-interacting protein (CacyBP/SIP) is an important protein in tumorigenesis, but whether CacyBP/SIP promotes or suppresses cancer may depend on the cell type. In order to investigate whether CacyBP/SIP is significant in gastric cancerous tumorigenesis, the present study used immunohistochemistry to analyze 181 gastric cancer tissue samples, as well as 181 healthy tissue samples from the same gastric cancer patients. The immunohistochemical results were compared against patient data and pathological analysis of the tissue slices, including gender, age, degree of tumor differentiation and tumor, node, metastasis (TNM) stage. In addition, the level of CacyBP/SIP expression was detected in three frozen tissue samples of gastric adenocarcinoma using western blot analysis. Of the 181 cases analyzed in the present study, 80 cases were identified as non-metastatic gastric cancer and 101 cases were identified as gastric cancer that had metastasized to the lymph nodes. Tissue biopsies from the two sets of patients were examined using immunohistochemistry to identify the level of CacyBP/SIP expression in metastatic and primary gastric cancer tissues. Statistical analyses were performed on all data. 
The immunohistochemical analysis revealed that CacyBP/SIP was expressed in 31% (56/181) of gastric adenocarcinoma tissue samples and 7% (12/181) of adjacent non-cancerous gastric tissues (P<0.05). Furthermore, the expression levels of CacyBP/SIP were higher in cancerous tissue compared with the adjacent non-cancerous gastric tissue using western blotting. No association was identified between CacyBP/SIP expression and patient age (P=0.975), gender (P=0.185), degree of tumor differentiation (P=0.076) or TNM stage (P=0.979). Among the 101 patients with metastatic gastric cancer, CacyBP/SIP was expressed at primary sites in 31% (31/101) of cases and at metastatic sites in 26% (26/101) of cases (P=0.434). However, among the 80 patients with non-metastatic gastric cancer, CacyBP/SIP was expressed at the tumor site in 34% (27/80) of cases, which was not significantly different from the 31% (25/80) of cases in the metastatic group (P=0.662). These findings indicate that CacyBP/SIP expression is not a marker of gastric cancer or metastatic gastric cancer, nor does it appear to correlate with the clinicopathological features of gastric cancer. abstract_id: PUBMED:18058418 Assessing negative consequences in patients with substance use and bipolar disorders: psychometric properties of the short inventory of problems (SIP). The Short Inventory of Problems (SIP), a questionnaire that measures negative consequences of alcohol use, has not been validated among substance users with co-occurring psychiatric illness. This study examined the psychometric properties of the SIP in 57 outpatients diagnosed with substance use disorder and bipolar disorder. We modified the items to assess drugs in addition to alcohol and, further, we added corresponding items to assess consequences of bipolar disorder. This modified version of the SIP was psychometrically sound, and may be useful in patients with both disorders. 
A greater understanding of perceived negative consequences may enhance outcome assessment in dually diagnosed populations. abstract_id: PUBMED:28080093 Psychometric properties of the Short Inventory of Problems (SIP) with adjudicated DUI intervention participants. We used responses of two large samples of court-ordered participants from a statewide alcohol/driving safety program to investigate factor structure, score reliability, and criterion-related validity of the Short Inventory of Problems (SIP). Exploratory and confirmatory factor analyses, using both item-level and subscore-level data, support a one-factor structure for the SIP. Internal consistency score reliability estimates were consistent across samples and high enough to warrant use for making decisions about individuals. Item response theory model calibration of the scale, using a two-parameter logistic model, yielded consistent estimates of location and discrimination (slope) across samples. Estimated scale scores correlated moderately with an independent indicator of alcohol problems and poorly with an indicator of risky driving behavior, lending evidence of convergent and discriminant validity. We judge the SIP as adequately described by a single factor, that the joint person-item scale is coherent, and scores behave consistently across samples. (PsycINFO Database Record abstract_id: PUBMED:37067716 Air quality prediction for Chengdu based on long short-term memory neural network with improved jellyfish search optimizer. Air quality prediction plays an important role in preventing air pollution and improving living environment. For this prediction, many indicators can be employed to reflect the air quality, among which air quality index (AQI) is the most commonly used. However, existing methods are relatively simple and the corresponding prediction accuracy needs to be improved. 
Particularly, the prediction accuracy is affected by the parameter selection of methods, and the corresponding optimization problems are usually non-convex and multi-modal. Therefore, based on a long short-term memory (LSTM) neural network with an improved jellyfish search optimizer (IJSO), a novel hybrid model denoted IJSO-LSTM is proposed to predict AQI for Chengdu. In order to evaluate the optimizing ability of IJSO, other variants of the jellyfish search optimizer as well as other state-of-the-art meta-heuristic algorithms are applied to optimize the hyperparameters of the LSTM neural network for comparison, and the results confirm that IJSO is more suitable for optimizing the LSTM neural network. In addition, compared with other well-known models, the results demonstrate that IJSO-LSTM has higher prediction accuracy, with root-mean-square error, mean absolute error, and mean absolute percentage error controlled below 4, 3, and 4%, respectively. abstract_id: PUBMED:37671037 Advances and perspectives of using stable isotope probing (SIP)-based technologies in contaminant biodegradation. Stable isotope probing (SIP) is a powerful tool to study microbial community structure and function in both natural and engineered environments. Coupled with advanced genomics and other techniques, SIP studies have generated substantial information allowing researchers to draw a clearer picture of what is occurring in complex microbial ecosystems. This review provides an overview of the advances of SIP-based technologies over time, summarizes the status of SIP applications to contaminant biodegradation, and provides critical perspectives on ecological interactions within the community and on important factors (controllable and non-controllable) to be considered in SIP experimental designs and data interpretation. Current trends and perspectives of adapting SIP techniques for environmental applications are also discussed.
abstract_id: PUBMED:24148139 Construct validity of the short inventory of problems among Spanish speaking Hispanics. Objective: Research on ethnic health disparities requires the use of psychometrically sound instruments that are appropriate when applied to ethnically diverse populations. The Short Inventory of Problems (SIP) assesses alcohol-related consequences and is often used as a measure to evaluate intervention effectiveness in alcohol research; however, whether the psychometric properties of this instrument are comparable across language and ethnicity remains unclear. Method: Multi-group confirmatory factor analysis (MGCFA) was used to test for the invariance of the measurement structure of the SIP across White Non-Hispanic English speaking (N=642), Hispanic English speaking (N=275), and Hispanic Spanish speaking (N=220) groups. Results: The MGCFA model in which factor loadings, measurement intercepts, and item residuals were constrained to be equal between English speakers and Spanish speakers exhibited a reasonable fit to the data, χ²(221)=1089.612, p<.001, TLI=.926, CFI=.922, RMSEA=.059 (90% CI=.055-.062). The ΔCFI supported strict factorial invariance, ΔCFI=.01, across groups; no significant group differences were found between factor loadings, measurement intercepts, or item residuals between English speakers and Spanish speakers. Conclusions: This study extends the existing confirmatory factor analysis results of the SIP by providing additional data to inform the utility of the SIP among Hispanics. Strict factorial invariance between Spanish and English speakers is necessary to: conclude that the underlying constructs have the same meaning across groups; test for group differences in the latent variables across groups; and presume that group differences are attributable only to true differences between groups. Thus, the SIP is strongly supported for evaluating the effectiveness of alcohol treatment among Hispanics.
Answer: Based on the provided abstracts, there is no evidence that changing the current items of the Short Index of Problems (SIP) will significantly improve its performance. The study described in abstract PUBMED:16047535 investigated the validity and reliability of the three-month SIP in an emergency department sample and found that the SIP performed as well as the Drinker Inventory of Consequences (DrInC) in predicting 12-month DrInC scores and in determining intervention change at 12 months. The study concluded that there is no advantage to changing the current SIP items and that the three-month SIP is a psychometrically sound measure for assessing consequences of alcohol consumption in an emergency department sample, almost as sensitive to intervention change as the full DrInC. Additionally, other studies have confirmed the psychometric validity of the SIP when used as an independent instrument unembedded within the DrInC (PUBMED:19261243), and have found it to be useful for measuring drinking consequences in a sample of problem drinkers, with the subscale scores being interpretable clinically (PUBMED:14506404). The SIP has also been validated among substance users with co-occurring psychiatric illness (PUBMED:18058418), and its factor structure, score reliability, and criterion-related validity have been supported in studies with court-ordered participants from a statewide alcohol/driving safety program (PUBMED:28080093). Therefore, based on the evidence from these studies, it appears that the SIP is already a robust tool, and there is no clear indication that it can be improved by changing its current items.
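Several of the abstracts above summarize the SIP's psychometric soundness through internal consistency reliability, reported as Cronbach's alpha (e.g., .98 for the SIP total score in PUBMED:19261243). As a minimal sketch of how that statistic is computed — using small, hypothetical toy data rather than actual SIP item responses — alpha compares the sum of the individual item variances to the variance of the total score:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a k-item scale.

    item_scores: list of k lists, each holding one item's scores
    across the same respondents.
    """
    k = len(item_scores)
    n = len(item_scores[0])
    # Sample variance of each item across respondents
    item_vars = [variance(scores) for scores in item_scores]
    # Total scale score for each respondent (sum over items)
    totals = [sum(item_scores[i][j] for i in range(k)) for j in range(n)]
    # alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / variance(totals))

# Hypothetical toy data: 3 items, 4 respondents (not real SIP responses)
items = [
    [1, 2, 3, 4],
    [1, 2, 3, 3],
    [2, 2, 4, 4],
]
print(round(cronbach_alpha(items), 3))  # → 0.962
```

With real SIP data, `item_scores` would hold the 15 item columns across all respondents; values approaching 1 indicate the items track a single underlying construct, consistent with the unidimensional structure these studies report.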
Instruction: Long-term results of mitral repair in patients with severe left ventricular dysfunction and secondary mitral regurgitation: does the technique matter? Abstracts: abstract_id: PUBMED:27126133 Long-term results of mitral repair in patients with severe left ventricular dysfunction and secondary mitral regurgitation: does the technique matter? Objectives: An isolated undersized annuloplasty was used to treat mitral regurgitation (MR) secondary to dilated cardiomyopathy (DCM) if the baseline coaptation depth (CD) was <1 cm. In the presence of significant tethering of the mitral leaflets (CD ≥1 cm), the edge-to-edge (EE) technique was combined with annuloplasty to improve the durability of the repair. The long-term results of this approach are unknown and represent the objective of this study. Methods: To obtain long-term outcome data, we included in the study population the first 105 consecutive patients with severe left ventricular dysfunction (ejection fraction 29 ± 6.6%) and secondary MR submitted to mitral valve repair. Forty patients underwent isolated undersized annuloplasty and 65 patients received the EE technique combined with annuloplasty. Preoperative and postoperative data were prospectively entered into a dedicated database. Clinical and echocardiographic follow-ups were performed in our institutional outpatient clinic. Results: Follow-up was 90% complete. The median follow-up time was 7.2 years (interquartile range 4.3;10.4). The longest follow-up time was 16.5 years. A comparative analysis between the annuloplasty group and the EE group was performed. Baseline LV dimensions and function were slightly worse in the EE group, but only the severity of tethering was significantly more pronounced than in the annuloplasty group. Hospital mortality (3 vs 2.5%, P = 1.0) and 10-year overall survival (42 ± 6.7 vs 55 ± 8.5%, P = 0.2) were not significantly different in the EE and annuloplasty group, respectively. 
Cumulative incidence functions of cardiac death were similar as well (at 10 years, 34.3 ± 8.1 vs 37.9 ± 6.4%, respectively, P = 0.4). At 10 years, the cumulative incidence function of recurrence of MR ≥3+ was lower in the EE patients (10.3 ± 4.1 vs 30.8 ± 8.0%, P = 0.01). Isolated annuloplasty [hazard ratio (HR) 4.84, 95% confidence interval (CI) 1.46-16.1, P = 0.01] and residual MR >1+ at hospital discharge (HR 5.25, 95% CI 2.00-13.8, P < 0.001) were significantly related to the development of MR ≥3+. Failure of repair was associated with recurrence of New York Heart Association III or IV symptoms (P < 0.001). Conclusions: In patients with end-stage DCM and secondary MR, combining the EE technique with undersized annuloplasty significantly decreases the rate of recurrent MR in the long term. This higher repair durability did not translate into a better long-term prognosis in this series. abstract_id: PUBMED:34511756 Mitral valve repair in chronic severe mitral regurgitation: short-term results and analysis of mortality predictors. Introduction: Mitral valve repair is the accepted treatment for mitral regurgitation (MR) but lack of resources and socioeconomic concerns delay surgical referral and intervention in developing countries. We evaluated immediate and short-term results of mitral valve repair for non-ischemic MR at our centre and aimed to identify the predictors of in-hospital and follow-up mortality. Materials And Methods: The study was conducted at a tertiary-level hospital in South India. All patients >18 years with severe non-ischemic MR who underwent mitral valve repair over a period of 6 years were included. Perioperative data was collected from hospital records and follow-up data was obtained by prospective methods. Results: There were 244 patients (170 males). Most of the patients were in the age group 31-60 years (76.6%).
Aetiology of MR was degenerative (n = 159; 65.2%), rheumatic (n = 34; 13.9%), structural (n = 42; 17.2%), or miscellaneous (n = 9; 3.7%). All patients underwent ring annuloplasty with various valve repair techniques. One hundred patients (44.7%) underwent additional cardiac procedures. At discharge, MR was moderate in 4 patients; the rest had no or mild MR. The mean hospital stay of survivors was 7.1 days (SD 2.52, range 5-25 days). There were 9 in-hospital deaths (3.68%) and 10 deaths during follow-up (4.2%). The mean follow-up period was 1.39 years, complete for 87.6%. Pre-operative left ventricular ejection fraction (LVEF) <60% (p = 0.04) was found to be significantly associated with immediate mortality. Logistic regression analysis detected age (p = 0.019), female sex (p = 0.015), and left ventricular (LV) dysfunction at discharge (p = 0.025) to be significantly associated with follow-up mortality. Conclusion: Pre-operative LV dysfunction was identified as a significant risk factor for in-hospital mortality. Female sex, age greater than 45 years, and LV dysfunction at discharge were found to be significantly associated with follow-up mortality. Hence, it is important to perform mitral valve repair in severe regurgitation patients before significant LV dysfunction sets in for a better outcome. abstract_id: PUBMED:19632855 Long-term clinical outcome of mitral valve repair in asymptomatic severe mitral regurgitation. Objective: To assess the long-term survival, the incidence of cardiac complications and the factors that predict outcome in asymptomatic patients with severe degenerative mitral regurgitation (MR) undergoing mitral valve repair. Methods: A total of 143 asymptomatic patients (mean age 63+/-12 years) with severe degenerative MR who underwent mitral valve repair between 1990 and 2001 were subsequently followed up for a median of 8 years.
The study population was subdivided into three subgroups: patients with left ventricular (LV) dysfunction and/or dilatation (n=18), patients with atrial fibrillation and/or pulmonary hypertension (n=44) and patients without MR-related complications (n=81). Results: In the overall cohort, 10-year overall and cardiovascular survival rates were 82+/-4% and 90+/-3%, respectively. At 10 years, patients without preoperative MR-related complications had significantly better overall survival than patients with preoperative LV dysfunction and/or dilatation (89+/-4% vs 57+/-13%, log rank p=0.001). Patients without preoperative MR-related complications also tended to have a better 10-year overall and cardiovascular survival than patients with atrial fibrillation and/or pulmonary hypertension (overall survival of 79+/-8%), although this did not reach statistical significance (log rank p=0.17). Cox regression analysis identified the baseline left ventricular ejection fraction and age as the sole independent predictors of outcome. Conclusion: Our data indicate that in asymptomatic patients with severe degenerative MR, mitral valve repair is associated with an excellent long-term prognosis. Nonetheless, the presence of preoperative MR-related complications, in particular LV dysfunction and/or dilatation, greatly attenuates the benefits of surgery. This suggests that mitral valve repair should be performed early, before any MR-related complications ensue. abstract_id: PUBMED:30377076 Long-Term Results Following Repair for Degenerative Mitral Regurgitation - Analysis of Factors Influencing Durability. Background: The majority of patients with degenerative mitral regurgitation (DMR) are amenable to reconstructive procedures. There is debate regarding factors that influence long-term durability with respect to repair technique, valve remodelling and progressive myxomatous change.
Methods: A total of 685 patients with DMR underwent mitral valve repair by a single surgeon between 1991 and 2011 with follow-up completed at 31 December 2016. Repair rate for patients undergoing surgery for DMR was over 90%. Mean age was 64 years (18-89) with 66.2% male, 47% NYHA class III-IV, and 20% had permanent atrial fibrillation (PAF). Major associated procedures were performed in 28% of patients (189); including coronary artery bypass graft (CABG) (127), aortic valve replacement (15), aortic root surgery (3) and tricuspid valve annuloplasty (61). Results: Operative mortality (≤30 days) occurred in four patients (0.58%). At 20 years, survival was 58%, freedom from reoperation was 90% and freedom from reoperation and non-operated recurrent MR >2+ (relapse) was 78%. Factors influencing survival were advancing age, left ventricular (LV) dysfunction (ejection fraction <60% or end systolic dimension >40mm), New York Heart Association (NYHA) III-IV and PAF. Predictors of relapse were the degree of residual intraoperative mitral regurgitation (p<0.001), anterior leaflet prolapse (p<0.001) and the addition of a sliding annuloplasty in isolated posterior leaflet repair (p=0.023). The majority of reoperations were for technical issues related to the original repair. A competent valve at 6 months to 3 years postoperatively predicted an excellent long-term result. Conclusion: The great majority of degenerative mitral valves are repairable regardless of age with excellent long-term results achievable following surgery. Survival is reduced by significant symptoms, LV dysfunction and preoperative PAF. Repair is best performed before these features develop. Durability is largely dependent on the technical performance of the repair and degree of residual MR on the post-pump transoesophageal echocardiogram. We recommend surgery should be performed by surgeons specialising and skilled in mitral valve repair. 
abstract_id: PUBMED:34469799 Mitral Valve Repair Versus Replacement in the Elderly. The disadvantages of mitral valve replacement with a bioprosthesis in the long term may not play an important role if the shorter life expectancy of older patients is taken into account. This study aims to evaluate whether mitral valve replacement in the elderly is associated with similar outcomes compared to repair in the short and long term. All patients aged 70 years and older undergoing minimally invasive mitral valve surgery were studied retrospectively. The primary outcome was the 30-day complication rate; secondary outcomes were long-term survival and freedom from re-operation. A total of 223 patients underwent surgery (124 replacement and 99 repair) with a mean age of 76.4 ± 4.2 years. The 30-day complication rate (replacement 73.4% versus repair 67.7%; p=.433), 30-day mortality (replacement 4.0% versus repair 1.0%; p=.332) and 30-day stroke rate (replacement 0.0% versus repair 1.0%; p=.910) were similar in both groups. Multivariable Cox regression revealed higher age, diabetes and left ventricular dysfunction as predictors of reduced long-term survival, while valve replacement was not a predictor of reduced survival. Subanalysis of patients with degenerative disease showed no difference in long-term survival after propensity weighting (HR 1.4; 95% CI 0.84 - 2.50; p=.282). The current study reveals that mitral valve repair and replacement in the elderly can be achieved with good short- and long-term results. Long-term survival was dependent on patient-related risk factors and not on the type of operation (replacement versus repair).
Objectives: To evaluate the feasibility of mitral valve repair in patients with anterior leaflet (ALP) or bileaflet prolapse (BLP) and identify factors predisposing patients to replacement. To compare long-term survival of patients submitted to repair (Group Repair) against those submitted to replacement (Group Replacement), and investigate causes of early and late failures of repair. Methods: From January 1992 through December 2012, 768 patients with ALP or BLP were submitted to mitral valve surgery, of whom 501 had degenerative involvement [Myxomatous (Myx)-336 (67.1%) or fibroelastic deficiency (Fed)-165 (32.9%)] and constituted the study population. Isolated ALP was present in 274 patients (54.7%) and BLP in 227 (45.3%). Associated procedures were permitted. Results: Patients with Fed were significantly older (64.4 ± 12.1 vs 54.8 ± 15.5 years, P < 0.001), more symptomatic (63 vs 44.3%; P < 0.001) and with a higher incidence of atrial fibrillation (43.6 vs 26.2%; P < 0.001). Repair was achieved in 94.8% of patients with an overall 30-day mortality rate of 1.2% (0.3% in the last decade) and no differences regarding aetiology. Age, moderate to severe left ventricular (LV) dysfunction, previous cardiac surgery, multiple segment prolapse, mitral calcification, leaflet retraction and the performing surgeon were independently associated with replacement. Group Repair patients had a greater adjusted 20-year survival by comparison with Group Replacement (43.4 ± 5.5 vs 13.6 ± 11.3%; P < 0.001) and similar to that of the age- and sex-adjusted general population (P = 0.10). Valve replacement, New York Heart Association (NYHA) class III-IV, pulmonary hypertension and LV dysfunction emerged as independent predictors of late mortality. Patients in NYHA class I-II experienced a higher repair rate (98.4%) and better survival than those in class III-IV. Two repair patients were reoperated during the first year after surgery (early failure) and both were 're-repaired'.
Late failure was observed in 21 patients, mostly for progression of the disease. The 20-year rate of freedom from reoperation was 88 ± 2.7%, significantly worse in ALP patients (P = 0.040), and not different between Fed and Myx. Conclusions: Patients with ALP or BLP can be submitted to surgery with low mortality and great probability of repair in expert hands. Patients should be operated on at an early phase (asymptomatic or mildly symptomatic), because there is a higher probability of repair and greater benefit on long-term survival. abstract_id: PUBMED:24589200 Timing of valve repair for severe degenerative mitral regurgitation and long-term left ventricular function. Objectives: Current guidelines recommended surgery for patients with severe degenerative mitral regurgitation (MR) when specific left ventricular (LV) dimensions or ejection fraction (EF) are reached, based on previous postoperative survival studies. The aim of this study was to evaluate the incidence and predictors of long-term postoperative LV dysfunction, and investigate the preoperative parameters necessary to maintain or recover long-term LV function in the era of mitral valve (MV) repair. Methods: We retrospectively reviewed 473 consecutive patients undergoing MV repair for severe degenerative MR for whom both preoperative and 3-year postoperative echocardiographic data were available in our institution. Preoperative and 3-year postoperative echocardiographic data and clinical outcomes were evaluated. Results: Receiver operating characteristic analysis identified preoperative LVEF 63% or less (area under curve [AUC], 0.725; P<.001) and LV end-systolic dimension (ESD) 39 mm or greater (AUC, 0.724; P<.001) as cut-off values for predicting LVEF less than 50% 3 years after surgery. 
On multivariate analysis, neither preoperative LVEF nor LVESD was a significant predictor of 3-year postoperative LV dysfunction among patients with preoperative LVEF greater than 63% and LVESD less than 39 mm, whereas preoperative LVESD (odds ratio [OR], 2.22; P=.004), higher age (OR, 1.03; P=.04), and atrial fibrillation (OR, 2.68; P=.01) were independent predictors among patients with preoperative LVEF 63% or less or LVESD 39 mm or greater. Conclusions: Early MV repair with LVEF greater than 63% and LVESD less than 39 mm preserved long-term postoperative LV function well, and smaller preoperative LVESD was associated with long-term LV function recovery, even in patients with preoperative LV dysfunction. abstract_id: PUBMED:10221393 Mitral valve prolapse. Comparison between valvular repair and replacement in severe mitral regurgitation. Background: The aim of this study was to analyse long-term results of mitral valve repair of degenerative mitral regurgitation compared to valve replacement. Methods: One hundred and twenty-five consecutive patients with severe mitral valve insufficiency who underwent cardiac surgery from January 1987 to December 1995 were included in the study. Mean age was 55+/-16 years (77 males, 48 females). Mitral repair was performed in 62 patients and the mitral valve was replaced in 63 patients. Mean follow-up was 5 years. The repair procedures were based on quadrangular resection of the posterior leaflet, chordal replacement and transposition. Annuloplasty was performed in 100% of cases. The technique of valve replacement was conventional, with complete excision of the valve in the majority of cases. Results: Operative mortality following valve repair was 1.6%; no deaths occurred in the prosthesis group. In the repair group, overall survival and re-operation rate were 95.2% and 6.5%, respectively, while in the replacement group they were 93.7% and 7.9%. 
No endocarditis or thromboembolic events were observed following valvuloplasty, whereas in the prosthesis group 6.3% of patients had endocarditis and 1.6% had a thromboembolic event. Mild or moderate left ventricular dysfunction was present in 5 patients after valvuloplasty and in 9 patients with prostheses. Conclusions: Considering these results, we conclude that, in patients with severe degenerative mitral insufficiency, mitral valve repair is warranted whenever it is possible. The advantages of maintaining the native valve suggest that surgery should be considered in asymptomatic patients before the occurrence of left ventricular dysfunction. abstract_id: PUBMED:32082821 Transaortic repair of concomitant mitral insufficiency in patients with critical aortic stenosis undergoing aortic valvular replacement. Background: In this study, we present the operative technique and outcomes of transaortic mitral valve repair in high-risk patients undergoing aortic valve replacement due to severe aortic stenosis. Methods: Between January 2005 and March 2016, a total of 11 patients (7 females, 4 males; mean age 71.2±4.1 years; range, 65 to 77 years) with severe aortic valve stenosis (aortic valve area <1 cm2 or aortic valve area index <0.6 cm2/m2) and concomitant moderate or severe mitral regurgitation (non-ischemic, regurgitant jet origin between the A2-P2 portions) secondary to left ventricular dysfunction (EuroSCORE logistic score >5%, left ventricular ejection fraction <30%) who were operated on were retrospectively analyzed. Aortic valve replacement and transaortic mitral edge-to-edge repair were performed in all patients. Operations were performed through sternotomy, cardiopulmonary bypass, and bicaval venous return. Transesophageal echocardiography was used to evaluate the mitral valve before surgery and valve function after surgery. The postoperative course of all patients was monitored, and postoperative complications were recorded. 
Results: The mean preoperative ejection fraction was 24.5±4.1% and the mean transaortic pressure gradient was 35.8±4.8 mmHg. The mean aortic cross-clamp time was 62.09±10.1 (range, 43 to 76) min and the median cardiopulmonary bypass time was 90.1±11.9 (range, 66 to 114) min. No hospital mortality was observed. In the postoperative period, two patients experienced renal insufficiency. Hemofiltration was initiated in these patients, and no dialysis was required at two weeks. One patient had postoperative atrial fibrillation, and one patient had a pericardial effusion leading to cardiac tamponade that required reoperation. The patients were followed up for a mean of four years, and control echocardiography did not detect an increase in the degree of mitral regurgitation. Conclusion: Transaortic edge-to-edge mitral valve repair can be used in high-risk patients undergoing aortic valve replacement. This technique is feasible, with a shorter cross-clamp time, and can reduce mortality and morbidity in selected high-risk patients. abstract_id: PUBMED:37494779 Left ventricular reverse remodeling after annular-mitral valve repair in end-stage heart failure of dilated cardiomyopathy: A case report. Introduction: There are numerous patients with functional mitral regurgitation resulting from dilated cardiomyopathy. The decision between surgical correction and medical management of severe mitral regurgitation in heart failure is often difficult. Case Presentation: A 38-year-old woman presented with recurrent symptoms of congestive heart failure. Transthoracic echocardiography revealed a dilated cardiomyopathy with global severe left ventricular dysfunction and severe functional mitral valve regurgitation. She underwent a restrictive mitral valve annuloplasty. The follow-up showed an improvement in left ventricular dimensions and function. Discussion: The novel annular repair techniques have an obvious potential to improve the long-term competence of mitral valve repair. 
Therefore, they could be considered as a viable therapeutic option in patients presenting with end-stage cardiomyopathy with severe mitral regurgitation. Conclusion: This study demonstrates the reverse left ventricular (LV) remodeling after effective mitral valve repair in advanced dilated cardiomyopathy and its impact on clinical outcome and repair durability. Answer: The long-term results of mitral repair in patients with severe left ventricular dysfunction and secondary mitral regurgitation suggest that the technique used for repair does indeed matter. In a study where patients with end-stage dilated cardiomyopathy and secondary mitral regurgitation were treated, the addition of the edge-to-edge (EE) technique to undersized annuloplasty significantly decreased the rate of recurrent mitral regurgitation (MR) at long-term follow-up. However, this improved repair durability did not translate into a better long-term prognosis in the series (PUBMED:27126133). In another study, preoperative left ventricular ejection fraction (LVEF) less than 60% was found to be significantly associated with immediate mortality following mitral valve repair for non-ischemic MR. Logistic regression analysis identified age, female sex, and left ventricular dysfunction at discharge as significant predictors of follow-up mortality. This indicates the importance of performing mitral valve repair before significant LV dysfunction sets in for a better outcome (PUBMED:34511756). Furthermore, in asymptomatic patients with severe degenerative MR undergoing mitral valve repair, the presence of preoperative MR-related complications, particularly LV dysfunction and/or dilatation, greatly attenuates the benefits of surgery. This suggests that mitral valve repair should be performed early, before any MR-related complications ensue (PUBMED:19632855). 
The durability of mitral valve repair for degenerative mitral regurgitation (DMR) is influenced by factors such as the degree of residual intraoperative mitral regurgitation, anterior leaflet prolapse, and the addition of a sliding annuloplasty in isolated posterior leaflet repair. A competent valve at 6 months to 3 years postoperatively predicted an excellent long-term result (PUBMED:30377076). In the elderly, mitral valve repair and replacement can be achieved with good short- and long-term results, and long-term survival was dependent on patient-related risk factors rather than the type of operation (replacement versus repair) (PUBMED:34469799). Overall, these studies highlight the importance of the surgical technique and timing of the repair in influencing the long-term outcomes of patients with severe LV dysfunction and secondary MR.
Instruction: General practitioner based screening for cervical cancer: higher participation of women with a higher risk? Abstracts: abstract_id: PUBMED:9200061 General practitioner based screening for cervical cancer: higher participation of women with a higher risk? Objective: To test the hypothesis that a personal invitation for cervical screening by a woman's own general practitioner (GP) achieves a higher attendance of women with an increased risk for cervical cancer. Setting: Two general practices and the local health authority screening programme for cervical cancer, Nijmegen, The Netherlands. Methods: Attendance rates of women with an increased risk of cervical cancer were compared for two invitation strategies: (a) invitation by the woman's own GP, and (b) invitation by a national call system through the local health authority. Data on risk profiles were gathered by questionnaire. Two hundred and thirty eight women eligible for screening were invited by their GPs (GP group), and 235 women by the local health authority (control group) in 1992. Results: The personal invitation by the GP resulted in an 18% higher overall attendance, and a 28% higher attendance of women with greater risk because of sexual behaviour and smoking. Conclusion: Greater involvement of the GP in inviting women for cervical cancer screening results in a higher attendance, particularly among women with increased risk, than a less personal health authority call system. abstract_id: PUBMED:21437898 Cervical screening participation and risk among Swedish-born and immigrant women in Sweden. Cervical cancer is one of the most common cancers among women worldwide, although cervical screening has reduced the incidence in many high-income countries. Low screening uptake among immigrant women may reflect differences in risk of cervical cancer. 
We investigated the degree of participation in cervical screening among immigrant and Swedish-born women and their concurrent risk of cervical cancer, based on individual information on Pap smears taken in both organized and opportunistic screening. Mean degree of participation in cervical screening was estimated for women between 23 and 60 years from 1993 to 2005, stratified by birth region and age at migration. In Poisson regression models, we estimated relative risks (RRs), incidence rates and incidence rate ratios of cervical cancer for women adhering or not to the cervical screening program. We also assessed the effect of adherence to screening on the risk of cervical cancer among immigrant groups compared to Swedish-born women. The degree of participation was 62% and 49% among Swedish-born and immigrant women, respectively, with large variations between immigrant groups. Participation was lowest among those immigrating at older ages. Swedish-born and immigrant women who were nonadherent to the cervical screening program had a fivefold excess risk of cervical cancer compared to adherent women. After adjustment for screening adherence, excess RRs of cervical cancer were statistically significant only for women from Norway and the Baltic States. Participation in screening is lower among immigrant than Swedish-born women, and adherence to the recommended screening intervals strongly prevents cervical cancer. abstract_id: PUBMED:16261714 Participation in the Dutch national screening programme for uterine cervix cancer higher after invitation by a general practitioner, especially in groups with a traditionally low level of attendance. Objective: To gain insight into the differences in participation in the screening programme for uterine cervix cancer between women invited by a general practitioner (GP) and women invited by the local health authority (GGD). Specific attention was given to those groups whose participation is generally below average. 
Design: Descriptive epidemiological study. Method: In the period 2000-2003, 237,719 women (30-60 years of age) were invited to participate in the national uterine cervix cancer screening programme in the Southwest of the Netherlands; 37.1% of the women were invited by the GP and 62.9% by the GGD. Data were obtained from the Cervix Information System of the GGDs. Differences in attendance between those invited by the GP and those invited by the GGD were tested by linear regression. Participation was defined as the number of women for whom the result of a cervical smear was known, divided by the number invited. Results: Invitation by a GP led to a 7.9% (95% CI: 7.5-8.3) higher attendance rate than invitation by a GGD. This difference in attendance was higher for women born in Morocco, Turkey, Surinam and the Netherlands Antilles/Aruba (17.2%; 95% CI: 15.2-19.2), young women (11.9%; 95% CI: 10.8-13.0), women with a low socio-economic status (11.6%; 95% CI: 10.4-12.7), and women who lived in highly urban areas (13.0%; 95% CI: 12.3-13.6). The differences were the greatest among non-western women who were also part of another low-attendance group: 19.0% (95% CI: 16.7-21.2) for women who lived in highly urban areas and 20.8% (95% CI: 16.8-24.9) for those in the youngest age group. abstract_id: PUBMED:18661453 Participation in cervical cancer screening by age and region--a cohort study with a 3 1/2 year follow-up on 2,223,135 women in Bavaria. Objective Of The Study: Cervical cancer rates are higher in Germany than in comparable European countries. While other European countries have implemented invitation programs, Germany relied on annual free access only. Are sufficient screening participation rates achieved? Methods: We analyzed data from 2,223,135 Bavarian women between 2002/3 and 2005/4 in a retrospective cohort, to evaluate screening participation, frequency, age dependencies and regional differences. 
Results: The highest screening participation was amongst women between 20 and 29 (54.6% had at least one visit after one year, 84% within three years) and decreased progressively with increasing age. Participation was lowest for women above 70 in rural areas (in some regions less than 20% had at least one screening within three years). Conclusion: Poor participation rates in general are unlikely to account for the high cervical cancer rates in Germany. Low participation rates in elderly women, particularly those in rural regions, might contribute to the well-known peak of cervical cancer in elderly women. abstract_id: PUBMED:27894911 Personal and provider level factors influence participation to cervical cancer screening: A retrospective register-based study of 1.3 million women in Norway. High coverage is essential for an effective screening programme. Here we present screening barriers and facilitators among 1.3 million women aged 25-69 years eligible for screening within the Norwegian Cervical Cancer Screening Program (NCCSP). We defined non-adherence as no screening test in 2008-2012. We divided adherent women into those screened spontaneously, and those who had a smear after receiving a reminder from the NCCSP. Explanatory variables were extracted from several nationwide registers, and modelled by modified Poisson regression. In total, 34% of women were non-adherent. 31% of native Norwegians were non-adherent, compared to 50% of immigrants. Immigrant status was a strong predictor of non-adherence, but the vast majority of non-adherent women were still native Norwegians. Higher non-adherence rates were associated with having a male general practitioner (GP), a foreign GP, a young GP, and distance to the screening site. Being unmarried, having no children, having lower socioeconomic position and region of residence predicted non-adherence and, to a smaller extent, reminded adherence to screening. 
In contrast, previous experience with cervical abnormalities substantially increased adherence to screening. The population-based screening programme promotes equity by recruiting women who are less likely to participate spontaneously. However, socioeconomic disparities were evident in a country with a nationwide programme and a policy of equal access to health care. Initiatives aimed at removing practical and financial barriers to equitable screening delivery and at reducing the effect of sociodemographic attributes on screening participation are needed. abstract_id: PUBMED:11680199 The general practitioner and screening of women of reproductive age. All women of reproductive age could benefit from well-established screening tests, which are conducted to identify, in this apparently healthy population, women who have or carry a disease. Such a screening programme could be carried out by a general practitioner and should be encouraged. At present, there is no national screening programme for this population in Belgium. Recommendations can be proposed for all screening tests that can be performed in the preconceptional period. These concern screening for diseases that affect all women, such as cervical cancer; diseases that might influence pregnancy or infant outcome, such as an infection or a hereditary disease; and diseases that could be adversely affected by pregnancy, such as diabetes. abstract_id: PUBMED:35834620 A Home-Mailed Versus General Practitioner-Delivered Vaginal Self-Sampling Kit for Cervical Cancer Screening: A Cluster Randomized Controlled Trial with a Cost-Effectiveness Analysis. Objective: We assessed whether general practitioner (GP) delivery of a vaginal self-sampling kit was non-inferior to home-mailed delivery for cervical cancer screening (CCS) participation. Methods: Two hundred and ten French GPs from the Indre-et-Loire department were randomized into two groups, and their unscreened female patients aged 30-65 were included in February-March 2015. 
In the GP delivery group (n = 105 GPs; 1,806 women), women were sent a reminder letter inviting them to collect a vaginal self-sampling kit at their regular GP's practice. In the home-mailed delivery group (n = 105 GPs; 1,806 women), women were sent a reminder letter with a vaginal self-sampling kit directly at home. The primary outcome was participation in complete CCS within 9 months. A cost-effectiveness analysis was also performed. Results: At 9 months, 14.9% (95% CI: 12.9-16.9) and 27.9% (95% CI: 25.7-30.0) of women in the GP and home-mailed delivery groups participated in complete CCS. The absolute between-group difference was -13.0 percentage points (95% CI: -15.9 to -10.0) in favor of the home-mailed delivery group, crossing the pre-defined non-inferiority margin of 5%. The home-mailed delivery strategy cost 50.81€ more per additional woman screened. Conclusions: GP delivery was inferior to home-mailed delivery in increasing participation in CCS. Home-mailed delivery of a vaginal self-sampling kit is a cost-effective way to increase CCS in that the additional cost of this strategy seems acceptable. This study is registered at www.Clinicaltrials.gov NCT02255084. abstract_id: PUBMED:22097116 Role of the chosen general practitioner in educating women on the importance of regular gynecological examinations. Introduction: Since regular screening is the best way of preventing the development of cervical cancer, the objective was to assess the motivation of women to have regular gynecological examinations and to estimate the role of the chosen general practitioner. Material And Methods: The survey was based on a prospective study done at the Health Centre "Novi Sad" in 2009 during systematic regular examinations carried out by general practitioners. 
Results: It was found that 60.8% of the examined women had regular checkups; 21.5% visited their doctor once in a period of two to five years and 4.9% had undergone an examination in a period of > 10 years, whereas 1.9% had never had an examination. Other examinees had occasional check-ups at varying intervals. Discussion: The reasons for not visiting a gynecologist were fear of the examination, absence of discomfort and lack of time. However, 87.2% of the examinees visited a gynecologist after they had been advised to do so by their general practitioner. The gynecological finding was good in 87.6% of the women, 3.4% were found to have carcinoma and 8.9% had some other abnormal finding. Conclusion: According to the obtained results, it was concluded that the chosen general practitioner has a very important role in motivating women to have regular gynecological examinations and in educating them on the risk factors for developing malignant diseases and on possible prevention. abstract_id: PUBMED:33126544 Cervical Cancer Screening Participation among Women of Russian, Somali, and Kurdish Origin Compared with the General Finnish Population: A Register-Based Study. Migrant-origin women are less likely to take up cervical screening than host populations. This study examined cervical cancer screening participation and factors associated with it in the Finnish mass screening program during 2008-2012 in women of Russian, Somali and Kurdish origin compared with the general Finnish population (Finns) in Finland. The study population consists of samples from the Finnish Migrant Health and Well-being Study 2010-2012 and the Health 2011 Survey, aged 30-64 (n = 2579). Data from the Finnish screening register linked with other population-based registry data were utilized. For statistical analysis we employed logistic regression. 
Age-adjusted screening participation rates were 63% (95% CI: 59.9-66.6) for Russians, 19% (16.4-21.6) for Somalis, 69% (66.6-71.1) for Kurds, and 67% (63.3-69.8) for Finns. In the multiple-adjusted model with Finns as the reference, odds ratios for screening were 0.92 (0.74-1.16) among Russians, 0.16 (0.11-0.22) among Somalis, and 1.37 (1.02-1.83) among Kurds. Among all women, the main factor associated with an increased likelihood of screening was hospital care related to pregnancy/birth (1.73; 1.27-2.35), gynecological reasons (2.47; 1.65-3.68), or other reasons (1.53; 1.12-2.08). Screening participation was lower among students and retirees. In conclusion, screening among migrant-origin women varies, being significantly lowest among Somalis compared with Finns. Efforts using culturally tailored/population-specific approaches may be beneficial in increasing screening participation among women of migrant origin. abstract_id: PUBMED:21512747 Research participation among Asian American women at risk for cervical cancer: exploratory pilot of barriers and enhancers. The purposes of this exploratory pilot were to describe perceived barriers to participation in cervical cancer prevention research, and identify culturally appropriate communication strategies to recruit Asian women into cancer prevention research. This thematic analysis of transcripts, from focus groups and in-depth interviews, was conducted in English, Vietnamese, and Mandarin Chinese, at a community clinic in Philadelphia, Pennsylvania. Thirty participants were either Vietnamese (35%) or Chinese (65%). Mean age was 36.8 (SD 9.9 years). 
To increase participation, first one must remove language barriers and, preferably, use specific dialects. Second, one must specify if benefits are indirectly or directly related to the family or cultural group. Asian research participants in our study consistently expressed that a significant motivator was their desire to be of help, in some way, to a family member or to the Asian community in general. Answer: The hypothesis that a personal invitation for cervical screening by a woman's own general practitioner (GP) achieves a higher attendance of women with an increased risk for cervical cancer is supported by several studies. A study conducted in the Netherlands found that a personal invitation by the GP resulted in an 18% higher overall attendance, and a 28% higher attendance of women with greater risk due to sexual behavior and smoking, compared to an invitation by a national call system through the local health authority (PUBMED:9200061). Similarly, another Dutch study showed that invitation by a GP led to a 7.9% higher attendance rate than invitation by the local health authority, with even greater differences in attendance among women born in Morocco, Turkey, Surinam, and the Netherlands Antilles/Aruba, young women, women with a low socio-economic status, and women who lived in highly urban areas (PUBMED:16261714). In contrast, a study in Bavaria, Germany, indicated that poor participation rates in general were unlikely to account for the high cervical cancer rates in the country, but low participation rates in elderly women, particularly those in rural regions, might contribute to the well-known peak of cervical cancer in elderly women (PUBMED:18661453). In Norway, a retrospective register-based study found that 34% of women were non-adherent to cervical cancer screening, with higher non-adherence rates associated with having a male GP, a foreign GP, a young GP, and distance to the screening site (PUBMED:27894911). 
A cluster randomized controlled trial in France assessed whether GP delivery of a vaginal self-sampling kit was non-inferior to home-mailed delivery on cervical cancer screening participation. The study found that GP delivery was inferior to home-mailed delivery in increasing participation in cervical cancer screening (PUBMED:35834620). Overall, these studies suggest that greater involvement of the GP in inviting women for cervical cancer screening can result in higher attendance, particularly among women with increased risk factors. However, the effectiveness of GP-based screening invitations may vary depending on the population and the specific characteristics of the women being invited.
Instruction: Inguinal hernia in female infants: a cue to check the sex chromosomes? Abstracts: abstract_id: PUBMED:16042738 Inguinal hernia in female infants: a cue to check the sex chromosomes? Objectives: To review the clinical presentation of complete androgen insensitivity syndrome (CAIS) and assess the current practice of considering the diagnosis of CAIS in female infants presenting with inguinal hernia. Patients And Methods: AIS arises from target tissue resistance to the actions of androgens due to molecular abnormalities in the androgen receptor. Patients with CAIS are born with normal external female genitalia, and although inguinal hernias are uncommon in female infants, they are a well-known presentation of CAIS. Such patients were identified from the Cambridge Intersex Database and details of presentation, presence and laterality of inguinal hernia and contents, and family history of CAIS, were recorded. A questionnaire detailing different indications for considering CAIS in female infants with a hernia was distributed to members of the British Association of Paediatric Surgeons and the British Society for Paediatric Endocrinology and Diabetes. Results: More than half of patients with CAIS presented with inguinal hernia, of which half were bilateral and a third contained gonads. Completed questionnaires were returned by 87 surgeons and 64 endocrinologists, and most of the surgeons and endocrinologists would consider CAIS in all female infants with a hernia. Bilateral hernias, hernias containing gonads and a family history of CAIS would prompt clinicians to consider the diagnosis. Conclusion: Most clinicians agreed that CAIS should be considered in all female infants with inguinal hernia, as this is the commonest mode of presentation in childhood. Macroscopic inspection of the internal genital structures coupled, perhaps, with gonadal biopsy is recommended. 
Fluorescence in situ hybridization offers a rapid and reliable method to check the sex chromosomes. Liaison between the paediatric surgeon and endocrinologist is essential in management of infants with CAIS. abstract_id: PUBMED:33968844 Disorders/Differences of Sex Development Presenting in the Newborn With 46,XY Karyotype. Differences/disorders of sex development (DSD) are a heterogeneous group of congenital conditions, resulting in discordance between an individual's sex chromosomes, gonads, and/or anatomic sex. The management of a newborn with suspected 46,XY DSD remains challenging. Newborns with 46,XY DSD may present with several phenotypes ranging from babies with atypical genitalia or girls with inguinal herniae to boys with micropenis and cryptorchidism. A mismatch between prenatal karyotype and female phenotype is an increasing reason for presentation. Gender assignment should be avoided prior to expert evaluation and possibly until molecular diagnosis. The classic diagnostic approach is time and cost-consuming. Today, a different approach may be considered. The first line of investigations must exclude rare life-threatening diseases related to salt wasting crises. Then, the new genetic tests should be performed, yielding increased diagnostic performance. Focused imaging or endocrine studies should be performed on the basis of genetic results in order to reduce repeated and invasive investigations for a small baby. The challenge for health professionals will lie in integrating specific genetic information with better defined clinical and endocrine phenotypes and in terms of long-term evolution. Such advances will permit optimization of counseling of parents and sex assignment. In this regard, society has significantly changed its attitude to the acceptance and expansion beyond strict binary male and female sexes, at least in some countries or cultures. 
These management advances should result in better personalized care and better long-term quality of life of babies born with 46,XY DSD. abstract_id: PUBMED:19546054 Ambiguous genitalia in a newborn with 45,X/46,X,idic(Y) ovotesticular disorder of sex development. Objective: To discuss a case of ovotesticular disorder of sex development (DSD) with ambiguous genitalia, isodicentric Y sex chromosome mosaicism, and unique histopathologic findings. Methods: We report the clinical, laboratory, imaging, and operative findings, and we highlight the pertinent features of this case. Results of hormonal and genetic testing are reviewed, and histopathologic findings are illustrated. Results: A term newborn had ambiguous genitalia and was found to have an uncommon 45,X/46,X,idic(Y) karyotype. This infant had a 2-cm phallic structure, a penoscrotal hypospadias, a gonad easily palpable in the right scrotum, and a second gonad palpable high in the left inguinal canal. On ultrasonography, both gonads appeared as normal testicles, and no müllerian structures were identified. Testosterone and dihydrotestosterone levels were normal for a male neonate. After assessment of the patient at 2 months of age because of an incarcerated left inguinal hernia, both gonads were removed and were found to have both testicular and ovarian tissues spread throughout, with a fallopian tube on the left and an incipient juvenile granulosa cell tumor on the right. He was then diagnosed with ovotesticular DSD and continued to be raised as a boy. Conclusion: This male infant had undervirilization and an ovotesticular DSD. He had evidence of both ovarian and testicular tissues, in conjunction with apparent alterations in local müllerian inhibiting substance levels that allowed one fallopian tube to be preserved. The proportion of actively transcribed Y material in the chromosomes of patients with this karyotype may partially explain the variable phenotypes that can occur. 
abstract_id: PUBMED:1785642 Monozygotic twins of discordant sex both with 45,X/46,X,idic(Y) mosaicism. A female twin with short stature, unusual facial appearance, widely spaced nipples, and coarctation of the aorta was found to have a peripheral blood lymphocyte karyotype of 45,X(43%)/46,X,idic(Y)(p11). Her twin brother, also short with similar facial appearance, had the same mosaicism (40% 45,X). Cultured skin fibroblast studies showed discrepant karyotypes of 45,X (100%) in the girl and 45,X (78%)/46,X,idic(Y)(p11) in the boy. The mother and the father had normal chromosomes. Comparison of 27 biochemical markers yielded a likelihood of monozygosity of 0.9977. This report documents the occurrence of discordant phenotypic sex in monozygotic twins, involving gonadal dysgenesis with an abnormal dicentric Y, which presumably occurred de novo, followed by anaphase lag probably before the occurrence of twinning. Unequal distribution of the two resultant cell lines in various tissues of each twin could account for the development of the very different phenotypes, apparently normal boy and Ullrich-Turner girl. abstract_id: PUBMED:34338568 Molecular study and genotype-phenotype in Chinese female patients with 46, XY disorders of sex development. Objective: The rare condition 46, XY disorders of sex development (DSDs) is characterized by the female phenotype and male karyotype. We aimed to describe the genetic basis of 46, XY DSDs in nine patients and the genotype-phenotype relationships of the genes involved. Methods: Targeted next-generation sequencing (NGS) was used to analyze the underlying hereditary etiology in nine female patients with 46, XY DSDs. In silico analyses were used to predict the effects of novel variants on the protein function of the identified genes. Results: Primary amenorrhea with the absence of puberty, inguinal hernia, and clitoridauxe were common complaints. 
All enrolled patients had a differential etiology by genetic testing, and five novel genetic variants involved in four genes (SRY, AR, NR5A1, and LHCGR) were identified. A novel nonsense variant of SRY c.51C > G was found in XY patients without testicles. Two novel heterozygous variants, i.e. c.265A > T (Ile89Leu) and c.422T > C (Val141Ala), of the LHCGR gene were found in male pseudo-hermaphroditism. Conclusions: We expanded the genetic mutation spectrum and described in detail the genotype-phenotype relationships of 46, XY DSDs. DNA sequencing for SRY should be a priority in female patients with 46, XY DSDs. NGS is useful for clarifying genetic pathogenesis and could provide a basis for clinical diagnosis and treatments of patients with 46, XY DSDs. abstract_id: PUBMED:34988914 The role of ultrasound to evaluate the disorders of sex development: a pictorial essay. Ultrasonography is usually the first modality used to evaluate patients with disorders of sex development (DSD). To determine the sex in patients with DSD, the following four categories are carefully evaluated: chromosomal, gonadal, anatomical internal genitalia, and external genitalia. However, in the clinical setting, the only information that sonographers have prior to ultrasound examination is the appearance of the external genitalia. The following DSD presentations are commonly observed: (1) male external genitalia present at birth, without testis in the scrotum or with a small penis; (2) female external genitalia present at birth, with an inguinal hernia or clitoromegaly; (3) neonates with ambiguous genitalia at birth; and (4) female or male external genitalia without sexual maturity. In this retrospective study of several clinical cases, we demonstrated an ultrasound-based sex determination approach for these clinical presentations. 
We found that sonographers evaluated the external genitalia in relation to the distal urethra within the corpus spongiosum and corpus cavernosum and the presence or absence of follicles within the detected gonads to determine the sex of the patient. abstract_id: PUBMED:29263053 A Novel Sex Chromosome Mosaicism 45,X/45,Y/46,XY/46,YY/47,XYY Causing Ambiguous Genitalia. Sex chromosomal mosaicism has been considered as a major cause of human sexual differentiation disorders, like partial virilization and ambiguous genitalia. 45,X/46,XX, 45,X/46,XY and 46,XY/47,XXY are three most common sex chromosome mosaics associated with human ambiguous genitalia. Here, we report the case of a 3-year-old boy with ambiguous genitalia, bilateral cryptorchidism, and with an inguinal hernia. G banded cytological karyotyping and FISH analyses revealed that the patient has extremely rare and novel sex chromosome mosaic 45,X/45,Y/46,XY/46,YY/47,XYY karyotype. These cells exist in different percentages, important for phenotypic appearance of the patient. This is a first report of an unusual mosaic karyotype causing ambiguous genitalia. abstract_id: PUBMED:7665210 Quantified testicular histology in boys with sex chromosome abnormalities. Testicular histology in adult men with Klinefelter's syndrome (KS) is characterized by degenerative changes of the seminiferous epithelium. In contrast, limited changes have been reported in XXY fetuses. However, knowledge about the natural history of the degeneration of testicular histology in KS is scarce, and similarly testicular histology in prepubertal individuals with XY/XXY and XYY karyotypes is described incompletely. We have performed a qualitative and quantitative study of testicular histology in 11 XXY, one XY/XXY mosaic and two XYY boys between the neonatal period and 13 years of age using stereological methods and control data from normal XY boys and XY boys undergoing surgery for testicular maldescent. 
Testicular specimens were obtained at autopsy (n=2) or at time of surgery for malposition of the testis (n=10) or for inguinal hernia (n=2). The seminiferous tubules showed no sign of degeneration in any of the specimens. The volume density of seminiferous tubules was normal at all ages, whereas mean tubular diameter was reduced in all but three XXY boys. Germ cell numbers were only normal in XXY and XYY infants as well as in the 12 year old mosaic. No germ cells were observed in any XXY boy age 2 years or more. Leydig cells were observed in the infants and in a 13 year old XXY boy. The changes were comparable to what has been found in the most severely affected XY boys with testicular maldescent. We conclude that testicular histology in individuals with sex chromosome aneuploidy seems to be near normal during infancy, after which time degenerative changes occur. abstract_id: PUBMED:37626235 Sex-based differences in inguinal hernia factors. Background: Much of our knowledge about inguinal hernias is based on males. Meanwhile, it is established that women have worse outcomes after inguinal hernia repair, with more chronic pain and higher recurrences. Pediatric literature shows inguinal hernias in females are more likely to be bilateral, incarcerated, and carry a stronger genetic predisposition than males. We aimed to evaluate sex-based differences in inguinal hernia factors in adults, to help supplement the paucity of literature in the adult population. Methods: An institutional database of patients undergoing repair of primary inguinal hernias was queried with focus on preoperative risk factors and operative characteristics. Multivariate analysis was performed looking for independent variables associated with a greater number of hernia defects found intraoperatively. Results: Among 494 patients, 202 (40.9%) were female. Number of risk factors among females was significantly higher than males (1.53 vs 1.2, p = 0.003). 
Females had significantly more constipation, GERD, and asthma and lower BMI than males. Family history of hernias was similar between both sexes. As expected, females had significantly fewer direct hernias (12.9% vs 32.9%, p < 0.001) and more femoral hernias (38.5% vs 12.2%, p < 0.001) than males. Bilaterality was similar. Females undergoing inguinal hernia repair averaged 1.23 prior deliveries. Regression analysis showed age, sex, BMI, and number of deliveries were not correlated with the number of defects. Conclusions: Females undergoing primary inguinal hernia repair had more preoperative risk factors for inguinal hernia than males. In our population, there was no higher incidence of bilaterality or significant genetic predisposition in females as noted by family history of hernias. Age, sex, BMI and number of deliveries did not correlate with the number of hernia defects found. Our study promotes awareness of inguinal hernias in females and presents new data to quantify sex-based differences and predispositions to inguinal hernias. abstract_id: PUBMED:32608588 Groin hernia in female patients: sex-specific management Groin hernia in women is an entity whose clinical manifestations can be quite subtle and which therefore risks going unnoticed. Imaging by ultrasound or dynamic CT/MRI is strongly recommended in case of clinical doubt. Optimal treatment consists of laparoscopic surgery, even in oligosymptomatic patients, because the risk of incarceration, and the subsequent morbidity and mortality, is high, especially in cases of femoral hernia, which are frequent and often overlooked in women. During pregnancy, the risk of developing a groin hernia is very low (around 0.1%) and a wait-and-see attitude is to be preferred as much as possible, except in the case of an acute complication (incarceration). Answer: Yes, inguinal hernia in female infants can be a cue to check the sex chromosomes.
Inguinal hernias are uncommon in female infants, but they are a well-known presentation of complete androgen insensitivity syndrome (CAIS), which arises from target tissue resistance to the actions of androgens due to molecular abnormalities in the androgen receptor. Patients with CAIS are born with normal external female genitalia, and more than half of patients with CAIS present with inguinal hernia, of which half are bilateral and a third contain gonads. Most clinicians agree that CAIS should be considered in all female infants with inguinal hernia, and macroscopic inspection of the internal genital structures, perhaps coupled with gonadal biopsy, is recommended. Fluorescence in situ hybridization offers a rapid and reliable method to check the sex chromosomes, and liaison between the paediatric surgeon and endocrinologist is essential in the management of infants with CAIS (PUBMED:16042738). Additionally, disorders/differences of sex development (DSD) presenting in newborns with a 46,XY karyotype can manifest as female infants with inguinal herniae. Gender assignment should be avoided prior to expert evaluation and possibly until molecular diagnosis. New genetic tests should be performed, yielding increased diagnostic performance, and focused imaging or endocrine studies should be performed based on genetic results to reduce repeated and invasive investigations for a small baby (PUBMED:33968844). Therefore, the presence of an inguinal hernia in a female infant should prompt clinicians to consider the possibility of a sex chromosome anomaly or a disorder of sex development, and appropriate genetic testing should be conducted to confirm the diagnosis and guide management.
Instruction: Multilevel bypass grafting: is it worth it? Abstracts: abstract_id: PUBMED:24704851 Multilevel bypass grafting: is it worth it? Background: Traditionally, multilevel arterial disease has been treated with an inflow procedure only but simultaneous multilevel bypass graft procedures have been attempted. However, these procedures are potentially high risk. We report our single-center experience of performing multilevel bypass grafts over the last 15 years. Methods: We retrospectively identified patients undergoing simultaneous aortoiliac and infrainguinal bypasses between January 1996 and January 2011 at a single district general hospital. Results: There were 32 multilevel procedures performed. Indication for surgery was acute ischemia in 10 (31.3%), critical ischemia without tissue loss in 10 (31.3%), with tissue loss in 10 (31.3%), and claudication in 2 (6.3%). In 23 (71.9%) cases inflow was restored using a direct iliofemoral or aortofemoral reconstruction. In the remaining 9 (28.1%), an extra-anatomic bypass was constructed. Two (6.3%) patients died within 30 days of surgery. Twenty-nine (90.6%) patients survived to discharge. Twenty-eight patients (87.5%) were alive 1 year after surgery. Limb salvage was 96.9%, 85.7%, and 75.9% at 30 days, 1 year, and 5 years, respectively. Twelve (37.5%) patients required a total of 19 further ipsilateral vascular procedures. Conclusions: Our results demonstrate that multilevel bypass procedures can be performed with good long-term outcomes and acceptable mortality, in what is typically a high-risk group with extensive comorbidities. In patients with severe critical limb ischaemia and tissue loss, who have a combination of aortoiliac and infrainguinal disease, there are significant benefits to a primary multilevel grafting procedure. abstract_id: PUBMED:30644764 A New Multilevel CART Algorithm for Multilevel Data with Binary Outcomes. 
The multilevel logistic regression model (M-logit) is the standard model for modeling multilevel data with binary outcomes. However, many assumptions and restrictions should be considered when applying this model for unbiased estimation. To overcome these limitations, we proposed a multilevel CART (M-CART) algorithm which combines the M-logit and single level CART (S-CART) within the expectation-maximization framework. Simulation results showed that the proposed M-CART provided substantial improvements in classification accuracy, sensitivity, and specificity over the M-logit, S-CART, and single level logistic regression model when modeling multilevel data with binary outcomes. This benefit of using M-CART was consistently found across different sample sizes and intra-class correlations, and when relationships between predictors and outcomes were nonlinear and nonadditive. abstract_id: PUBMED:34565811 Multiple Group Analysis in Multilevel Data Across Within-Level Groups: A Comparison of Multilevel Factor Mixture Modeling and Multilevel Multiple-Indicators Multiple-Causes Modeling. The purpose of this two-part study is to evaluate methods for multiple group analysis when the comparison group is at the within level with multilevel data, using a multilevel factor mixture model (ML FMM) and a multilevel multiple-indicators multiple-causes (ML MIMIC) model. The performance of these methods was evaluated integrally by a series of procedures testing weak and strong invariance models and testing latent group mean differences once factorial invariance holds. Two Monte Carlo simulation studies were conducted under the following conditions: number of clusters, cluster size, and the design type in groups. A multilevel one-factor confirmatory factor analysis (CFA) model as a research model in Study 1 was investigated to compare the results under different conditions with those of previous studies.
A multilevel two-factor CFA model as a research model in Study 2 was evaluated by fitting alternative models that can be applied when the model is complicated. The results indicated that the two methods were reasonable in multilevel multiple groups analysis across within-level groups. However, pros and cons were found between the two methods. In the multilevel one-factor CFA model, ML MIMIC model was slightly better when the sample size is small. In the multilevel complex model, two alternative models of ML FMM were recommended because the weak invariance testing of ML MIMIC was considerably time-consuming. Finally, it was shown that information criteria, which are criteria for determining whether factorial invariance is established, need to be applied differently according to the sample size conditions. Guidelines for this situation are provided. abstract_id: PUBMED:30573933 Multilevel Modeling of Cognitive Diagnostic Assessment: The Multilevel DINA Example. Many multilevel linear and item response theory models have been developed to account for multilevel data structures. However, most existing cognitive diagnostic models (CDMs) are unilevel in nature and become inapplicable when data have a multilevel structure. In this study, using the log-linear CDM as the item-level model, multilevel CDMs were developed based on the latent continuous variable approach and the multivariate Bernoulli distribution approach. In a series of simulations, the newly developed multilevel deterministic input, noisy, and gate (DINA) model was used as an example to evaluate the parameter recovery and consequences of ignoring the multilevel structures. 
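As background for the multilevel extension evaluated above, the standard unilevel DINA item response function can be sketched in a few lines. This is an illustrative reconstruction of the well-known model (slip parameter s, guessing parameter g), not code from the study:

```python
def dina_prob(alpha, q, slip, guess):
    """DINA item response probability: P(X = 1 | alpha) = (1 - s)^eta * g^(1 - eta),
    where eta = 1 only if the examinee masters every attribute the item requires."""
    eta = all(a >= qk for a, qk in zip(alpha, q))
    return 1.0 - slip if eta else guess

# A master of both required attributes succeeds with probability 1 - slip;
# a non-master can only succeed by guessing.
p_master = dina_prob(alpha=(1, 1, 0), q=(1, 1, 0), slip=0.1, guess=0.2)
p_nonmaster = dina_prob(alpha=(1, 0, 0), q=(1, 1, 0), slip=0.1, guess=0.2)
```

The multilevel versions discussed in the abstract keep this item-level kernel and add higher-level structure over persons.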
The results indicated that all parameters in the new multilevel DINA were recovered fairly well by using the freeware Just Another Gibbs Sampler (JAGS), and that ignoring multilevel structures by fitting the standard unilevel DINA model resulted in poor estimates of the student-level covariates, underestimated standard errors, and poor recovery of individuals' latent attribute profiles. An empirical example using the 2003 Trends in International Mathematics and Science Study eighth-grade mathematical test was provided. abstract_id: PUBMED:30819268 Three Issues in Multilevel Research. In this article, three important issues in organizational multilevel research are discussed and clarified, namely: (a) the interpretation of "cross-level direct effects" in theoretical and research multilevel models, (b) the specification of the emergence processes involved in higher-level constructs, and (c) the sample size recommendations for using multilevel statistical methods. By doing so, this article hopes to contribute to the improvement of organizational multilevel research. abstract_id: PUBMED:30758057 Multilevel analysis of matching behavior. Multilevel modeling has been considered a promising statistical tool in the field of the experimental analysis of behavior and may serve as a convenient statistical analysis for matching behavior because it structures data in groups (or levels) to account simultaneously for the within-subject and between-subject variances. Heretofore, researchers have sometimes erroneously pooled data from different subjects in a single analysis by using average ratios, average response and reinforcer rates, aggregation of subjects, etc. Unfortunately, this leads to loss of information and biased estimates, which can severely undermine generalization of the results. Instead, a multilevel approach is advocated to combine several subjects' matching behavior.
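In this literature, matching behavior is commonly formalized with Baum's generalized matching law, log(B1/B2) = a * log(R1/R2) + log b. The sketch below uses made-up data for two hypothetical subjects (not the paper's dataset) to contrast per-subject least-squares fits of the sensitivity parameter a with a naive fit to the pooled data points:

```python
def fit_matching(log_r, log_b):
    """Ordinary least squares for log(B1/B2) = a * log(R1/R2) + log b.
    Returns the sensitivity a and the bias log b."""
    n = len(log_r)
    mx, my = sum(log_r) / n, sum(log_b) / n
    sxx = sum((x - mx) ** 2 for x in log_r)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log_r, log_b))
    a = sxy / sxx
    return a, my - a * mx

# Two hypothetical subjects with different sensitivities (a = 0.8 vs a = 1.0)
subjects = {
    "s1": ([-1.0, 0.0, 1.0], [-0.8, 0.0, 0.8]),
    "s2": ([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0]),
}
per_subject_a = {s: fit_matching(x, y)[0] for s, (x, y) in subjects.items()}

# Naively pooling all points into one regression hides the between-subject
# variability that a multilevel model would estimate explicitly.
all_x = [x for xs, _ in subjects.values() for x in xs]
all_y = [y for _, ys in subjects.values() for y in ys]
pooled_a = fit_matching(all_x, all_y)[0]
```

Here the pooled fit lands between the two subjects' slopes, which is exactly the kind of information loss the abstract warns against; a multilevel model would instead report both the average slope and its between-subject variance.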
A reanalysis of previous data on matching behavior is provided to illustrate the method and point out its advantages. It illustrates that multilevel regression leads to better estimations, is more convenient, and offers more behavioral information. We hope this paper will encourage the use of multilevel modeling in the statistical practices of behavior analysts. abstract_id: PUBMED:32351423 Using Multilevel Mediation Model to Measure the Contribution of Beliefs to Judgments of Learning. Recent studies on judgments of learning (JOLs) suggest that beliefs play an important role in the formation of JOLs. The current article introduces a multilevel mediation model to quantify the contribution of beliefs to JOL formation when both JOLs and global belief-based predictions are measured from the same group of participants. Our examples of fitting the multilevel mediation model to hypothetical and published datasets revealed that it is feasible to use the multilevel mediation model to examine the mediation effect of beliefs on the relationship between a cue and JOLs, and quantitatively compare the effects of beliefs and processing fluency on JOLs in one model. Then we compared the current multilevel mediation model and the multilevel moderation model implemented in previous studies, and discussed their similarities and differences. Finally, a data simulation was performed to explain the inflation of Type I error for the multilevel mediation model when we regress global belief-based predictions on the cue, and suggestions about appropriate steps for conducting multilevel mediation analysis are proposed. abstract_id: PUBMED:35915358 Multilevel multivariate meta-analysis made easy: An introduction to MLMVmeta. The basic random effects meta-analytic model is overwhelmingly dominant in psychological research. Indeed, it is typically employed even when more complex multilevel multivariate meta-analytic models are warranted. 
In this paper, we aim to help overcome challenges so that multilevel multivariate meta-analytic models will be more often employed in practice. We do so by introducing MLMVmeta, an easy-to-use web application that implements multilevel multivariate meta-analytic methodology that is both specially tailored to contemporary psychological research and easily estimable, interpretable, and parsimonious, and by illustrating it across three case studies. The three case studies demonstrate the more accurate and extensive results that can be obtained via multilevel multivariate meta-analytic models. Further, they sequentially build in complexity, featuring increasing numbers of experimental factors and conditions, dependent variables, and levels; this in turn necessitates increasingly complex model specifications that also sequentially build upon one another. abstract_id: PUBMED:28952787 On the Multilevel Nature of Meta-Analysis: A Tutorial, Comparison of Software Programs, and Discussion of Analytic Choices. The term "multilevel meta-analysis" is encountered not only in applied research studies, but in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term "multilevel meta-analysis" is redundant since all meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis, the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs, underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike, and differences are noted in the output provided and the estimators available.
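For orientation, the basic random-effects model that all of these programs can estimate is simple enough to compute by hand. The sketch below uses the method-of-moments (DerSimonian-Laird) estimator of the between-study variance; it is illustrative only and is not tied to the tutorial's example data set:

```python
import math

def random_effects_meta(effects, variances):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimator."""
    w = [1.0 / v for v in variances]                              # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))   # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                 # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]                  # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical study effects and sampling variances
pooled, se, tau2 = random_effects_meta([0.30, 0.10, 0.55, 0.40],
                                       [0.01, 0.02, 0.015, 0.025])
```

Viewed this way, the "studies" are simply level-2 units and tau^2 is a level-2 variance component, which is the tutorial's point that every meta-analysis is already a multilevel model.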
This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices in estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed. abstract_id: PUBMED:29066984 Relating Measurement Invariance, Cross-Level Invariance, and Multilevel Reliability. Data often have a nested, multilevel structure, for example when data are collected from children in classrooms. This kind of data complicates the evaluation of reliability and measurement invariance, because several properties can be evaluated at both the individual level and the cluster level, as well as across levels. For example, cross-level invariance implies equal factor loadings across levels, which is needed to give latent variables at the two levels a similar interpretation. Reliability at a specific level refers to the ratio of true score variance over total variance at that level. This paper aims to shed light on the relation between reliability, cross-level invariance, and strong factorial invariance across clusters in multilevel data.
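The level-specific variance ratio defined above is closely related to the intraclass correlation. As a minimal illustration (not from the paper), ICC(1) for balanced clusters can be estimated from one-way ANOVA mean squares:

```python
def icc1(groups):
    """One-way ANOVA estimate of ICC(1) for balanced clusters:
    (MSB - MSW) / (MSB + (n - 1) * MSW), i.e. the share of total variance
    attributable to between-cluster (true score) differences."""
    k, n = len(groups), len(groups[0])
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Three clusters whose means differ strongly relative to within-cluster spread
icc_high = icc1([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```

A value near 1 means most variance sits between clusters; values near 0 mean the clustering carries little information.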
However, a retrospective study conducted over 15 years at a single district general hospital suggests that multilevel bypass procedures can be performed with good long-term outcomes and acceptable mortality rates, especially in a high-risk group with extensive comorbidities (PUBMED:24704851). The study reported that out of 32 multilevel procedures performed, the indications for surgery included acute ischemia (31.3%), critical ischemia without tissue loss (31.3%), critical ischemia with tissue loss (31.3%), and claudication (6.3%). The majority of cases (71.9%) involved inflow restoration using direct iliofemoral or aortofemoral reconstruction, while the rest (28.1%) required extra-anatomic bypass construction. The 30-day mortality rate was 6.3%, with 90.6% of patients surviving to discharge. One year after surgery, 87.5% of patients were alive, and limb salvage rates were 96.9% at 30 days, 85.7% at 1 year, and 75.9% at 5 years. Additionally, 37.5% of patients required further ipsilateral vascular procedures. The results of this study indicate that in patients with severe critical limb ischemia and tissue loss, who have a combination of aortoiliac and infrainguinal disease, there are significant benefits to a primary multilevel grafting procedure. Therefore, based on these findings, multilevel bypass grafting can be considered worth it for certain patient groups, offering a viable option with good survival and limb salvage rates despite the high-risk nature of the procedure (PUBMED:24704851).
Instruction: Efficacy of neck dissection: are surgical volumes important? Abstracts: abstract_id: PUBMED:19296506 Efficacy of neck dissection: are surgical volumes important? Unlabelled: OBJECTIVES/HYPOTHESIS: Surgical volumes have been shown in many fields to reflect expertise and to be associated with improved clinical outcomes. We sought to test similar hypotheses that neck dissection is a volume-sensitive procedure wherein the number of lymph nodes harvested and clinical outcomes are correlated with the number of neck dissections performed. Methods: We conducted a retrospective analysis of 375 neck dissections. The number of nodes harvested was examined by multiple linear regression before and after mathematical correction for pathologists' variation over time. The prior experiences of the surgeons were also averaged, depending on whether there was regional recurrence after at least 9 months of follow-up. Results: More nodes were found by the pathologists over time, with 2.1 more nodes being found on average for each successive year (P < .001). After correcting for this effect of time, there remained a significant relationship between surgeons' experience and number of nodes harvested (P < .003). Additionally, cases that recurred were operated on by less-experienced surgeons (P = .02). Conclusions: We have demonstrated that there is a "learning curve" for neck dissection. Our most experienced surgeon harvested on average 11 more nodes than the group of inexperienced surgeons did from similar specimens. The effect of experience remains significant even without adjusting for time (P < .001). Surgical experience also affects clinical outcome months after the dissection. This retrospective review supports surgical volume as an indicator of expertise in neck dissection that could be used to assess trainees' progress and for quality assurance in large head and neck units. Laryngoscope, 2009.
abstract_id: PUBMED:31005201 Selection of lymph node target volumes for definitive head and neck radiation therapy: a 2019 Update. Background And Purpose: In 2000, a panel of experts published a proposal for the selection of lymph node target volumes for definitive head and neck radiation therapy (Radiother Oncol, 2000; 56: 135-150). Here, this selection is updated and extended to also cover primary sites not previously covered. Patients And Methods: The lymphatic spread of head and neck cancers into neck lymph nodes was comprehensively reviewed based on radiological, surgical and pathological literature regarding both initial involvement and patterns of failure. Then a panel of worldwide head and neck radiotherapy experts agreed on a consensus for the selection of both high- and low-risk lymph node target volumes for the node negative and the node positive neck. Results: An updated selection of lymph node target volumes is reported for oral cavity, oropharynx, hypopharynx, larynx, nasopharynx, paranasal sinuses, nasal cavity and carcinoma of unknown primary as a function of the nodal staging (UICC 8th edition). Conclusions: The selection of lymph node target volumes for head and neck cancers treated with IMRT/VMAT or other highly conformal techniques (e.g. proton therapy) requires a rigorous approach. This updated proposal should help clinicians select lymph node target volumes and contribute to increased consistency. abstract_id: PUBMED:31101541 Delineation of the primary tumour clinical target volumes and neck node levels selection of parotid cancers Salivary gland tumours are uncommon and show a large diversity of histological types. This article presents a synthesis of the patterns and paths of invasion of parotid gland tumours in order to propose an approach to the delineation of primary tumour clinical target volumes and to the selection of lymph node target volumes.
This article does not discuss treatment indications but defines the clinical target volumes to treat if radiotherapy is indicated. As the postoperative situation is the most frequent, delineation of the primary tumour clinical target volume is based on an anatomical approach. abstract_id: PUBMED:11715313 Selection and delineation of lymph node target volumes in head and neck neoplasms This article reviews all clinical and pathological data available in the literature supporting the concept of selectivity in the neck nodes to be included in the Clinical Target Volume for head and neck squamous cell carcinoma. Using the terminology of neck node levels and the guidelines for the surgical delineation of these levels proposed by the Committee for Head and Neck Surgery and Oncology of the American Academy for Otolaryngology-Head and Neck Surgery, recommendations are proposed for both the selection and the delineation of lymph node target volumes. abstract_id: PUBMED:25663644 Responsiveness and minimal important changes for the Neck Disability Index and the Neck Pain Disability Scale in Italian subjects with chronic neck pain. Purpose: To evaluate the responsiveness and minimal important changes (MICs) of the Italian Neck Disability Index (NDI) and Neck Pain Disability Scale (NPDS), and to determine which questionnaire was the most responsive in subjects with chronic neck pain (NP). Methods: At the beginning and end of an 8-week rehabilitation programme, 200 patients completed the NDI and NPDS. After the programme, the global perceived effect (GPE) was also evaluated and collapsed to produce a dichotomous outcome (improved vs. stable). Responsiveness was calculated by distribution-based [effect size (ES); standardised response mean (SRM)] and anchor-based methods (ROC curves; correlations between change scores of the NPDS and NDI, and GPE). ROC curves were also used to compute the best cutoff levels between improved and stable subjects (MICs).
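The anchor-based cutoff search just described can be sketched as follows. The change scores and improved/stable labels below are hypothetical (not the study data), and the cutoff is chosen by maximizing the Youden index, one common criterion for the optimal ROC cutoff:

```python
def mic_by_roc(change_scores, improved):
    """Pick the change-score cutoff maximizing sensitivity + specificity
    (Youden index) for separating improved from stable patients."""
    best = None
    for cut in sorted(set(change_scores)):
        tp = sum(1 for c, imp in zip(change_scores, improved) if imp and c >= cut)
        fn = sum(1 for c, imp in zip(change_scores, improved) if imp and c < cut)
        tn = sum(1 for c, imp in zip(change_scores, improved) if not imp and c < cut)
        fp = sum(1 for c, imp in zip(change_scores, improved) if not imp and c >= cut)
        sens = tp / (tp + fn)   # denominators are the fixed group sizes
        spec = tn / (tn + fp)
        if best is None or sens + spec > best[0]:
            best = (sens + spec, cut, sens, spec)
    return best[1], best[2], best[3]

# Hypothetical points improved on a questionnaire, with anchor-based labels
changes = [2, 3, 12, 5, 11, 9, 1, 14, 8, 0]
labels = [False, False, True, False, True, True, False, True, True, False]
mic, sens, spec = mic_by_roc(changes, labels)
```

The returned cutoff is the MIC estimate, reported together with the sensitivity and specificity it achieves, as in the abstract's Results.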
Results: The ES was 0.66 and 0.73, and the SRM was 1.09-1.26, for the NDI and NPDS, respectively. The ROC analyses revealed AUCs of 0.96 and 0.91 for the NDI and NPDS, respectively; the MICs were 7 percentage points for the NDI (sensitivity: 98%; specificity: 81%) and 10 for the NPDS (sensitivity: 93%; specificity: 83%), showing equivalent responsiveness properties. Baseline NPDS scores did not affect the MIC estimate for the NPDS, while, for the NDI, higher MICs were found for patients with the worst disability levels. Correlations between change scores of the NDI and NPDS and the GPE were, respectively, high (0.71) and moderate (0.59). Conclusions: The Italian NDI and NPDS were equally sensitive in detecting clinical changes in subjects with chronic NP undergoing rehabilitation. We recommend taking the MICs into account when assessing patient improvement or planning studies in this clinical context. abstract_id: PUBMED:23351366 Sex-specific prediction of neck muscle volumes. Biomechanical analyses of the head and neck system require knowledge of neck muscle forces, which are often estimated from neck muscle volumes. Here we use magnetic resonance images (MRIs) of 17 subjects (6 females, 11 males) to develop a method to predict the volumes of 16 neck muscles by first predicting the total neck muscle volume (TMV) from subject sex and anthropometry, and then predicting individual neck muscle volumes using fixed volume proportions for each neck muscle. We hypothesized that the regression equations for total muscle volume as well as individual muscle volume proportions would be sex specific. We found that male TMV was about 59% larger than female TMV (females: 510±43 cm(3), males: 814±64 cm(3); p<0.0001) and that TMV (in cm(3)) was best predicted by a regression equation that included sex (male=0, female=1) and neck circumference (NC, in cm): TMV=269+13.7NC-233Sex (adjusted R(2)=0.868; p<0.01).
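The reported two-step prediction method maps directly onto code. The regression coefficients below are the ones quoted in the abstract; the per-muscle proportion used in the example is a made-up placeholder, since the study's actual proportions are not given here:

```python
def predict_total_neck_muscle_volume(neck_circumference_cm, female):
    """Reported regression: TMV (cm^3) = 269 + 13.7*NC - 233*Sex (male=0, female=1)."""
    return 269.0 + 13.7 * neck_circumference_cm - 233.0 * (1 if female else 0)

def predict_muscle_volume(tmv, proportion):
    """Step 2: individual muscle volume as a fixed proportion of TMV."""
    return tmv * proportion

tmv_male = predict_total_neck_muscle_volume(40.0, female=False)   # 817.0 cm^3
tmv_female = predict_total_neck_muscle_volume(40.0, female=True)  # 584.0 cm^3
# The 0.10 proportion below is a hypothetical placeholder, not a study value:
scm_volume = predict_muscle_volume(tmv_male, 0.10)
```

At the same neck circumference, the sex term shifts the prediction down by 233 cm(3) for females, consistent with the group means reported above.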
Individual muscle volume proportions were not sex specific for most neck muscles, although small sex differences existed for three neck muscles (obliqus capitis inferior, longus capitis, and sternocleidomastoid). When predicting individual muscle volumes in subjects not used to develop the model, coefficients of concordance ranged from 0.91 to 0.99. This method of predicting individual neck muscle volumes has the advantage of using only one sex-specific regression equation and one set of sex-specific volume proportions. These data can be used in biomechanical models to estimate muscle forces and tissue loads in the cervical spine. abstract_id: PUBMED:37162484 Responsiveness and minimal clinically important changes to physical therapy interventions of Persian versions of copenhagen neck functional disability index, neck bournemouth questionnaire and spine functional index questionnaires in people with chronic neck pain. Background And Purpose: Chronic neck pain has a significant impact on the patient's quality of life. Specific outcome measures like Copenhagen Neck Functional Disability Index (CNFDI), Neck Bournemouth questionnaire (NBQ), and Spine Functional Index (SFI) are reliable and valid measures that have been used for comprehensively evaluating neck-related disabilities. However, responsiveness has not yet been investigated. The aim of this study was to examine the responsiveness and clinically meaningful changes of the CNFDI, NBQ, and SFI for Persian patients with chronic neck pain. Methods: Prospective recruitment of 145 patients with chronic neck pain was conducted. Participants completed the Persian versions of CNFDI, NBQ, and SFI at baseline and after 4 weeks of physical therapy. Also, the Global Rating of Change Scale (GRCS) was completed in the post-intervention assessment. The Receiver Operating Characteristics (ROC) curve and correlational analysis were used for evaluating the responsiveness. 
In addition, the Minimal Clinically Important Change (MCIC) was determined. Results: All selected outcome measures revealed an area under the curve of 0.96. The MCICs of 10, 7, and 10 points were found for the CNFDI, NBQ, and SFI, respectively. The results showed an excellent Gamma correlation coefficient of the CNFDI (γ = 0.98), NBQ (γ = 0.99), and SFI (γ = 0.99) with the GRCS. Conclusion: The Persian versions of the CNFDI, NBQ, and SFI have acceptable responsiveness properties for evaluating the changes in health status in patients with chronic neck pain following physical therapy interventions. abstract_id: PUBMED:23651866 Predictors of weight loss during conformal radiotherapy for head and neck cancers - how important are planning target volumes? Aims: Nutritional compromise is common during high dose radiotherapy (RT) or chemoradiotherapy (CRT) for head and neck cancers. We aimed to identify the factors that determine nutritional outcome for head and neck cancer patients during radiotherapy. Methods: Data from 103 patients with head and neck cancer treated with highly conformal radiotherapy to doses of 60 Gy or more in 30-33 fractions in the adjuvant or definitive setting was analyzed. All patients received complex 3D conformal radiotherapy (3DCRT) or intensity modulated radiotherapy (IMRT). Patients received regular nutritional counseling and need-based interventions. Their weight was recorded at the beginning and end of radiotherapy. Using univariate and multivariate models we tested possible predictors of weight loss of >5% and NG tube requirement. Results: The mean weight loss was 3.8%. The incidence of weight loss >5% was 37.9% and NG tube placement was 24.3%. The factors significantly associated with >5% weight loss in the univariate analysis were tumor site (oro-hypopharyngeal vs. others), definitive vs. adjuvant RT; prescription dose of >60 Gy vs. 60 Gy; CRT vs. 
RT alone; prescription dose planning target volume (PTV) >235 cc; and total PTV volume >615 cc. Age, sex, T stage, N stage and modality (3DCRT/IMRT) were not significant. In multivariate analysis, the total PTV volume, prescription dose PTV volume and use of chemotherapy were significant after controlling for other factors. Patients could be risk stratified based on the use of CRT and large PTV volumes. Patients with none, one or both factors had a likelihood of >5% weight loss of 0%, 30.3%, and 56.9% (p < 0.001) and likelihood of NG tube placement of 5.3%, 15.2% and 37.3% (p = 0.007). Conclusions: It is possible to predict weight loss and NG tube requirements from disease- and treatment-related factors. PTV volumes are important predictors of nutritional compromise. Risk stratification may enable more focused counseling and identification of patients who require preventive interventions. abstract_id: PUBMED:24011671 Preoperative radiotherapy for rectal cancer: target volumes Preoperative radiochemotherapy followed by total mesorectal excision is the standard of care for T3-T4-N0 or TxN1 rectal cancer. Defining target volumes relies on the patterns of nodal and locoregional failures. The lower limit of the clinical target volume also depends on the type of surgery. Conformal radiotherapy with or without intensity-modulated radiotherapy implies an accurate definition of volumes and inherent margins in the context of mobile organs such as the upper rectum. Tumour staging has recently improved with newer imaging techniques such as MRI with or without USPIO and FDG-PET-CT. The role of PET-CT remains unclear despite encouraging results, and MRI is a helpful tool for reliable delineation of the gross tumour volume. Co-registration of such modalities with the planning CT may guide radiation oncologists through gross tumour volume delineation. Acute digestive toxicity can be reduced with intensity-modulated radiation therapy.
Several guidelines and CT-based atlases of target volumes in rectal cancer give the radiation oncologist a sound basis for reproducible contouring. abstract_id: PUBMED:34272632 Methodological approach for determining the Minimal Important Difference and Minimal Important Change scores for the European Organisation for Research and Treatment of Cancer Head and Neck Cancer Module (EORTC QLQ-HN43) exemplified by the Swallowing scale. Purpose: The aim of this study was to explore what methods should be used to determine the minimal important difference (MID) and minimal important change (MIC) in scores for the European Organisation for Research and Treatment of Cancer Head and Neck Cancer Module, the EORTC QLQ-HN43. Methods: In an international multi-centre study, patients with head and neck cancer completed the EORTC QLQ-HN43 before the onset of treatment (t1), three months after baseline (t2), and six months after baseline (t3). The methods explored for determining the MID were: (1) group comparisons based on performance status; (2) 0.5 and 0.3 standard deviation and standard error of the mean. The methods examined for the MIC were patients' subjective change ratings and receiver-operating characteristics (ROC) curves, predictive modelling, standard deviation, and standard error of the mean. The EORTC QLQ-HN43 Swallowing scale was used to investigate these methods. Results: From 28 hospitals in 18 countries, 503 patients participated. Correlations with the performance status were |r| < 0.4 in 17 out of 19 scales; hence, performance status was regarded as an unsuitable anchor. The ROC approach yielded an implausible MIC and was also discarded. The remaining approaches worked well and delivered MID values ranging from 10 to 14; the MIC for deterioration ranged from 8 to 16 and the MIC for improvement from -3 to -14.
Conclusions: For determining MIDs of the remaining scales of the EORTC QLQ-HN43, we will omit comparisons of groups based on the Karnofsky Performance Score. Other external anchors are needed instead. Distribution-based methods worked well and will be applied as a starting strategy for analyses. For the calculation of MICs, subjective change ratings, predictive modelling, and standard-deviation based approaches are suitable methods whereas ROC analyses seem to be inappropriate. Answer: Yes, surgical volumes are important in the efficacy of neck dissection. A retrospective analysis of 375 neck dissections demonstrated that there is a "learning curve" for neck dissection, with more experienced surgeons harvesting significantly more lymph nodes than less experienced surgeons. The study found that the most experienced surgeon in the group harvested on average 11 more nodes than similar specimens submitted by the group of inexperienced surgeons. Additionally, cases that recurred were operated on by less-experienced surgeons, suggesting that surgical experience also affects clinical outcomes months after the dissection (PUBMED:19296506). This supports the idea that surgical volume is an indicator of expertise in neck dissection and could be used to assess trainees' progress and for quality assurance in large head and neck units.
Instruction: Goal-directed upper limb movements by children with and without DCD: a window into perceptuo-motor dysfunction? Abstracts: abstract_id: PUBMED:21643625 Proteolysis-inducing factor core peptide mediates dermcidin-induced proliferation of hepatic cells through multiple signalling networks. Dermcidin is a candidate oncogene capable of increasing the number of cultured neuronal, breast cancer and prostate cancer cells and improving the survival of hepatic cells. The dermcidin gene encodes the proteolysis-inducing factor core peptide (PIF-CP) and the skin antimicrobial peptide DCD-1. The peptide responsible for inducing proliferation of cells and the mechanisms involved are unknown. In this study, we confirmed a proliferative effect of dermcidin overexpression of 20% (p<0.02) in the HuH7 human hepatic cell line. Proliferation was abrogated by prevention of PIF-CP translation or inactivation of its calcineurin-like phosphatase domain by site-directed mutagenesis. Prevention of DCD-1 translation had no effect. Treatment of cells with a 30 amino acid synthetic PIF-CP induced an analogous increase in proliferation of 14%. Microarray analysis of PIF-CP-treated cells revealed low but significant changes in 111 potential mediator genes. Pathway analysis revealed several gene networks involved in the cellular response to the peptide, one with VEGFB as a hub and two other networks converging on FOS and MYC. Quantitative PCR confirmed direct upregulation of VEGFB. These data reveal PIF-CP as the key mediator of dermcidin-induced proliferation and demonstrate induction of key oncogenic pathways. abstract_id: PUBMED:16685272 Dermcidin expression in hepatic cells improves survival without N-glycosylation, but requires asparagine residues. 
Proteolysis-inducing factor, a cachexia-inducing tumour product, is an N-glycosylated peptide with homology to the unglycosylated neuronal survival peptide Y-P30 and a predicted product of the dermcidin gene, a pro-survival oncogene in breast cancer. We aimed to investigate whether dermcidin is pro-survival in liver cells, in which proteolysis-inducing factor induces catabolism, and to determine the role of potentially glycosylated asparagine residues in this function. Reverse cloning of proteolysis-inducing factor demonstrated approximately 100% homology with the dermcidin cDNA. This cDNA was cloned into pcDNA3.1+ and both asparagine residues removed using site-directed mutagenesis. In vitro translation demonstrated signal peptide production, but no difference in molecular weight between the products of native and mutant vectors. Immunocytochemistry of HuH7 cells transiently transfected with V5-His-tagged dermcidin confirmed targeting to the secretory pathway. Stable transfection conferred protection against oxidative stress. This was abrogated by mutation of both asparagines in combination, but not by mutation of either asparagine alone. These findings suggest that dermcidin may function as an oncogene in hepatic as well as breast cells. Glycosylation does not appear to be required, but the importance of asparagine residues suggests a role for the proteolysis-inducing factor core peptide domain. abstract_id: PUBMED:17626247 Dermcidin expression confers a survival advantage in prostate cancer cells subjected to oxidative stress or hypoxia. Background: Dermcidin (DCD) is a candidate survival gene in breast cancer. DCD gene expression has been identified in prostate cancer cell lines and primary prostate cancer tissue. The DCD protein is composed of proteolysis-inducing factor-core peptide (PIF-CP) and the skin antimicrobial DCD-1. 
The aim of this work was to: (i) establish if the DCD gene confers resistance of prostate cancer cells to hypoxia and oxidative stress; (ii) identify the component of the gene transcript responsible for this effect. Methods: Site-directed mutagenesis was used to create mutant DCD vectors. PC-3M prostate cancer cells were stably transfected with pcDNA3.1+ vectors encoding the entire DCD cDNA, mutant DCD vectors, or a control empty vector. Oxidative stress was produced using menadione, glucose oxidase, or hydrogen peroxide. Cell hypoxia was induced by incubation at 0.2% oxygen. Results: Comparison of cell growth showed a 54.5% relative-proliferative advantage for the DCD-transfected PC-3M cells compared with sham-transfected cells after 8 days of cell growth (P = 0.03). Overexpression of DCD provided upto 36% absolute survival advantage over sham-transfected cells following induction of oxidative stress or hypoxia (P = 0.004). On exposure to hypoxia or oxidative stress PC-3M cells overexpressing the entire DCD gene had upto 42% survival advantage over those transfectants lacking the PIF-CP sequence (P = 0.004). Conclusions: DCD and PIF-CP are proliferation and survival factors in prostate cancer cells subjected to stressors found in the prostate cancer microenvironment. Thus, DCD and specifically PIF-CP are potential targets for the treatment of prostate cancer. abstract_id: PUBMED:28940999 Hearts transplanted after circulatory death in children: Analysis of the International Society for Heart and Lung Transplantation registry. We aimed to describe worldwide DCD HT experience in children using the International Society for Heart and Lung Transplantation Registry. The Registry was queried for primary HT performed in children (2005-2014). Kaplan-Meier analysis was used to assess survival for recipients grouped by DCD or DBD hearts. Recipient characteristics were compared between DCD and DBD and between survivors and non-survivors of DCD HT. 
Among 3877 pediatric HT performed, 21 (0.5%) were DCD. DCD 1-year survival was 61% vs 91% DBD, P < .01. DCD recipients were more often supported by ECMO pre-HT (24% vs 6%, P < .001) and more often receiving inhaled nitric oxide (10% vs 0.6%, P < .001) compared to DBD. Older DCD recipients had significantly lower 1-year survival of 57% vs 93% for DBD, P < .01. Survival for infant DCD recipients was not statistically different to DBD recipients (survival 62% at 1 year and 62% at 5 years for DCD vs 85% at 1 year and 77% at 5 years for DBD, P = .15). Recipients of DCD HT who died were more often supported by ECMO pre-HT (56% non-survivors vs 0% survivors, P = .004) and receiving mechanical ventilation (44% vs 0%, P = .012). DCD HT is uncommon in children. DCD-independent factors in recipients may have contributed to worse survival as DCD recipients who died were more often supported by ECMO and mechanical ventilation. More research is needed to identify donor factors and recipient factors that contribute to mortality after DCD HT. abstract_id: PUBMED:26101262 Dermcidin: a skeletal muscle myokine modulating cardiomyocyte survival and infarct size after coronary artery ligation. Aims: Coronary artery disease is the leading cause of death in western countries, and its association with lower extremity peripheral artery disease (LE-PAD) represents an independent predictor of worse outcome. However, the molecular mechanisms underlying these effects are currently unknown. Methods And Results: To investigate these processes, we used in vitro approaches and several mouse models: (i) unilateral limb ischaemia by left common femoral artery ligation [peripheral ischaemia (PI), n = 38]; (ii) myocardial infarction by permanent ligation of the left descending coronary artery (MI, n = 40); (iii) MI after 5 weeks of limb ischaemia (PI + MI, n = 44); (iv) sham operation (SHAM, n = 20). 
Compared with MI, PI + MI hearts were characterized by a significant increase in cardiomyocyte apoptosis, larger infarct areas, and decreased cardiac function. By using a proteomic approach, we identified a ≅ 8 kDa circulating peptide, Dermcidin (DCD), secreted by ischaemic skeletal muscles, enhancing cardiomyocytes apoptosis under hypoxic conditions and infarct size after permanent coronary artery ligation. siRNA interference experiments to reduce DCD circulating levels significantly reduced infarct size and ameliorated cardiac function after MI. Conclusion: Our data demonstrate that chronic limb ischaemia activates detrimental pathways in the ischaemic heart through humoral mechanisms of remote organ crosstalk. Thus, DCD may represent a novel important myokine modulating cardiomyocyte survival and function. abstract_id: PUBMED:19299728 Vitamin d-directed rheostatic regulation of monocyte antibacterial responses. The active form of vitamin D, 1,25-dihydroxyvitamin D (1,25(OH)(2)D) enhances innate immunity by inducing the cathelicidin antimicrobial peptide (hCAP). In monocytes/macrophages, this occurs primarily in response to activation of TLR, that induce expression of the vitamin D receptor and localized synthesis of 1,25(OH)(2)D from precursor 25-hydroxyvitamin D(3) (25OHD). To clarify the relationship between vitamin D and innate immunity, we assessed changes in hCAP expression in vivo and ex vivo in human subjects attending a bone clinic (n = 50). Of these, 38% were vitamin D-insufficient (<75 nM 25OHD) and received supplementation with vitamin D (50,000 IU vitamin D(2) twice weekly for 5 wk). Baseline 25OHD status or vitamin D supplementation had no effect on circulating levels of hCAP. Therefore, ex vivo changes in hCAP for each subject were assessed using peripheral blood monocytes cultured with 10% autologous serum (n = 28). 
Under these vitamin D "insufficient" conditions, following treatment with the TLR2/1 ligand 19 kDa lipopeptide or the TLR4 ligand LPS, monocytes showed increased expression of the vitamin D-activating enzyme CYP27b1 (5- and 5.5-fold, respectively, both p < 0.01) but decreased expression of hCAP mRNA (10-fold and 30-fold, both p < 0.001). Following treatment with 19 kDa, expression of hCAP: 1) correlated with 25OHD levels in serum culture supplements (R = 0.649, p < 0.001); 2) was significantly enhanced by exogenous 25OHD (5 nM); and 3) was significantly enhanced with serum from in vivo vitamin D-supplemented patients. These data suggest that a key role of vitamin D in innate immunity is to maintain localized production of antibacterial hCAP following TLR activation of monocytes. abstract_id: PUBMED:29066724 Structural basis for the bacterial membrane insertion of dermcidin peptide, DCD-1L. Human dermcidin (DCD) is an antimicrobial peptide secreted constitutively by sweat glands. The anionic derivative, DCD-1L, comprises the N-terminal 47 residues of DCD and one additional leucine residue. A previous NMR structure of DCD-1L in 50% TFE showed a partial helical conformation, and its crystal structure in the presence of Zn2+ outlined a hexameric linear α-helical bundle. Three different models to describe membrane insertion were proposed but no conclusion was drawn. In the current study, the NMR structure of DCD-1L in SDS micelles showed an "L-shaped" molecule with three fully formed α-helices connected by flexible turns. Formation of these helices in DCD-1L in the presence of POPG vesicles suggests that the acidic C-terminal region of DCD-1L can suppress the binding of DCD-1L to POPG vesicles at basic but not acidic pH. Mutations of charged residues in the N-terminal and C-terminal regions of DCD-1L cause differences in POPG binding, suggesting distinct functional roles for these two regions.
Charged residues from these two regions are also found to differentially affect Zn2+ coordination and aggregation of DCD-1L in the absence or presence of SDS, as monitored by 1D NMR. Our data agrees with one of the three models proposed. abstract_id: PUBMED:21576948 Defective sweating responses in atopic dermatitis. While sweat is thought to be one of the important factors provoking exacerbations of clinical symptoms in atopic dermatitis (AD), little attention has been drawn to a beneficial role of sweat in the development of AD lesions. However, if the permeability barrier and antimicrobial barrier dysfunction represents the primary event in the development of AD, an evaluation of sweating responses in AD is a logical place to look for changes that predispose to the disease. In this regard, there have been conflicting data regarding whether sweating responses are impaired, normal or enhanced in AD patients. Consistent with the results of most recent studies, our recent study showed that most AD patients exhibit a defective ability to deliver sweat to the skin surface in response to thermal stress. Despite such defective sweating responses observed in the most part, a marked augmentation in the sweating response with delayed kinetics can be paradoxically detected in some sweating glands of these AD patients, indicating compensatory hyperhidrosis. Dermcidin, a new antimicrobial peptide exclusively produced by sweat glands, was abundantly detected not only in the sweat glands and ducts, and the lumen, but also in the dermal tissues adjacent to the sweat glands. These results indicate that the sweat may be retained in the lumen or pour into the dermal tissues, thereby causing inflammation. Thus, chronic inflammation in AD may be caused in part by a dysfunction of the sweat delivery system. abstract_id: PUBMED:23332763 Folding and self-assembly of the TatA translocation pore based on a charge zipper mechanism. 
We propose a concept for the folding and self-assembly of the pore-forming TatA complex from the Twin-arginine translocase and of other membrane proteins based on electrostatic "charge zippers." Each subunit of TatA consists of a transmembrane segment, an amphiphilic helix (APH), and a C-terminal densely charged region (DCR). The sequence of charges in the DCR is complementary to the charge pattern on the APH, suggesting that the protein can be "zipped up" by a ladder of seven salt bridges. The length of the resulting hairpin matches the lipid bilayer thickness, hence a transmembrane pore could self-assemble via intra- and intermolecular salt bridges. The steric feasibility was rationalized by molecular dynamics simulations, and experimental evidence was obtained by monitoring the monomer-oligomer equilibrium of specific charge mutants. Similar "charge zippers" are proposed for other membrane-associated proteins, e.g., the biofilm-inducing peptide TisB, the human antimicrobial peptide dermcidin, and the pestiviral E(RNS) protein. abstract_id: PUBMED:6096669 Cloning and analysis of pif, replication and leading regions of the F plasmid. We describe the molecular cloning of BglII fragments of the hybrid plasmid pRS5 (pSC101 and EcoRI fragments of F; f7, f5, f3 and f6). The clones isolated were examined for the expression of F-specified replication, incompatibility, mobilization and inhibition of T7 bacteriophage multiplication. Proteins directed by the BglII clones were labelled in Escherichia coli K12 maxicells and analyzed by SDS-polyacrylamide gel electrophoresis. The sizes of previously reported proteins, encoded by the replication, incompatibility and leading regions encompassed by these plasmids have been confirmed in this study. 
In addition, the results demonstrate that a pif gene, which encodes an 80,000 dalton polypeptide essential for the inhibition of T7 phage multiplication, is located on the BglII fragment that spans the junction of EcoRI fragments f7 and f5. Answer: The provided abstracts do not contain information directly related to goal-directed upper limb movements by children with and without Developmental Coordination Disorder (DCD) or insights into perceptuo-motor dysfunction in this context. The term "DCD" in the abstracts refers to "dermcidin," which is an antimicrobial peptide involved in various physiological and pathological processes, including cancer cell survival and proliferation (PUBMED:21643625, PUBMED:16685272, PUBMED:17626247, PUBMED:28940999, PUBMED:26101262, PUBMED:19299728, PUBMED:29066724, PUBMED:21576948, PUBMED:23332763, PUBMED:6096669). Developmental Coordination Disorder, on the other hand, is a condition characterized by poor coordination and clumsiness, which is not discussed in the provided abstracts. Therefore, the question cannot be answered based on the given abstracts.
Instruction: Can unannounced standardized patients assess professionalism and communication skills in the emergency department? Abstracts: abstract_id: PUBMED:19673703 Can unannounced standardized patients assess professionalism and communication skills in the emergency department? Objectives: The authors piloted unannounced standardized patients (USPs) in an emergency medicine (EM) residency to test feasibility, acceptability, and performance assessment of professionalism and communication skills. Methods: Fifteen postgraduate year (PGY)-2 EM residents were scheduled to be visited by two USPs while working in the emergency department (ED). Multidisciplinary support was utilized to ensure successful USP introduction. Scores (% well done) were calculated for communication and professionalism skills using a 26-item, behaviorally anchored checklist. Residents' attitudes toward USPs and USP detection were also surveyed. Results: Of 27 USP encounters attempted, 17 (62%) were successfully completed. The detection rate was 44%. Eighty-three percent of residents who encountered a USP felt that the encounter did not hinder daily practice and did not make them uncomfortable (86%) or suspicious of patients (71%). Overall, residents received a mean score of 60% for communication items rated "well done" (SD +/- 28%, range = 23%-100%) and 53% of professionalism items "well done" (SD +/- 20%, range = 23%-85%). Residents' communication skills were weakest for patient education and counseling (mean = 43%, SD +/- 31%), compared with information gathering (68%, SD +/- 36% and relationship development (62%, SD +/- 32%). Scores of residents who detected USPs did not differ from those who had not. Conclusions: Implementing USPs in the ED is feasible and acceptable to staff. The unpredictability of the ED, specifically resident schedules, accounted for most incomplete encounters. 
USPs may represent a new way to assess real-time resident physician performance without the need for faculty resources or the bias introduced by direct observation. abstract_id: PUBMED:28888419 Clinical Skills and Professionalism: Assessing Orthopaedic Residents With Unannounced Standardized Patients. Objective: We developed a series of orthopedic unannounced standardized patient (USP) encounters for the purpose of objective assessment of residents during clinic encounters. Design: Consecutive case-series. Setting: NYU-Langone Multi-center Academic University Hospital System. Participants: NYU-Langone/Hospital for Joint Diseases Orthopedic Surgery residents; 48 consecutive residents assessed. Methods: Four orthopedic cases were developed. USPs presented themselves as patients in outpatient clinics. Residents were evaluated on communication skills (information gathering, relationship development, and education and counseling). USPs globally rated whether they would recommend the resident. Results: Forty-eight USP encounters were completed over a 2-year period. Communication skills items were rated at 51% (±30) "well done." Education and counseling skills were rated as the lowest communication domain at 33% (±33). Residents were globally recommended based on communication skills in 63% of the encounters recommended in 70% of encounters based on both professionalism and medical competence. Conclusions: The USP program has been useful in assessing residents' clinical skills, interpersonal and communications skills, and professionalism. Use of USP in orthopedic surgery training programs can be an objective means for trainee assessment. abstract_id: PUBMED:24797843 A standardized patient model to teach and assess professionalism and communication skills: the effect of personality type on performance. 
Introduction: Teaching and assessing professionalism and interpersonal communication skills can be more difficult for surgical residency programs than teaching medical knowledge or patient care, for which many structured educational curricula and assessment tools exist. Residents often learn these skills indirectly, by observing the behavior of their attendings when communicating with patients and colleagues. The purpose of this study was to assess the results of an educational curriculum we created to teach and assess our residents in professionalism and communication. Methods: We assessed resident and faculty prior education in delivering bad news to patients. Residents then participated in a standardized patient (SP) encounter to deliver bad news to a patient's family regarding a severe burn injury. Residents received feedback from the encounter and participated in an education curriculum on communication skills and professionalism. As a part of this curriculum, residents underwent assessment of communication style using the Myers-Briggs type inventory. The residents then participated in a second SP encounter discussing a severe pulmonary embolus with a patient's family. Results: Resident performance on the SP evaluation correlated with an increased comfort in delivering bad news. Comfort in delivering bad news did not correlate with the amount of prior education on the topic for either residents or attendings. Most of our residents demonstrated an intuitive thinking style (NT) on the Myers-Briggs type inventory, very different from population norms. Discussion: The lack of correlation between comfort in delivering bad news and prior education on the subject may indicate the difficulty in imparting communication and professionalism skills to residents effectively. Understanding communication style differences between our residents and the general population can help us teach professionalism and communication skills more effectively. 
With the next accreditation system, residency programs would need to demonstrate that residents are acquiring these skills in their training. SP encounters are effective in teaching and assessing these skills. abstract_id: PUBMED:32175469 Professionalism and Ethics: A Standardized Patient Observed Standardized Clinical Examination to Assess ACGME Pediatric Professionalism Milestones. Introduction: The ethical skills fundamental to medical practice encompass a large portion of the Accreditation Council for Graduate Medical Education (ACGME) professionalism milestones. Yet many ethical practices are difficult to reduce to milestone frameworks given the variety of traditions of moral reasoning that clinician-trainees and their colleagues might properly employ. Methods: We developed an observed standardized clinical examination (OSCE) simulation with standardized patients to assess the ethical skills captured in professionalism milestones in pediatrics. The OSCE included four vignettes based on actual cases that presented problems without a correct answer. Residents discussed ethically challenging issues with standardized patients and were evaluated on specific ethical tenets contained in the professionalism milestones. Our assessment guide for preceptors offered content for debriefing and assessment. We piloted this OSCE with seven preceptors and 17 pediatric residents in two different medical settings. Results: Residents all agreed that the four cases were realistic. All but two residents agreed that OSCEs like this one are an appropriate or objective way of assessing the ACGME professionalism milestones. All preceptors reported that they strongly agreed the assessment improved their ability to assess the professionalism milestones. Discussion: This OSCE offers a structured method to assess professionalism milestones and a forum to discuss ethical problem solving. 
It can also be used solely as a training exercise in ethical decision making and having difficult conversations. abstract_id: PUBMED:32159265 The communication skills and quality perceived in an emergency department: The patient's perspective. Aim: To examine the influence of health care provider's communication skills on the quality of care perceived by Emergency Department patients. Background: Communication between patients and health care providers in the context of Emergency Department is challenging and can potentially have a negative impact on the quality of care perceived by patients. Design: Cross-sectional descriptive study conducted in the Emergency Department at the University Clinical Hospital Virgen de la Arrixaca of Murcia, Spain. Method: Data were collected from 6 to 9 June 2016. Different instruments were used to evaluate the perception of patients on the communication skills and quality of care at the Emergency Department. Results: The sample was composed of 200 patients, with an average age of 44.1 (SD = 18.3), of which 106 (53%) were men. The multiple linear regression analysis showed the association between communication skills (respect, problem solving, and nonverbal communication) and age with the quality of care perceived by the patients. Conclusions: We found that the communication skills of the health care providers were explanatory variables of the quality of care perceived in an Emergency Department. abstract_id: PUBMED:27453844 Physicians' professionalism at primary care facilities from patients' perspective: The importance of doctors' communication skills. Background: Professionalism is the core duty of a doctor to be responsible to the society. Doctors' professionalism depicts an internalization of values and mastery of professionals' standards as an important part in shaping the trust between doctors and patients. 
Professionalism consists of various attributes, and the current literature has focused mostly on the perspective of health professionals. Doctors' professionalism may influence patients' satisfaction, and therefore it is important to know from the patients' perspective what is expected of medical doctors' professionalism. Objective: This study was conducted to determine the attributes of physician professionalism from the patient's perspective. Materials And Methods: This was a qualitative study using a phenomenology design. In-depth interviews were conducted with 18 patients with hypertension and diabetes who had been treated for at least 1 year in primary care facilities in the city of Yogyakarta, Indonesia. The results of the interviews were transcribed, coded, and then classified into categories. Results: Communication skills were considered the top-priority attribute of medical doctors' professionalism from the patients' perspective. Conclusion: This study revealed that communication skill is the most important aspect of professionalism and greatly affects the process of health care provided by primary care doctors. Doctor-patient communication skills should be intensively trained during both basic and postgraduate medical education. abstract_id: PUBMED:31723524 Focused Teaching Improves Medical Student Professionalism and Data Gathering Skills in the Emergency Department. Introduction: Leaders in medical education have developed milestones and core competencies in an attempt to ensure that relational skills, such as communication and professionalism, are emphasized in addition to the usual skills of medical knowledge, data gathering, and emergency stabilization during students' emergency medicine (EM) medical education. Providers facile in each of these areas have better patient outcomes, patient experiences, and decreased incidence of malpractice cases.
The authors attempted to demonstrate that by deliberate teaching of these skills during an EM medical student clerkship, students could significantly improve their clinical performance. Methods: This prospective, randomized, single-blinded cohort study was performed at an academic, tertiary, urban ED to investigate the effects of a one-on-one preceptor shift on the clinical performance of fourth-year medical students. Students were randomized into two groups and assessed by pre- and post-intervention objective structured clinical encounters (OSCEs) with standardized patients (SPs) at weeks one and three. A crossover design was employed so that students in the control group participated in a preceptor shift after their second OSCE. Measurements were based on a five-point Likert scale assessment linked to early EM milestones as defined by the Accreditation Council for Graduate Medical Education (ACGME). Results: The mean improvement in total overall score was significantly greater in the intervention group: 4.31 versus 2.57 (Cohen's d = 0.57, p = 0.029). When each milestone was assessed individually, students in the intervention group improved significantly in data gathering (Cohen's d = 0.47, p = 0.048) and professionalism (Cohen's d = 0.66, p = 0.011). There was a nonstatistically significant improvement for the intervention compared to control group in emergency management and communication skills. There was no improvement for either group in medical knowledge. Conclusion: A one-on-one preceptor shift can result in a statistically significant improvement in data gathering and professionalism skills as measured by OSCEs. abstract_id: PUBMED:30429936 Standardized Patients to Assess Resident Interpersonal Communication Skills and Professional Values Milestones.
It has been a challenge to assess communication and professional values Milestones in emergency medicine (EM) residents using standardized methods, as mandated by the Accreditation Council for Graduate Medical Education (ACGME). This paper outlines an innovative method of assessing these Milestones using an established instructional method. EM faculty mapped the communication and professional values Milestones to an existing communication and interpersonal skills scale. We identified six communication-focused scenarios: death notification; informed consent; medical non-compliance; medical error; treatment refusal; and advanced directives. In a pilot, 18 EM residents completed these six standardized patient (SP) encounters. Our experience suggests SP encounters can support standardized direct observation of residents' achievement of ACGME Milestones. Further effort can be made to create a tailored, behaviorally-anchored tool that uses the Milestones as the conceptual framework. abstract_id: PUBMED:33457615 Role of Communication, Professionalism, and Clinical Care Skills of Postgraduate Students on Patients Recall Visits in Dental School-An Observational Study. Preservation of periodontal health after periodontal therapy is paramount for the complete elimination of periodontal diseases. In most of the Dental Schools, recall appointments are considerably low, and in particular, to the Department of Periodontics, the compliance to recall visits by the patients diagnosed with periodontal disease is still inadequate. Faculty from the Department of Periodontics framed new criteria to follow in the comprehensive clinics by the postgraduate students. The criteria include communication, intraoral examination skills, and professionalism toward dental patients. Faculty in each comprehensive dental clinic observed the clinical encounters of postgraduate students with patients and provided the feedback. 
The study was conducted from May 2018 to April 2019; 1164 of 1544 patients attended recall visits, while 380 of 1544 did not. Effective evaluation of the feedback provided by faculty, together with communication with both the postgraduate students and the unattended patients, resulted in a further improvement in recall, with 151 of 380 patients returning. Reframed criteria, direct observation of the postgraduate students by faculty during patient interactions, assessment of feedback forms, and immediate modification of the students' way of communication maximized recall visits to the Department of Periodontics. abstract_id: PUBMED:17996838 Residents' asthma communication skills in announced versus unannounced standardized patient exercises. Objective: The aim of this study was to test the hypothesis that pediatric residents would display similar levels of asthma interpersonal and communication skills in announced versus unannounced adolescent standardized patient (SP) encounters. Methods: We conducted a prospective repeated-measures experimental study at a pediatric residency program at an inner-city children's hospital. A cohort of residents (N = 18) was subjected at random to 3 SP exercises: announced and directly observed by faculty, announced and not observed by faculty, and unannounced and unobserved. Six adolescent SPs were trained to complete checklists that included items like asthma daytime and nighttime symptoms, exercise-induced symptoms, triggers, and asthma education. For the unannounced exercises, SPs were inserted into residents' regularly scheduled clinics. Standardized patients rated residents immediately following each exercise. Residents were rated by faculty following the observed encounter. Faculty ratings validated SP ratings on the observed encounter. Differences in proportions of categorical variables were tested by chi-square analyses. Results: Fifty-four resident-SP encounters were analyzed.
Residents consistently displayed significantly lower levels of desired behaviors in interpersonal and communication skills in the unannounced SP encounters on 10 of 14 checklist items. For example, residents asked about exercise-induced symptoms 90% of the time in announced/observed encounters versus 95% in announced/unobserved encounters versus 72% in unannounced/unobserved encounters (P = .001). There were no significant differences in residents' behaviors in the announced SP exercises (whether observed or unobserved). Conclusion: In this study, residents demonstrated lower levels of asthma communication skills during unannounced SP exercises. By using unannounced SPs, we were able to assess residents' interpersonal and communication skills in real clinical settings. Answer: Yes, unannounced standardized patients (USPs) can assess professionalism and communication skills in the emergency department (ED). A pilot study involving postgraduate year-2 emergency medicine residents showed that implementing USPs in the ED is feasible and acceptable to staff. The study found that USPs could successfully complete encounters and assess real-time resident physician performance without the need for faculty resources or the bias introduced by direct observation. Residents received mean scores for communication and professionalism, indicating areas of strength and weakness, such as patient education and counseling being a weaker skill compared to information gathering and relationship development. The study concluded that USPs represent a new way to assess these skills in the ED environment (PUBMED:19673703). Additionally, other studies have supported the use of USPs in various medical training settings to assess and improve residents' clinical skills, interpersonal and communication skills, and professionalism. 
For instance, a study involving orthopedic surgery residents used USPs to objectively assess residents during clinic encounters, finding that communication skills could be rated and that residents were globally recommended based on these skills in a significant number of encounters (PUBMED:28888419). Another study used standardized patient encounters to teach and assess professionalism and communication, finding that these encounters were effective in teaching and assessing these skills (PUBMED:24797843). Furthermore, an observed standardized clinical examination (OSCE) simulation with standardized patients was developed to assess ethical skills captured in professionalism milestones in pediatrics, with residents and preceptors agreeing that such cases were realistic and that OSCEs were an appropriate way to assess professionalism milestones (PUBMED:32175469). In summary, the literature suggests that USPs and standardized patient encounters are valuable tools for assessing and improving professionalism and communication skills in medical residents, including those working in emergency departments.
Instruction: Six hundred and ten breech versus 12,405 cephalic deliveries at term: is there any difference in the neonatal outcome? Abstracts: abstract_id: PUBMED:12648858 Six hundred and ten breech versus 12,405 cephalic deliveries at term: is there any difference in the neonatal outcome? Objectives: To compare neonatal morbidity of breech and cephalic deliveries at term. Study Design: Cohort study of 610 consecutive singleton breech presentations and 12,405 consecutive singleton cephalic presentations at term between 1992 and 1998. Five hundred and fourteen breech and 11,989 cephalic presentations were candidates for vaginal delivery, of which 407 (79%) breeches and 11,265 (94%) cephalics delivered vaginally. Results: Neonatal intensive care admissions were significantly greater for breech than cephalic vaginal deliveries (2.7% versus 0.25%, P = 0.000), but newborn intensive care admission and mortality were equally distributed between the two groups. Conclusions: A low caesarean rate is possible (21% breech and 6% cephalic). Neonatal morbidity was equal in the two populations. Admission to neonatal intensive care was significantly more frequent for caesarean section than for vaginal delivery in the cephalic group and equal in the breech group. This study justifies our obstetrical policy and the realisation of a trial in several centres similar in terms of perinatal management. abstract_id: PUBMED:17674277 Conservative management in breech deliveries leads to similar results compared with cephalic deliveries. Objective: To compare the difference in neonatal mortality and morbidity between breech and cephalic presentations at term. Methods: This was a retrospective matched cohort study in two centers between July 1998 and April 2000, including all breech deliveries between 37(+0) and 41(+6) weeks, except cases with multiple gestations and antepartum intrauterine deaths. All breech presentations were matched with two cephalic presentations.
Onset of labor and route of delivery were recorded, and neonatal data were categorized into variables belonging to serious morbidity or moderate morbidity. Results: One thousand one hundred and nineteen deliveries were included. Three hundred and seventy-three babies were in breech position and 746 in cephalic position. The gestational age and birth weight of the babies in the breech group were lower than in the cephalic group (p < 0.001). Congenital abnormalities occurred more often in the breech group (p < 0.005). An elective cesarean section was performed in 23.3% of breech presentations versus 3.5% of cephalic presentations (p < 0.001). Emergency cesarean sections were done in 29.2% of breech presentations versus 8.8% of cephalic presentations (p < 0.001). Children born in breech presentation had lower Apgar scores after 1 minute (p < 0.0001), but 5-minute Apgar scores were the same in both groups (p = 0.22). Children born in breech presentation received significantly more resuscitation than children born in cephalic presentation (p < 0.001). In both groups no perinatal mortality occurred. No differences were observed in percentages of children with serious or moderate neonatal morbidity between the breech and cephalic lies. Conclusions: Although the numbers are small, this study shows that the conservative (vaginal) approach in selected fetuses in breech position can be safely pursued with neonatal results similar to fetuses in cephalic presentation. abstract_id: PUBMED:24753933 Improving external cephalic version for foetal breech presentation. Objectives: If the success rate of external cephalic version (ECV) increases, the rate of primary caesarean sections declines. The aims of this retrospective cohort study were to evaluate the ECV and to identify factors associated with the success rate of ECV for breech presentation at term. The second aim of this study was to analyse the outcome of labour of all patients with a foetus in breech near term.
Methods: All women with a foetus in breech near or at term were included. Logistic regression analyses were performed to identify the association between patient characteristics and success rate of ECV. Results: The overall rate of successful ECVs was 19%. Foetal and maternal complications after ECV were negligible. BMI, type of breech and amount of amniotic fluid were significantly correlated with a successful ECV. The rate of primary caesarean sections for the group of patients who underwent an ECV was lower than the rate in the group who did not (52.9% vs. 79.6%). The rate of spontaneous deliveries was increased after ECV (36% versus 12%). After successful ECV the rate of spontaneous deliveries was 75%; after unsuccessful ECV 26.8%. Conclusion: The overall rate of successful ECV was low (19%). BMI, type of breech and amount of amniotic fluid were significantly correlated with a successful ECV. The rate of primary caesarean sections was significantly lower in patients with ECV (52.9% versus 79.6%). The rate of spontaneous deliveries was significantly higher (36% versus 12%).
The mode of delivery and fetal outcome parameters were compared between these 2 groups using nonparametric statistics. Results: ECV cohort and control group did not differ with respect to maternal age, parity, gestational age at birth, and fetal gender. There were no significant differences between the 2 groups with regard to all parameters indicating fetal outcome. However, the rate of cesarean sections was higher after successful ECV compared to spontaneous cephalic presentation (27.7 vs. 12.8%, OR 2.615). Conclusion: While vaginal delivery is less likely to happen after a successful ECV compared to spontaneous cephalic singleton pregnancies, fetal outcome parameters showed no difference between the 2 groups. Physicians should be counseling and encouraging women to attempt ECV, as it is a safe and effective procedure. abstract_id: PUBMED:22593649 An audit of singleton breech deliveries in a hospital with a high rate of vaginal delivery. The Term Breech Trial (TBT) has brought about radical changes but it is debatable whether it provides unequivocal evidence regarding the practice of breech deliveries. There is a need to publish the data of a study that was performed before the era of the TBT in a hospital where there was a high rate of breech vaginal delivery. The objectives were to ascertain the incidence, mode of delivery and fetal outcome in singleton breech deliveries. The study design was a retrospective cohort study in which 165 consecutive breech deliveries and 165 controls (cephalic) were included. The statistical analyses used were the chi-squared and Fisher's exact tests; P < 0.05 was taken as the level of significance. The incidence of breech deliveries was found to be 3% and has remained fairly constant but the rate of breech vaginal delivery has fallen and the CS rates have increased.
Even though significantly more breeches than controls were delivered by caesarean section, the majority of the breeches (n = 137, 83%) were planned for vaginal delivery, and two-thirds of these patients achieved vaginal delivery. There was 1 fetal death in the CS group compared to 12 deaths in the vaginally delivered breeches. However, most deaths among breeches delivered vaginally were unavoidable. In conclusion, there is a high rate of breech vaginal delivery in this series of patients and most perinatal deaths were not related to the mode of delivery. abstract_id: PUBMED:26037909 Neonatal outcome of singleton term breech deliveries in Norway from 1991 to 2011. Introduction: The objective of this study was to examine the association between planned mode of delivery and neonatal outcomes in breech deliveries. Material And Methods: In this retrospective cohort study we studied singleton term breech deliveries in Norway from 1991 to 2011 (n = 30 861) using the Medical Birth Registry of Norway. We compared planned vaginal delivery with planned cesarean delivery across two time periods: from 1 January 1991 to 31 October 2000 (first period) and from 1 November 2000 to 31 December 2011 (second period). Intrapartum and neonatal deaths were validated against source data in medical records, autopsy reports, and other relevant documents. The main outcome measures were intrapartum and neonatal mortality within the first 28 days of life, 5-min Apgar scores <7 and <4, neonatal intensive care unit stays ≥4 days, respiratory morbidity, and intracranial bleeding disorders. Results: Rate of planned cesarean delivery increased from 34.4 to 51.3% over the period. Simultaneously, early neonatal mortality rate (0-6 days) declined (from 0.10% to 0.04%, p = 0.04). During the second period, 30.7% of term breech presentations were delivered vaginally. Eight deaths in the planned vaginal vs. four in the planned cesarean groups were observed (OR 2.11, 95% CI 0.64-7.01).
Neonatal morbidity outcomes were significantly worse in planned vaginal deliveries compared with planned cesarean deliveries in both periods. Conclusion: Overall intrapartum and neonatal mortality decreased during the entire period. Higher mortality in planned vaginal delivery relative to planned cesarean delivery in the second period was not statistically significant. However, neonatal morbidity was significantly higher in planned vaginal than planned cesarean deliveries in both periods. This warrants continuous surveillance of breech deliveries. abstract_id: PUBMED:32406174 Cephalic extraction versus breech extraction in second-stage caesarean section: a retrospective study. Objective: Our primary objective was to compare maternal and neonatal outcomes based on the attempted mode of extraction. Our secondary objective was to compare the outcomes based on the actual mode of extraction. Design: A retrospective cohort study at a single tertiary centre between the years 2011 and 2019. Setting: The study included 1028 cases of term pregnancies in vertex presentation that were delivered by caesarean section at the second stage of delivery. Population: Patients were grouped according to the attempted mode of extraction: attempted cephalic extraction (674) and breech extraction (354). A subgroup analysis was conducted, comparing successful cephalic extraction (615), failed cephalic extraction (59) and breech extraction (354). Methods: Both maternal and neonatal complication rates were compared between the groups. Results: There were significantly more uterine incision extensions (27.4 versus 11.9%, P < 0.001) and increased need for blood transfusion (10.7 versus 6.2%, P = 0.018) in the cephalic extraction compared with the breech extraction group. The highest rate of uterine incision extensions (45.8%) and need for blood transfusion (22%) was in the subgroup of failed cephalic extraction. The rate of adverse neonatal outcomes was comparable between the two groups. 
However, in the subgroup of failed cephalic extraction, there were higher rates of low 1-minute Apgar scores, NICU hospitalisation and limb fractures compared with successful cephalic extractions and breech extractions (P = 0.016, 0.01, <0.001, respectively). Conclusions: Breech extraction in second-stage caesarean section is associated with fewer maternal complications compared with attempted cephalic extraction, without increasing neonatal complication rates. Tweetable Abstract: In breech versus cephalic extraction, breech extraction was found to have better outcomes in second-stage caesarean section. abstract_id: PUBMED:23183715 Delivery mode and neonatal outcome after a trial of external cephalic version (ECV): a prospective trial of vaginal breech versus cephalic delivery. Objective: To examine the delivery mode and neonatal outcome after a trial of external cephalic version (ECV) procedures. Study Design: This is an interim analysis of an ongoing larger prospective off-centre randomised trial, which compares a clinical hypnosis intervention against neuro-linguistic programming (NLP) in women with a singleton breech foetus at or after 37(0/7) (259 days) weeks of gestation and normal amniotic fluid index. Main outcome measures were delivery mode and neonatal outcome. Results: On the same day after the ECV procedure, two patients (2%), who had unsuccessful ECVs, had Caesarean sections (one due to vaginal bleeding and one due to pathological CTG). After the ECV procedure 40.4% of women had cephalic presentation (n = 38) and 58.5% (n = 55) remained in breech presentation. One patient remained in transverse presentation (n = 1; 1.1%). Vaginal delivery was observed in 73.7% of cephalic presentations (n = 28), whereas 26.3% (n = 10) had in-labour Caesarean sections. Of those who selected a trial of vaginal breech delivery, 42.4% (n = 14) delivered vaginally and 57.6% (n = 19) delivered via Caesarean section.
There is a statistically significant difference in the rate of vaginal birth between cephalic presentation and trial of vaginal breech delivery (p = 0.009); however, no difference in neonatal outcome was observed. Conclusion: ECV is a safe procedure and can reduce not only the rate of elective Caesarean sections due to breech presentation but also the rate of in-labour Caesarean sections, even if a trial of vaginal breech delivery is attempted. abstract_id: PUBMED:9605443 Perinatal outcome of singleton term breech deliveries. Objective: To assess neonatal morbidity and mortality in singleton term infants delivered in breech presentation and to find a possible correlation between outcome and mode of delivery. Study Design: Case study of 306 singleton, term (37-42 weeks), breech deliveries that took place between 1989 and 1994 in one perinatal centre. Results: 170 infants were delivered vaginally, 72 by elective and 64 by secondary cesarean section. Even after application of strict selection criteria -- i.e. prior pelvic assessment by staff obstetricians, an estimated birth weight of 2500-4000 g -- and with staff supervision, vaginal delivery turned out to be associated with a significantly higher incidence of low umbilical artery pH values and neonatal care unit admissions as compared to elective cesarean section. Five infants suffered mechanical trauma. One neonatal death occurred in the vaginal delivery group. Conclusion: The results of this retrospective study of 306 singleton term breech deliveries imply that even after strict selection of patients, vaginal delivery is associated with increased neonatal morbidity in comparison to elective cesarean section. abstract_id: PUBMED:27321719 Mode of childbirth and neonatal outcome after external cephalic version: A prospective cohort study.
Objective: to assess the mode of childbirth and adverse neonatal outcomes in women with a breech presentation with or without an external cephalic version attempt, and to compare the mode of childbirth among women with successful ECV to women with a spontaneous cephalic presentation. Design: prospective matched cohort study. Setting: 25 clusters (hospitals and its referring midwifery practices) in the Netherlands. Data of the Netherlands perinatal registry for the matched cohort. Participants: singleton pregnancies from January 2011 to August 2012 with a fetus in breech presentation and a childbirth from 36 weeks gestation onwards. Spontaneous cephalic presentations (selected from national registry 2009 and 2010) were matched in a 2:1 ratio to cephalic presentations after a successful version attempt. Matching criteria were maternal age, parity, gestational age at childbirth and fetal gender. Main outcomes were mode of childbirth and neonatal outcomes. Measurements And Findings: of 1613 women eligible for external cephalic version, 1169 (72.5%) received an ECV attempt. The overall caesarean childbirth rate was significantly lower compared to women who did not receive a version attempt (57% versus 87%; RR 0.66 (0.62-0.70)). Women with a cephalic presentation after ECV compared to women with a spontaneous cephalic presentation had a decreased risk for instrumental vaginal childbirth (RR 0.52 (95% CI 0.29-0.94)) and an increased risk of overall caesarean childbirth (RR 1.7 (95%CI 1.2-2.5)). Key Conclusions: women who had a successful ECV are at increased risk for a caesarean childbirth but overall, ECV is an important tool to reduce the caesarean rate. Implication For Practice: ECV is an important tool to reduce the caesarean section rates. Answer: The comparison between 610 breech and 12,405 cephalic deliveries at term indicates that there are differences in neonatal outcomes between the two groups. 
Neonatal intensive care admissions were significantly greater for breech than cephalic vaginal deliveries (2.7% versus 0.25%, P = 0.000), but newborn intensive care admission and mortality were equally distributed between the two groups (PUBMED:12648858). This suggests that while breech deliveries at term may result in a higher rate of neonatal intensive care admissions, the overall mortality rates do not differ significantly from cephalic deliveries. Additionally, another study found that children born in breech presentation had lower Apgar scores after 1 minute, but 5-minute Apgar scores were the same in both groups. Children born in breech presentation also received significantly more resuscitation than children born in cephalic presentation (PUBMED:17674277). This indicates that breech deliveries may be associated with initial challenges post-delivery, but outcomes tend to equalize shortly after birth. Furthermore, a study on external cephalic version (ECV) found that the rate of primary caesarean sections was significantly lower in patients with ECV (52.9% versus 79.6%), and the rate of spontaneous deliveries was significantly higher (36% versus 12%) (PUBMED:24753933). This suggests that successful ECV can lead to a reduction in caesarean section rates and an increase in spontaneous deliveries, potentially influencing neonatal outcomes favorably. In summary, while there are some differences in neonatal outcomes between breech and cephalic deliveries at term, particularly in terms of initial neonatal intensive care admissions and resuscitation requirements, the overall mortality rates do not differ significantly. Successful ECV may help to improve these outcomes by reducing the need for caesarean sections and increasing the likelihood of spontaneous deliveries.
Instruction: Do we need a new definition of the overactive bladder syndrome? Abstracts: abstract_id: PUBMED:33220814 Definition, epidemiology and impact of non-neurogenic overactive bladder Introduction: Overactive bladder (OAB) is a common condition with a significant functional impact on patients. Objective: To synthesize current knowledge on the definition, epidemiology and impact of OAB. Method: A systematic literature review based on PubMed, Embase and Google Scholar was conducted in June 2020. Results: OAB is defined by urinary urgency, usually accompanied by frequency and nocturia, with or without urgency urinary incontinence, in the absence of urinary tract infection or other obvious pathology. In France, OAB affects 14% of the general population and this prevalence increases with age. This condition is a source of major deterioration in patients' quality of life with a physical (falls, fractures, sleep disorders, fatigue), psychic (anxiety, depression), social (limitation of leisure, isolation) and economic impact. Conclusion: The definition of OAB is standardized. OAB is a frequent condition and has significant functional consequences with a notable deterioration in quality of life. abstract_id: PUBMED:24838519 Do we need a new definition of the overactive bladder syndrome? ICI-RS 2013. Aim And Methods: Overactive bladder syndrome (OAB) has a symptom-based definition. Following a presentation of issues, the definition was subjected to expert discussion at the International Consultation on Incontinence Research Society to identify key issues. Results: OAB is a widely used term; it is a pragmatic approach to categorizing a recognized group of patients, and is understood by the patients; however, expert opinion suggested several issues for which additional evidence should be sought. Naming an organ (bladder) in the condition may suggest underlying mechanism, when contributory aspects may lie outside the bladder.
No severity thresholds are set, which can cause uncertainty. Urgency is prominent in the definition, but may not be prominent in patients whose adaptive behavior reduces their propensity to urgency. OAB can co-exist with other common conditions, such as benign prostate enlargement (BPE), stress incontinence or nocturnal polyuria. Consensus led by the International Continence Society can be attempted for aspects such as "fear of leakage." To develop a new definition, more substantive evidence is needed for key elements, and until such evidence is available, full redefinition is not appropriate. Thus, the medical profession should accept constructive compromise and work supportively. Conclusions: The ICI-RS proposes that the terminology be slightly rephrased as: "overactive bladder syndrome (OAB) is characterized by urinary urgency, with or without urgency urinary incontinence, usually with increased daytime frequency and nocturia, if there is no proven infection or other obvious pathology." More substantive changes would require additional scientific evidence. Strengths, limitations, and practicalities of the definition of OAB were discussed at the ICI-RS meeting 2013. Following a presentation of issues, the definition was subjected to expert discussion. abstract_id: PUBMED:16804729 Overactive bladder--new definition, more patients!? N/A abstract_id: PUBMED:26318394 Definition of botulinum toxin failure in neurogenic detrusor overactivity: Preliminary results of the DETOX survey Objective: There is currently no guideline regarding the management of neurogenic detrusor overactivity (NDO) refractory to intra-detrusor botulinum toxin injections. The primary objective of the present study was to find a consensus definition of failure of botulinum toxin intra-detrusor injections for NDO. The secondary objective was to report current trends in the management of NDO refractory to botulinum toxin.
Methods: A survey was created, based on data drawn from current literature, and sent via e-mail to all the experts from the Group for research in neurourology in the French language (GENULF) and from the committee of neurourology of the French urological association (AFU). The experts who did not answer the first e-mail were contacted again twice. Main results from the survey are presented and expressed as numbers and proportions. Results: Out of the 42 experts contacted, 21 responded to the survey. Nineteen participants considered that the definition of failure should be a combination of clinical and urodynamic criteria. Among the urodynamic criteria, the persistence of a maximum detrusor pressure > 40 cm H2O was the one most supported by the experts (18/21, 85%). According to the vast majority of participants (19/21, 90.5%), the impact of injections on urinary incontinence should be included in the definition of failure. Regarding management, most experts considered that the first-line treatment in case of failure of a first intra-detrusor injection of Botox(®) 200 U should be a repeat injection of Botox(®) at a higher dosage (300 U) (15/20, 75%), regardless of the presence or not of urodynamic risk factors for upper tract damage (16/20, 80%). Conclusion: This work has provided a first overview of the definition of failure of intra-detrusor injections of botulinum toxin in the management of NDO. For 90.5% of the experts involved, the definition of failure should be clinical and urodynamic, and most participants (75%) considered that, in case of failure of a first injection of Botox(®) 200 U, repeat injection of Botox(®) 300 U should be the first-line treatment. abstract_id: PUBMED:20303099 Development, validation and testing of an epidemiological case definition of interstitial cystitis/painful bladder syndrome. Purpose: No standard case definition exists for interstitial cystitis/painful bladder syndrome for patient screening or epidemiological studies.
As part of the RAND Interstitial Cystitis Epidemiology study, we developed a case definition for interstitial cystitis/painful bladder syndrome with known sensitivity and specificity. We compared this definition with others used in interstitial cystitis/painful bladder syndrome epidemiological studies. Materials And Methods: We reviewed the literature and performed a structured, expert panel process to arrive at an interstitial cystitis/painful bladder syndrome case definition. We developed a questionnaire to assess interstitial cystitis/painful bladder syndrome symptoms using this case definition and others used in the literature. We administered the questionnaire to 599 women with interstitial cystitis/painful bladder syndrome, overactive bladder, endometriosis or vulvodynia. The sensitivity and specificity of each definition was calculated using physician assigned diagnoses as the reference standard. Results: No single epidemiological definition had high sensitivity and high specificity. Thus, 2 definitions were developed. One had high sensitivity (81%) and low specificity (54%), and the other had the converse (48% sensitivity and 83% specificity). These values were comparable or superior to those of other epidemiological definitions used in interstitial cystitis/painful bladder syndrome prevalence studies. Conclusions: No single case definition of interstitial cystitis/painful bladder syndrome provides high sensitivity and high specificity to identify the condition. For prevalence studies of interstitial cystitis/painful bladder syndrome the best approach may be to use 2 definitions that would yield a prevalence range. The RAND Interstitial Cystitis Epidemiology interstitial cystitis/painful bladder syndrome case definitions, developed through structured consensus and validation, can be used for this purpose. 
abstract_id: PUBMED:17899707 Overactive bladder--definition, epidemiology, pathogenesis The overactive bladder is characterized by the following symptoms: frequency, urgency, nocturia and urge incontinence, substantially affecting the quality of life of millions of people throughout the world, with recent findings estimating the prevalence in adults at about 16.4%. The symptoms are associated with significant social, psychological, occupational, domestic, physical, and sexual problems. This article reviews current findings regarding definition, epidemiology, and pathogenesis of the overactive bladder. abstract_id: PUBMED:9426746 Definition of overactive bladder and epidemiology of urinary incontinence. Objectives: To review the definition of the overactive bladder and to summarize the results of epidemiologic studies on this specific disorder as well as urinary incontinence (UI) in general. Methods: From a literature search covering the time period from 1954 through 1995, 48 epidemiologic studies and several other publications dealing with the prevalence and natural history of UI were reviewed. A meta-analysis of reported data was performed with respect to incontinence definitions, investigation methods, home country of survey, sex, and age groups. Results: Differences in definitions of incontinence, target populations, and study design in different investigations resulted in inhomogeneity and difficulties of comparing the available data. By grouping the studies with respect to similarities in the above-mentioned criteria and analyzing the results for each group of studies, an attempt was made to understand the great variation of reported results. Differences in prevalence of incontinence were identified for all examined groups of studies and for distinct ethnic populations. Scarce information about incidence, spontaneous remission rates, and risk factors was used to elucidate the natural history of UI in women and men. 
Conclusions: Although generally accepted definitions of the overactive bladder specifically and UI in general are highly desirable, they have not yet been established. Such definitions should comprise aspects of severity and demonstrability of the condition, bother factor, and impact on quality of life. Moreover, basic requirements for epidemiologic surveys of incontinence, such as validation of questionnaire results, need to be defined and standardized to create a sensible basis for useful epidemiologic studies in the future. abstract_id: PUBMED:31185473 Cure of Underactive and Overactive Bladder Symptoms in Women by 1,671 Apical Sling Operations Gives Fresh Insights into Pathogenesis and Need for Definition Change. Introduction: To check evidence that symptoms identical with those constituting "underactive bladder" (UAB) and "overactive bladder" (OAB) are caused by apical prolapse and cured by repair thereof. Material And Methods: After repair of apical prolapse by mesh tape reinforcement of lax uterosacral ligaments (USL), data from 1,671 women were retrospectively examined to determine the presence of OAB and UAB symptoms and to check how many were cured surgically. Three different techniques were performed: Elevate (n = 277), "Posterior IVS" (n = 1,049), and TFS cardinal (CL)/USL (n = 345). Results: Symptoms identical with those comprising UAB and OAB were cured in up to 80% of cases following surgical repair of the CL/USL complex. Conclusions: These symptoms may be consistent with symptoms of the posterior fornix syndrome, which comprises 4 main symptoms: micturition difficulties, urge/frequency, nocturia, and chronic pelvic pain, all consequent on USL laxity. Surgical cure of OAB and UAB is inconsistent with existing definitions, which imply pathogenesis of the detrusor muscle itself. A reconsideration and reformulation of existing definitions may be required.
Altering the UAB definition to "bladder emptying difficulties" and returning to former definitions for OAB such as "detrusor" or "bladder instability" may help to restore compatibility with surgical cure of these conditions. abstract_id: PUBMED:12493342 Definition and epidemiology of overactive bladder. The Standardisation Subcommittee of the International Continence Society (ICS) now recognizes overactive bladder (OAB) as a "symptom syndrome suggestive of lower urinary tract dysfunction." It is specifically defined as "urgency, with or without urge incontinence, usually with frequency and nocturia." The ICS definition was not formulated until January 2001 and was not formally approved until September 2001. Therefore, collection and discussion of the epidemiologic characteristics of OAB are somewhat hampered by the different definitions of this condition used by different investigators. Most communications that appeared before 2000 more often described characteristics of incontinence rather than OAB, and the estimates of OAB prevalence within those studies varied significantly. Until recently, little definite epidemiologic information was available on the prevalence and comorbidities of OAB. An important challenge in treating OAB is to increase awareness of this significant problem worldwide and to impress on other specialists and primary care physicians the importance of identifying this clinical problem and managing it in a way that will maximize quality-of-life improvement while minimizing morbidity. abstract_id: PUBMED:12425863 Incidence and prevalence of overactive bladder. Recent changes in terminology, diagnosis, and therapy have refocused attention on overactive bladder (OAB). This symptom syndrome is highly prevalent worldwide and significantly impairs the quality of life of those who suffer from it.
Accurate epidemiologic incidence and prevalence studies of OAB have been hampered in the past by, among other issues, a generalized lack of agreement regarding definition of the disorder, and consequently, accurate case finding. This obstacle resulted in considerably wide estimates in the reported incidence and prevalence of OAB in the literature. A new symptom-based definition of OAB, formally adopted by the Standardization Committee at a recent International Continence Society meeting, should provide a framework for future epidemiologic studies. Current estimates of incidence, prevalence, effects on quality of life, and societal costs may need to be reassessed based on these new data. Answer: The question of whether a new definition of overactive bladder syndrome (OAB) is needed has been a topic of discussion among experts. The current definition of OAB is symptom-based and is characterized by urinary urgency, with or without urgency urinary incontinence, usually with increased daytime frequency and nocturia, in the absence of proven infection or other obvious pathology (PUBMED:33220814; PUBMED:24838519). This definition is widely used and understood by patients, but there are several issues that have been raised. One concern is that naming an organ (bladder) in the condition may suggest an underlying mechanism, when contributory aspects may lie outside the bladder. Additionally, no severity thresholds are set in the current definition, which can cause uncertainty. Urgency is prominent in the definition, but may not be prominent in patients whose adaptive behavior reduces their propensity to urgency. OAB can also co-exist with other common conditions, such as benign prostate enlargement (BPE), stress incontinence, or nocturnal polyuria (PUBMED:24838519). Experts have proposed that the terminology be slightly rephrased, but more substantive changes would require additional scientific evidence. 
Until such evidence is available, a full redefinition is not deemed appropriate. The medical profession is encouraged to accept constructive compromise and work supportively with the current definition (PUBMED:24838519). In conclusion, while there is recognition of certain limitations and areas for improvement in the current definition of OAB, there is not yet a consensus or sufficient evidence to warrant a complete overhaul of the definition. The current definition remains standardized and is used to identify and manage the condition, which has a significant impact on patients' quality of life (PUBMED:33220814). Future research and evidence may lead to refinements or changes in the definition, but for now, the focus is on working within the existing framework while acknowledging its limitations (PUBMED:24838519).
Instruction: Recognition of psychogenic non-epileptic seizures: a curable neurophobia? Abstracts: abstract_id: PUBMED:24563816 Psychogenic nonepileptic seizures. Objective: Not all patients who exhibit seizure-like behavior can be evaluated by video-electroencephalography if their routine EEGs are negative, as this would be impractical and cost-prohibitive. The present article reviews a decision-making process that can be used for determining if further neurological evaluation is needed, the differential diagnoses and potential comorbidities involved when making this determination, and an approach to conveying the psychogenic nonepileptic seizure diagnosis to the patient that may help reduce symptom frequency. Design: Literature review. Conclusion: Psychogenic seizures are not caused by abnormal brain electrical activity. The symptoms of psychogenic seizures usually reflect a psychological conflict or a psychiatric disorder. However, psychogenic seizures are not "purposely" produced by the patient, and the patient is not aware that the seizures are non-epileptic, so the patient may become very anxious over having these symptoms. The presentation of the differential diagnosis should be done early in the course of treatment for better patient acceptance, and treatment options should be presented early in the evaluation period. abstract_id: PUBMED:34663489 Recognition of Psychogenic Versus Epileptic Seizures Based on Videos. Objective: Ictal semiology interpretation for differentiating psychogenic nonepileptic seizures (PNESs) and epileptic seizures (ESs) is important for the institution of appropriate treatment. Our objective was to assess the ability of different health care professionals (HCPs) or students to distinguish PNES from ES based on video-recorded seizure semiology. Methods: This study was designed following the Standards for Reporting of Diagnostic Accuracy Studies (STARD) guidelines.
We showed 36 videos of PNES or ES (18 each) in random order and asked 558 participants to classify each seizure. The diagnostic accuracy of various groups of HCPs or students for PNES versus ES was assessed, as well as the effect of patient age and sex. Measures of diagnostic accuracy included sensitivity, specificity, and area under the curve (AUC). Results: The descending order of diagnostic accuracy (AUC) was the following (p ≤ 0.001): (1) neurologists and epileptologists; (2) neurology residents; (3) other specialists and nurses with experience in epilepsy; and (4) undergraduate medical students. Although there was a strong trend toward a statistical difference, with non-overlapping AUC 95% confidence intervals (CIs), between epileptologists (95% CI 93, 97) and neurologists (95% CI 88, 91), and between neurologists and electroencephalography technicians (95% CI 82, 87), multiple pairwise comparisons with the conservative Tukey-Kramer honest significant difference test revealed no statistical difference (p = 0.25 and 0.1, respectively). Patient age and sex did not have an effect on diagnostic accuracy in neurology specialists. Conclusion: Visual recognition of PNES by HCPs or students varies overall proportionately with the level of expertise in the field of neurology/epilepsy. abstract_id: PUBMED:32621480 The problem of comorbidity of epilepsy and psychogenic paroxysms A review of publications over the last two decades is presented. Psychogenic paroxysms develop in approximately 12% of patients with epilepsy. The analysis of social and demographic data, history details, semiological features and results of electrophysiological and neuroimaging studies does not unequivocally support the comorbidity of epilepsy and psychogenic paroxysms. The pathogenetic mechanisms of the development of comorbidity are various and depend on the presence of pharmacoresistance, psychological traumas in the past, intellectual disability etc.
Video-EEG-monitoring is the gold standard in the diagnosis of comorbidity of epilepsy and psychogenic paroxysms. Treatment of such cases includes anticonvulsants and cognitive-behavioral therapy. abstract_id: PUBMED:22842714 Recognition of psychogenic non-epileptic seizures: a curable neurophobia? Background: Diagnosing psychogenic non-epileptic seizures (PNES) remains challenging. The majority of 'PNES status' cases are likely to be seen in the emergency department or similar non-specialised units, where patients are initially assessed and managed by physicians of varying expertise in neurology. Methods: 216 participants including medical students and doctors of all grades from a wide range of medical disciplines were shown video recordings of six patients with PNES and six other patients with convulsive epileptic seizures (ES). Participants were asked to choose between PNES and ES as a diagnosis and to rate their confidence in each diagnosis, both before and after a 15-minute teaching presentation on PNES and ES. Results: Pre-teaching sensitivity for diagnosing PNES was 0.77, specificity 0.55. The positive predictive value (PPV) of diagnosing PNES was 0.63, and was 0.7 for ES. Diagnostic accuracy increased with increasing clinical grades (p=0.022), as did clinical confidence (p<0.0005). Clinical accuracy and clinical confidence increased post-teaching (p<0.0005). Sensitivity for diagnosing PNES post-teaching improved to 0.88, specificity to 0.67. The PPV of diagnosing PNES increased to 0.72, and to 0.84 for ES. Conclusions: Diagnosing PNES can be improved by clinical experience in neurology and focussed teaching interventions. abstract_id: PUBMED:38481629 Takotsubo cardiomyopathy triggered by frequent psychogenic nonepileptic seizures: A case report. Background: Coexistence of Takotsubo cardiomyopathy and psychogenic nonepileptic seizures has rarely been reported. Herein, we report a case of Takotsubo cardiomyopathy triggered by psychogenic nonepileptic seizures. 
Case Presentation: A 50-year-old woman with a 22-year history of dissociative and panic disorders and a 7-year history of seizures increasing in frequency was admitted due to cardiogenic shock. Based on the left ventriculography and electroencephalography findings, she was diagnosed with Takotsubo cardiomyopathy and psychogenic nonepileptic seizures. Seizures were controlled using antipsychotic agents, resulting in improved cardiac function, and she was discharged. However, she died of cardiopulmonary arrest 9 days after discharge. Conclusion: Since controlling psychogenic nonepileptic seizures is difficult, Takotsubo cardiomyopathy triggered by psychogenic nonepileptic seizures may have a poor prognosis, requiring careful management and close monitoring. abstract_id: PUBMED:38235768 An update on psychogenic nonepileptic seizures. Purpose Of Review: The understanding of psychogenic nonepileptic seizures (PNES) has advanced steadily over recent decades. This update summarizes new insights from the last three years. Recent Findings: The process of diagnosing PNES has shifted from the exclusion of epilepsy to one based on the recognition of typical clinical features. While the diagnosis cannot rely on any single feature in isolation, a range of semiological features characterising PNES are now recognised and a number of studies hint at the potential for machine learning and AI to improve the diagnostic process. Advances in data processing and analysis may also help to make sense of the heterogeneity of PNES populations demonstrated by recent studies focussing on aetiology and patient subgroups. It is now clear that PNES are associated with high rates of mental and physical comorbidities and premature death, highlighting that they are only one manifestation of a complex disorder extending beyond the nervous system and the seizures themselves. Summary: PNES are now understood as a manifestation of dysfunction in interacting brain networks.
This understanding provides an explanation for the psychopathological and semiological heterogeneity of PNES patient populations. New insights into medical comorbidities and increased rates of premature death call for more research into associated pathological processes outside the nervous system. abstract_id: PUBMED:31929715 Children with Psychogenic Nonepileptic Seizures. A large multicenter retrospective cohort study was conducted by researchers from the Pediatric Health Information System hospital network to determine differences in demographics, clinical characteristics, testing, treatment, and healthcare use between children aged 8-20 years with epilepsy (n = 13,241) and those with psychogenic nonepileptic seizures (PNES) secondary to conversion disorder (n = 399). abstract_id: PUBMED:37350201 Non-epileptic psychogenic seizures Psychogenic non-epileptic seizures resemble epileptic seizures but their pathophysiological mechanisms differ. Despite being quite frequent, they are often misunderstood, regularly leading to a misdiagnosis and therapeutic errors. The objective of this article is to provide a short review of the current knowledge about this pathology. We will detail the main epidemiological data of psychogenic non-epileptic seizures. We will then discuss the diagnostic approach, emphasizing the semiological differences that exist between psychogenic non-epileptic seizures and epileptic seizures. Then, we will detail the main therapeutic actions and discuss the importance of communication between clinicians, the patient and his/her family. Finally, we will discuss the long-term prognosis of this pathology, which remains quite poor. abstract_id: PUBMED:27195218 Psychogenic nonepileptic seizures mimicking gelastic seizures: A description of two cases. Psychogenic nonepileptic seizures (PNES) are sudden, involuntary seizure-like attacks that, unlike epileptic seizures, are not related to electrographic ictal discharges and are psychological in nature. 
The presenting symptoms of psychogenic nonepileptic seizures mimic a wide array of nervous system dysfunctions, as they involve changes in behavior, motor activity, sensation, and cognitive and autonomic functions. Spontaneous paroxysms of laughing resembling gelastic seizures have only exceptionally been reported as the main symptom of PNES. Here, we describe the cases of two patients with a prolonged history of laughter attacks mistaken for epilepsy and unresponsive to AED treatment. Brain MRI and interictal EEG were unremarkable. Video-EEG monitoring allowed us to document the spontaneous and suggestion-induced habitual episodes that were then diagnosed as PNES. abstract_id: PUBMED:27486953 Comparison of semiologies between tilt-induced psychogenic nonsyncopal collapse and psychogenic nonepileptic seizures. We sought to characterize the clinical features of tilt-induced psychogenic nonsyncopal collapse (PNSC) from a cohort of young patients and to compare the semiologies between PNSC and EEG-confirmed psychogenic nonepileptic seizures (PNES). A PNSC diagnosis was made if a clinical event occurred during tilt-table testing that the patient regarded as fainting, but neither hypotension nor EEG changes were present. A diagnosis of PNSC was made in 17.6% of all patients referred during the 15-month study period. Cohorts with psychogenic nonsyncopal collapse (n=40) and PNES (n=40) did not differ in age (15.5±2.2 versus 14.6±2.7, p=.11) or female gender (80% versus 72.5%, p=.43). Psychogenic nonsyncopal collapse events were briefer than PNES events (median: 45 versus 201.5 s, p<.001). Negative motor signs (head drop, body limpness) predominated in PNSC (85% versus 20%, p<.001), while the positive motor signs of convulsion occurred more often with PNES (90% versus 30%, p<.001). Behavioral arrest (25% versus 32.5%, p=.46) and eye closure (85% versus 72.5%, p=.21) did not differ between PNSC and PNES.
Patients with PNSC were more likely to be tearful before (30% versus 7.5%, p=.02) and after (62.5% versus 7.5%, p<.001) an event. In conclusion, although overlap exists, the features of PNSC generally appear similar to neurally mediated syncope, while the features of PNES generally appear similar to epileptic seizures. Psychogenic nonsyncopal collapse and PNES likely represent similar disorders that differ primarily by clinical semiologies and referral patterns. Answer: The recognition of psychogenic non-epileptic seizures (PNES) can be challenging, and there is a term "neurophobia" that refers to the fear or avoidance of neurological topics by medical professionals. This term suggests that some clinicians may have a reluctance or difficulty in diagnosing and managing neurological conditions, including PNES. However, the literature suggests that the ability to recognize PNES can be improved with clinical experience and focused teaching interventions (PUBMED:22842714). A study showed that before a teaching presentation on PNES and epileptic seizures (ES), the sensitivity for diagnosing PNES was 0.77 and specificity was 0.55, with diagnostic accuracy increasing with clinical grades. After the teaching intervention, sensitivity for diagnosing PNES improved to 0.88 and specificity to 0.67, indicating that education can significantly improve the recognition of PNES among medical professionals (PUBMED:22842714). Moreover, the understanding of PNES has advanced, with the diagnosis shifting from the exclusion of epilepsy to one based on the recognition of typical clinical features. There is a range of semiological features characterizing PNES, and recent studies suggest that machine learning and AI could potentially improve the diagnostic process (PUBMED:38235768). In conclusion, while there may be a "neurophobia" regarding the recognition of PNES, it appears to be a curable condition through targeted educational interventions and the use of advanced diagnostic tools. 
The improvement in diagnostic accuracy following teaching interventions suggests that with proper training and experience, healthcare professionals can become more adept at distinguishing PNES from epileptic seizures, leading to better patient outcomes (PUBMED:22842714; PUBMED:38235768).
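The accuracy figures quoted in the answer above (e.g., pre-teaching sensitivity 0.77, specificity 0.55, PPV 0.63 for PNES in PUBMED:22842714) all derive from a simple 2x2 confusion matrix. The following minimal sketch shows how they are computed; the rating counts are hypothetical (scaled to 100 ratings per class for illustration), not taken from the study's raw data:

```python
# Illustrative sketch (not study data): diagnostic accuracy metrics
# computed from a 2x2 confusion matrix of PNES-vs-ES classifications.

def diagnostic_metrics(tp, fp, tn, fn):
    """Return sensitivity, specificity, and positive predictive value."""
    sensitivity = tp / (tp + fn)   # correctly identified PNES / all true PNES
    specificity = tn / (tn + fp)   # correctly identified ES / all true ES
    ppv = tp / (tp + fp)           # true PNES / all videos labeled PNES
    return sensitivity, specificity, ppv

# Hypothetical counts: 100 PNES and 100 ES ratings, with 77 PNES and
# 55 ES classified correctly.
sens, spec, ppv = diagnostic_metrics(tp=77, fp=45, tn=55, fn=23)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f}")
```

With equal numbers of PNES and ES ratings, these hypothetical counts reproduce the reported pre-teaching values, including a PPV near 0.63; with a different PNES/ES mix the PPV would shift even at the same sensitivity and specificity.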
Instruction: 3-dimensional facial simulation in orthognathic surgery: is it accurate? Abstracts: abstract_id: PUBMED:24627841 Unilateral intraoral vertical ramus osteotomy based on preoperative three-dimensional simulation surgery in a patient with facial asymmetry. Preoperative surgical simulation in orthognathic surgery has progressed in recent years; the movement of the mandible can be anticipated through three-dimensional (3D) simulation surgery before the actual procedure. In this case report, the mandible was moved to the intended postoperative occlusion through preoperative surgical 3D simulation. Right-side condylar movement change was very slight in the surgical simulation, suggesting the possibility of mandibular surgery that included only left-side ramal osteotomy. This case report describes a patient with a mild asymmetric facial profile in which the mandibular menton had been deviated to the right and the lips canted down to the left. Before surgery, three-dimensional surgical simulation was used to evaluate and confirm a position for the condyle as well as the symmetrical postoperative state of the face. Facial asymmetry was resolved with minimal surgical treatment through unilateral intraoral vertical ramus osteotomy on the left side of the mandible. It would be a valuable aid in reducing the extent of surgical treatment if one could predict reliably when an isolated intraoral vertical ramus osteotomy can be performed without a compensatory osteotomy on the contralateral side. abstract_id: PUBMED:29606563 Does two-dimensional vs. three-dimensional surgical simulation produce better surgical outcomes among patients with class III facial asymmetry? The aim of this study was to compare the outcomes of traditional two-dimensional planning (2DP) and three-dimensional surgical simulation (3DS) in the surgical correction of skeletal class III with facial asymmetry.
This retrospective cohort study included 37 consecutive adult Taiwanese patients. Preoperative and postoperative three-dimensional cephalometric measurements were obtained from cone beam computed tomography scans. The outcome variables were the differences in preoperative and postoperative linear and angular measurements and the differences between the two groups after surgery. When the surgical result was compared between the 2DP and 3DS groups, significant differences were found for four cephalometric variables: the distance from gonion on the non-deviated side to the midsagittal plane (MSP), mid-gonion to the MSP, upper first molar on the non-deviated side to the Frankfort horizontal plane, and the yaw angle. In the 3DS group, mandibular symmetry was achieved because the centre between the bilateral gonions was improved, and because there was no significant difference in the horizontal gonion (Go to the MSP) between the deviated and non-deviated sides after surgery. 3DS provides all the necessary information for planned surgical movements for the correction of facial asymmetry; it should be considered during surgical planning to improve surgical outcomes, particularly the achievement of bilateral mandibular contour symmetry. abstract_id: PUBMED:23642546 3-dimensional facial simulation in orthognathic surgery: is it accurate? Purpose: The purpose of this study was to measure the accuracy of 3D computer simulation of soft tissue changes after orthognathic surgery. Materials And Methods: Consecutive patients who underwent orthognathic surgery were studied by photogrammetric facial scanning and cone-beam computed tomography before and after surgery. The photogrammetric scan was then fused to the cone-beam computed tomogram, creating a patient-specific image. The surgery was simulated in 3D form and the simulated soft tissue face was compared with the actual facial scan obtained 6 months postoperatively. 
Absolute millimeter differences between the simulated and actual postoperative changes in selected cephalometric skin markings were computed. Results: The study was composed of 23 subjects (mean age, 31 yr; 13 women and 10 men). Eighteen different cephalometric landmarks were measured (total, 28). For 15 landmarks, the difference between actual and simulated measurements was smaller than 0.5 mm. Only 3 landmarks had a difference larger than 0.5 mm, and these were in the region of the labial landmarks. Conclusion: Based on the present study, 3-dimensional computer surgical simulation of the soft tissue of the face is accurate enough for routine clinical use in orthognathic surgery. abstract_id: PUBMED:26878021 CAD/CAM splint based on soft tissue 3D simulation for treatment of facial asymmetry. Background: Most cases of facial asymmetry involve yaw deformity, and determination of the yaw correction level is very difficult. Methods: We use three-dimensional soft tissue simulation to determine the yaw correction level. This three-dimensional simulation is based on the addition of cephalometric prediction to gradual yaw correction. Optimal yaw correction is determined visually, and an intermediate splint is fabricated with computer-aided design and computer-aided manufacturing. Application of positioning devices and the performance of horseshoe osteotomy are advisable. Results: With this procedure, accurate repositioning of jaws was confirmed and patients obtained fairly good facial contour. Conclusions: This procedure is a promising method for a widespread, predictable treatment of facial asymmetry. abstract_id: PUBMED:28071714 Three-dimensional surgical simulation improves the planning for correction of facial prognathism and asymmetry: A qualitative and quantitative study. The traditional planning method for orthognathic surgery has limitations in cephalometric analysis, especially for patients with asymmetry.
The aim of this study was to assess surgical plan modification after 3-dimensional (3D) simulation. The procedure comprised traditional surgical planning, construction of a 3D model of the initial surgical plan (P1), a 3D model of the altered surgical plan after simulation (P2), comparison between the P1 and P2 models, surgical execution, and postoperative validation using superimposition and the root-mean-square difference (RMSD) between the postoperative 3D image and the P2 simulation model. The surgical plan was modified after 3D simulation in 93% of the cases. Absolute linear changes of landmarks in the mediolateral direction (x-axis) were significant, ranging from 1.11 to 1.62 mm. The pitch, yaw, and roll rotation as well as ramus inclination correction also showed significant changes after the 3D planning. Yaw rotation of the maxillomandibular complex (1.88 ± 0.32°) and change of ramus inclination (3.37 ± 3.21°) were most frequently performed for correction of the facial asymmetry. Errors between the postsurgical image and 3D simulation were acceptable, with RMSD 0.63 ± 0.25 mm for the maxilla and 0.85 ± 0.41 mm for the mandible. The information from this study could be used to augment the clinical planning and surgical execution when a conventional approach is applied. abstract_id: PUBMED:33656188 Addressing the Pandemic Training Deficiency: Filling the Void with Simulation in Facial Reconstruction. Objective/hypothesis: To assess the use of a three-dimensional (3D) printed, multilayer facial flap model for use in trainee education as an alternative method of teaching surgical techniques of facial reconstruction. Study Design: Cohort study. Methods: A 3D printed facial flap simulator was designed from a computed tomography scan and manufactured out of silicone for low-cost, high-fidelity simulation. This simulator was tested by a group of Otolaryngology-Head and Neck Surgery trainees at a single institution.
The simulator group was compared to a control group who completed a traditional paper-based facial flap exercise. Both groups underwent didactic lectures prior to completing their respective exercises. Pre- and post-exercise Likert scale surveys measuring experience, understanding, effectiveness, and realism were completed by both groups. Central tendency, variability, and confidence intervals were measured to evaluate the outcomes. Results: Trainees completing the facial flap simulator reported a statistically significant (p < 0.05) improvement in overall expertise in facial flap procedures, design of facial flaps, and excision of standing cutaneous deformities. No statistically significant improvement was seen in the control group. Conclusions: Trainees found the facial flap simulator to be an effective and useful training tool with a high level of realism in surgical education of facial reconstruction. Surgical simulators can serve as an adjunct to trainee education, especially during extraordinary times such as the novel coronavirus disease 2019 pandemic, which significantly impacted surgical training. Level Of Evidence: NA Laryngoscope, 131:E2444-E2448, 2021. abstract_id: PUBMED:10719485 Three-dimensional facial model reconstruction and plastic surgery simulation. Facial model reconstruction and surgical simulation are essential to plastic surgery in today's medicine. Both can help surgeons to design appropriate repair plans and procedures prior to actual surgery. In this paper, we exploit a metamorphosis technique in our new design. First, using metamorphosis and vision techniques, we can establish three-dimensional facial models from a given photo. Second, we design several morphing operators, including augmentation, cutting, and lacerating. Experiments show that the proposed algorithms can successfully create acceptable facial models and generate realistic visual effects of surgical simulation.
abstract_id: PUBMED:26319058 Three-dimensional visualization of the human face using DICOM data and its application to facial contouring surgery using free anterolateral thigh flap transfer. One of the main challenges faced by surgeons performing reconstructive surgery in cases of facial asymmetry due to hemifacial atrophy or tumor surgery is the restoration of the natural contour of the face. Soft-tissue augmentation using free-flap transfer is one of the most commonly used methods for facial reconstruction. The most important part of a successful reconstruction is the preoperative assessment of the volume, position, and shape of the flap to be transplanted. This study focuses on three cases of facial deformity due to hemifacial progressive atrophy or tumor excision. For the preoperative assessment, digital imaging and communications in medicine (DICOM) data obtained from computed tomography was used and applied to a three-dimensional (3D) picture software program (ZedView, LEXI, Tokyo, Japan). Using computer simulation, a mirror image of the unaffected side of the face was applied to the affected side, and 3D visualization was performed. Using this procedure, a postoperative image of the face and precise shape, position, and amount of the flap that was going to be transferred was simulated preoperatively. In all cases, the postoperative shape of the face was acceptable, and a natural shape of the face could be obtained. Preoperative 3D visualization using computer simulation was helpful for estimating the reconstructive procedure and postoperative shape of the face. Using free-flap transfer, this procedure facilitates the natural shape after reconstruction of the face in facial contouring surgery. abstract_id: PUBMED:30633150 Three-dimensional printing of facial contour based on preoperative computer simulation and its clinical application. Facial contouring is a complex procedure performed to alter tissue contents and restore facial appearance. 
However, it is difficult to measure the amount of tissue volume that is needed. This study demonstrated the use of preoperative computer simulation (PCS) and 3-dimensional (3D) printing in the contouring procedure to maximize outcomes. Three-dimensional surface imaging (3DSI) or computed tomography imaging (CTI) data were reconstructed into a 3D model by Mimics software. PCS was performed by simulating the changes in bone and soft tissue. The simulated volume change was calculated by Boolean operations. Finally, the virtual model was exported to a 3D printer to produce physical templates to guide the surgical plan. PCS and actual postoperative results were compared using objective rating scales and by cephalometrical measurements. With the direct guidance of PCS and 3D templates, the contouring procedure was performed accurately. Satisfactory facial contouring was achieved with less operative time. As rated by the plastic surgery panel, 45.8% of the 3DSI results and 41.7% of the CTI results were identical with the actual outcome, and none was rated poor. There were no significant differences in patient satisfaction between the PCS of 3DSI and CTI. Preoperative computer simulation is an accurate method for designing contour adjustment plans, and can be an efficient and reliable predictor of outcomes with customized templates. abstract_id: PUBMED:32442635 Three-dimensional acquisition technologies for facial soft tissues - Applications and prospects in orthognathic surgery. The management of patients with dento-maxillofacial deformities is based on assessments of the dental occlusion - facial skeleton - soft tissues triad. As societal demands and surgical practices have evolved, facial soft tissues have moved to the forefront of considerations in orthognathic surgery. Techniques are therefore required to analyze facial soft tissues objectively and reproducibly, for diagnosis, preoperative planning, and follow-up.
Several technologies are currently capable of providing three-dimensional (3D) models of the face, either by 3D reconstruction of traditional computed tomography or cone beam computed tomography data, or directly by stereophotogrammetry, laser scanning or structured light scanning. Multimodal image registration techniques allow bone base, dental occlusion and facial soft tissue information to be combined in a 3D virtual patient. Three-dimensional cephalometric analysis of the facial skeleton and skin is now perfectly integrated in virtual planning and is gradually gaining in automation and accuracy. Photorealistic 3D simulations allow optimal soft tissue planning and facilitate physician-patient communication. Finally, these facial modeling techniques facilitate post-operative studies of soft tissues, which generally involve comparisons of volumetric data. There are many research avenues to pursue and technical improvements are to be expected, particularly through the development of big data and artificial intelligence approaches. Answer: Three-dimensional facial simulation in orthognathic surgery has been found to be accurate enough for routine clinical use. A study that measured the accuracy of 3D computer simulation of soft tissue changes after orthognathic surgery found that for the majority of cephalometric landmarks measured, the difference between actual and simulated measurements was smaller than 0.5 mm. Only three landmarks had a difference greater than 0.5 mm, and these were in the region of the labial landmarks. This suggests that 3D computer surgical simulation of the soft tissue of the face is sufficiently precise for clinical application in routine orthognathic surgery (PUBMED:23642546). Moreover, other studies have shown that 3D surgical simulation improves surgical planning and outcomes.
For instance, a study comparing traditional two-dimensional planning (2DP) and three-dimensional surgical simulation (3DS) in the surgical correction of skeletal class III with facial asymmetry found that 3DS provided all the necessary information for planned surgical movements and should be considered during surgical planning to improve surgical outcomes, particularly the achievement of bilateral mandibular contour symmetry (PUBMED:29606563). Another study reported that surgical plan modifications after 3D simulation occurred in 93% of cases, with significant changes in landmark positions and rotations, indicating that 3D planning can lead to more accurate corrections of facial asymmetry (PUBMED:28071714). In summary, the evidence suggests that 3D facial simulation is accurate and beneficial in orthognathic surgery, aiding in the precision of soft tissue predictions and improving surgical planning and outcomes.
Instruction: Does plate adaptation affect stability? Abstracts: abstract_id: PUBMED:27547243 Metatarsophalangeal joint stability: a systematic review on the plantar plate of the lesser toes. Background: Instability of the metatarsophalangeal (MTP) joints of the lesser toes (digiti 2-5) is increasingly being treated by repair of the plantar plate (PP). This systematic review examines the anatomy of the plantar plate of the lesser toes, and the relation between the integrity of the plantar plates of the lesser toes and lesser MTP joint stability. Methods: The databases of Embase.com, Medline (Ovid), Web of Science, Scopus, Cochrane, Pubmed not medline, Cinahl (ebsco), ProQuest, Lilacs, Scielo and Google Scholar were searched in June 2015 from inception. Studies were included if they were in English, contained primary data, and had a focus on plantar plate anatomy of the lesser toes or on the relationship between integrity of the plantar plate and MTP joint (in)stability. Study characteristics were extracted into two main tables and descriptive anatomical and histological data were summarized into one schematic 3D drawing of the plantar plate. Results: Nine studies were included in this systematic review, of which five addressed plantar plate anatomy as such and four focused directly and indirectly on plantar plate integrity related to MTP joint stability. Conclusion: This is the first systematic review regarding plantar plate anatomy related to MTP joint stability of the lesser toes. This review iterates the importance of plantar plate anatomy and integrity for MTP joint stability, and it delineates the lack of primary data regarding plantar plate anatomy of the lesser toes and MTP joint stability. abstract_id: PUBMED:20804977 Auditory adaptation in vocal affect perception. Previous research has demonstrated perceptual aftereffects for emotionally expressive faces, but the extent to which they can also be obtained in a different modality is unknown. 
In two experiments we show for the first time that adaptation to affective, non-linguistic vocalisations elicits significant auditory aftereffects. Adaptation to angry vocalisations caused voices drawn from an anger-fear morphed continuum to be perceived as less angry and more fearful, while adaptation to fearful vocalisations elicited opposite aftereffects (Experiment 1). We then tested the link between these aftereffects and the underlying acoustics by using caricatured adaptors. Although caricatures exaggerated the acoustical and affective properties of the vocalisations, the caricatured adaptors resulted in aftereffects which were comparable to those obtained with natural vocalisations (Experiment 2). Our findings suggest that these aftereffects cannot be solely explained by low-level adaptation to acoustical characteristics of the adaptors but are likely to depend on higher-level adaptation of neural representations of vocal affect. abstract_id: PUBMED:36081030 Unsupervised Domain Adaptive Corner Detection in Vehicle Plate Images. Rectification of vehicle plate images helps to improve the accuracy of license-plate recognition (LPR). It is a perspective-transformation process to project images as if taken from the front geometrically. To obtain the projection matrix, we require the (x, y) coordinates of four corner positions of plates in images. In this paper, we consider the problem of unsupervised domain adaptation for corner detection in plate images. We trained a model with plate images of one country, the source domain, and applied a domain adaptation scheme so that the model is able to work well on the plates of a different country, the target domain. For this study, we created a dataset of 22,096 Korean plate images with corner labels as the source domain and 6762 Philippine plate images as the target domain.
To address this problem, we propose a heatmap-based corner-detection model, which outperforms existing scalar-regression methods, and an image classifier trained on mixed source and target images for domain adaptation. The proposed approach achieves better accuracy, a 19.1% improvement compared with the baseline discriminator-based domain adaptation scheme. abstract_id: PUBMED:24622982 Stability of cervical spine after one-level corpectomy using different numbers of screws and plate systems. Anterior corpectomy and reconstruction using a plate with locking screws are standard procedures for the treatment of cervical spondylotic myelopathy. Although adding more screws to the construct will normally result in improved fixation stability, several issues need to be considered. Past reports have suggested that increasing the number of screws can result in increased spinal rigidity, decreased spine mobility, loss of bone and, possibly, screw loosening. In order to overcome this, options for constrained, semi-constrained or hybrid screw and plate systems were later introduced. The purpose of this study is to compare the stability achieved by four and two screws using different plate systems after one-level corpectomy with cage placement. A three-dimensional finite-element model of an intact C1-C7 segment was developed from computed tomography data sets, including the cortical bone, soft tissue and simulated corpectomy fusion at C4-C5. A spinal cage and an anterior cervical plate with different numbers of screws and plate systems were constructed to fit a one-level corpectomy of C5. A moment load of 1.0 N m was applied to the superior surface of C1, while C7 was fixed in all degrees of freedom. The kinematic stability of a two-screw plate was found to be statistically equivalent to a four-screw plate for one-level corpectomy.
Thus, a two-screw plate can be a better option for fusion, conferring comparable stability after one-level anterior cervical corpectomy, instead of a four-screw plate. abstract_id: PUBMED:22675409 Biomechanical in vitro - stability testing on human specimens of a locking plate system against conventional screw fixation of a proximal first metatarsal lateral displacement osteotomy. Introduction: The aim of this study was to examine resistance to angulation and displacement of the internal fixation of a proximal first metatarsal lateral displacement osteotomy, using a locking plate system compared with a conventional crossed screw fixation. Materials And Methodology: Seven anatomical human specimens were tested. Each specimen was tested with a locking screw plate as well as a crossed cancellous screw fixation. The statistical analysis was performed by the Friedman test. The level of significance was p = 0.05. Results: We found greater stability about all three analyzed axes of movement for the locking plate (PLATE) than for the crossed screw osteosynthesis (CSO). The Friedman test showed statistical significance at a level of p = 0.05 for all groups and both translational and rotational movements. Conclusion: The results of our study confirm that fixation of the lateral proximal first metatarsal displacement osteotomy with a locking plate is a technically simple procedure with superior stability. abstract_id: PUBMED:32206665 Long-term Stability after Reduction of Mandible Fracture by Keyhole Plate: Evaluation at the Time of Plate Removal. Background: Various types of miniplates have been developed and used for the reduction of facial bone fractures. We introduced Yang's Keyhole (YK) plate, and reported on its short-term stability. The purpose of this study was to evaluate the long-term stability of the YK plate, as a follow-up study, by examining patients who had undergone mandibular fracture reduction with the YK plate and later returned for plate removal.
Methods: We reviewed the medical records of 16 patients who underwent mandibular fracture fixation using a YK plate (group I) and 17 patients who underwent mandibular fracture fixation using a conventional plate (group II). Assessment was then made of malunion, occlusal stability, discomfort during application, and clinical symptoms. Results: From January 2015 to December 2017, a total of 36 patients underwent mandibular fracture surgery using a YK plate. A total of 16 patients received plate removal. Among them, 15 were male and 1 female. The average age was 26 years. The surgical sites were the mandibular angle in 12 cases, the mandibular symphysis in 4, and the subcondyle in 2. The application period of the YK plate was an average of 335 days. During the same period, 45 patients underwent surgery with the conventional plate. A total of 17 patients received plate removal. Among them, 15 were male and 2 females. The average age was 36 years. The surgical sites were the mandibular angle in 8 cases, the mandibular symphysis in 4, and the subcondyle in 2. The application period of the conventional plate was an average of 349 days. No malocclusion occurred at the time of removal, and occlusion was stable. No patient complained of joint disease or discomfort. Conclusion: The YK plate system, in which the screw is inserted first and the plate then applied for clinical convenience, caused no particular problems and showed no significant difference from the conventional plate. abstract_id: PUBMED:34851745 Small directional treadmill perturbations induce differential gait stability adaptation. Introducing unexpected perturbations to challenge gait stability is an effective approach to investigate balance control strategies. Little is known about the extent to which people can respond to small perturbations during walking. This study aimed to determine how subjects adapted gait stability to multidirectional perturbations with small magnitudes applied on a stride-by-stride basis.
Ten healthy young subjects walked on a treadmill that either briefly decelerated belt speed ("stick"), accelerated belt speed ("slip"), or shifted the platform medial-laterally at right leg mid-stance. We quantified gait stability adaptation in both anterior-posterior and medial-lateral directions using margin of stability and its components, base of support, and extrapolated center of mass. Gait stability was disrupted upon initially experiencing the small perturbations as margin of stability decreased in the stick, slip, and medial shift perturbations and increased in the lateral shift perturbation. Gait stability metrics were generally disrupted more for perturbations in the coincident direction. Subjects employed both feedback and feedforward strategies in response to the small perturbations, but mostly used feedback strategies during adaptation. Subjects primarily used base of support (foot placement) control in the lateral shift perturbation and extrapolated center of mass control in the slip and medial shift perturbations. These findings provide new knowledge about the extent of gait stability adaptation to small magnitude perturbations applied on a stride-by-stride basis and reveal potential new approaches for balance training interventions to target foot placement and center of mass control.NEW & NOTEWORTHY Little is known about if and how humans can adapt to small magnitude perturbations experienced on a stride-by-stride basis during walking. Here, we show that even small perturbations disrupted gait stability and that subjects could still adapt their reactive balance control. Depending on the perturbation direction, subjects might prefer adjusting their foot placement over their center of mass and vice versa. These findings could help potentially tune balance training to target specific aspects of balance. abstract_id: PUBMED:27566618 Rotational stability in screw-fixed scaphoid fractures compared to plate-fixed scaphoid fractures. 
Background: The literature describes the treatment of scaphoid fractures comparing the volar and dorsal approaches, the advantages and disadvantages of percutaneous screw fixation, as well as the treatment of scaphoid nonunions using different types of cancellous or corticocancellous bone grafts. Yet, to date no studies are available comparing rotational stability in screw-fixed scaphoid fractures to angular stable plate systems. The purpose of this study is to provide reliable data about rotational stability in stabilised scaphoid fractures and to gain information about the rigidity and the stability of the different types of fixation. Methods: Three groups of different stabilisation methods on standardised scaphoid B2 fractures were tested for rotational stability. Stabilisation was achieved using one or two cannulated compression screws (CCS) or angular stable plating. We performed ten repetitive cycles up to 10°, 20° and 30° rotation, measuring the maximum torque and the average dissipated work at each angle level. Results: Our study showed that rotational stability using a two-CCS fixation is significantly (p < 0.05) higher than with a single CCS fixation. Using the angular stable plate system was also superior to the single CCS (p < 0.05). There was, however, no significant difference between two-CCS fixation and angular stable plate fixation. Conclusion: Even though the indications for using screws or plate systems might differ, and plate osteosynthesis may be preferable in the treatment of dislocated or comminuted fractures as well as for nonunions, our study showed better rotational stability when more than one screw was used for osteosynthesis. Angular stable plating of scaphoid fractures also provides more rotational stability than single CCS fixation. The authors therefore hypothesise higher union rates in scaphoid fractures using more stable fixation systems.
abstract_id: PUBMED:35270606 Variability of Postural Stability and Plantar Pressure Parameters in Healthy Subjects Evaluated by a Novel Pressure Plate. Background: Postural stability and plantar pressure parameters can be assessed by baropodometry; nevertheless, they are often affected by low repeatability. The aim of the study was to test the accuracy and repeatability of a novel resistive sensor pressure plate and to establish the most reliable baropodometric parameters. Methods: Accuracy and repeatability of the FM12050 BTS-Bioengineering plate measurements were assessed by using different weights in static conditions across three sessions. Subsequently, 20 healthy subjects were assessed by 30-s stabilometric analysis in bipedal standing with open eyes across four trials in two sessions, morning and afternoon. Results: Pressure plate repeatability in measuring the static weights was very high, and plate measurements were correlated to the scale measurements (Pearson’s coefficient = 0.99). Percentage of load distribution between left and right foot and in rearfoot and forefoot regions showed the largest repeatability (coefficient of variation < 5%) across trials. Eventually, median and percentiles (25−75%) were reported for each parameter. Conclusions: This study helped to assess the accuracy and repeatability of a novel pressure plate in static conditions and to define the most reliable parameters for the assessment of postural stability and foot morphology. The present healthy-subject stabilometric dataset may be used as reference data in the evaluation of pathological populations. abstract_id: PUBMED:12420268 Does plate adaptation affect stability? A biomechanical comparison of locking and nonlocking plates. 
Purpose: The purpose of this investigation was to determine whether the degree of plate adaptation and effects of locking influenced the mechanical behavior of the plate/screw/substrate system for 2.0-mm monocortical superior border plates and 2.4-mm reconstruction plates secured at the inferior border intended to stabilize simulated mandibular angle fractures. Materials And Methods: A total of 130 polyurethane synthetic mandible replicas (Synbone, Landquart, Switzerland) were used in this investigation. Five controls each, for incisal edge and molar loading, as well as 5 samples each for 2.4-mm locking and nonlocking reconstruction plates and 2.0-mm locking and nonlocking monocortical superior border plates, intimately adapted (0.0-mm offset), 1.0-mm offset and 2.0-mm offset were subjected to loading at the incisal edge and molar region with an Instron 1331 (Instron Corp, Canton, MA) servohydraulic mechanical testing unit. Load/displacement data were recorded, and yield load, yield displacement, and stiffness were determined. Mean and standard deviation values were calculated. Statistically significant differences were determined for the effects of locking and degree of plate adaptation using a 1-way analysis of variance (P <.05). For differences within categories and among groups, a Scheffé multiple-comparison test was performed. First-order polynomial best-fit curves were created for each group to further evaluate and compare the mechanical behavior. Results: There were no statistically significant differences (P <.05) for yield load, yield displacement, and stiffness within the 2.4 and the 2.0 locking categories for both molar and incisal edge loading. For the 2.4 nonlocking category, there were statistically significant differences for yield load, yield displacement, and stiffness between the 0.0-mm offset group and both the 1.0-mm and 2.0-mm offset groups for both molar and incisal edge loading but not between the 1.0-mm and 2.0-mm groups.
For the 2.0 nonlocking category, there were statistically significant differences for yield load, yield displacement, and stiffness between both the 0.0-mm and 1.0-mm offset groups and the 2.0-mm offset group for both molar and incisal edge loading but not between the 0.0-mm and 1.0-mm groups. Conclusions: The degree of adaptation (amount of offset) affected the mechanical behavior of the nonlocking systems evaluated. It did not affect the locking systems. Failure occurred as an "all-or-nothing" pattern. Answer: Yes, plate adaptation does affect stability. A biomechanical comparison of locking and nonlocking plates showed that the degree of plate adaptation influenced the mechanical behavior of the plate/screw/substrate system intended to stabilize simulated mandibular angle fractures. Specifically, there were no statistically significant differences in yield load, yield displacement, and stiffness within the locking plate categories for both molar and incisal edge loading. However, for nonlocking plates, significant differences were observed between intimately adapted plates (0.0-mm offset) and those with 1.0-mm and 2.0-mm offsets for both molar and incisal edge loading, but not between the 1.0-mm and 2.0-mm offset groups. The study concluded that the degree of adaptation affected the mechanical behavior of the nonlocking systems evaluated, but it did not affect the locking systems, with failure occurring in an "all-or-nothing" pattern (PUBMED:12420268).
Instruction: Endovascular treatment for chronic atherosclerotic occlusive mesenteric disease: is stenting superior to balloon angioplasty? Abstracts: abstract_id: PUBMED:21104621 Endovascular treatment for chronic atherosclerotic occlusive mesenteric disease: is stenting superior to balloon angioplasty? Background: Endovascular treatment is an increasingly used therapeutic option in patients with chronic atherosclerotic occlusive mesenteric disease. The purpose of this study was to evaluate patency and mortality, including follow-up, in patients treated with visceral artery percutaneous transluminal angioplasty (PTA) or stenting. Patients And Methods: A retrospective review of 17 consecutive patients (4 women, 13 men) with endovascular treatment for symptomatic chronic mesenteric ischemia from 1998 to 2004 was performed. The mean follow-up period was 42 months. Patient demographics, interventional details, primary and/or secondary patency and mortality were recorded. Cumulative mortality and patency rates were determined using Kaplan-Meier life table analysis. Results: Twenty-six interventions (PTA alone n=13, PTA and stenting n=13) were performed in 17 patients. Interventions were performed in the superior mesenteric artery (n=13) and celiac artery (n=13). The re-intervention rate was 30% (6/26). Re-interventions were performed for the superior mesenteric artery (n=4) and celiac artery (n=2). Cumulative overall 1-year results were primary patency rate 81%, secondary patency rate 94%, and survival rate 82%. Cumulative 10-year results were primary patency rate 73%, secondary patency rate 94%, and survival rate 65%. The 10-year secondary patency rate was 100% in patients post initial stenting and 86% in patients post initial PTA. Conclusions: Long-term follow-up post endovascular treatment for chronic mesenteric ischemia demonstrated a considerable overall secondary patency rate of 94%.
However, the long-term secondary patency rate was higher in patients post initial stenting compared to PTA alone. abstract_id: PUBMED:11782687 Chronic mesenteric ischemia: diagnosis and treatment with balloon angioplasty and stenting. Background: The goal of our research was to evaluate the efficacy of endovascular treatment in stenosis of the superior mesenteric artery and the celiac trunk. Material/methods: During the period 1996-2001, 6 patients (1 woman, 5 men, ages 46 to 73) were referred to our department with abdominal angina. Angiograms were performed in order to establish the reasons. All these patients presented with postprandial abdominal pain and weight loss. Angiography showed stenosis in the superior mesenteric artery in 2 patients, in the celiac trunk in 1 patient, and in both vessels in 3 patients. Typical balloon angioplasty (PTA) was performed with a balloon catheter, 5-8 mm in diameter, inflated up to 18 atm. If more than 30% stenosis remained after PTA, the patient was referred for stenting. Results: In 5 of the 6 patients (83%) PTA was successful. In one patient (17%) with poor results from PTA, a Perflex stent (diameter 7 mm) was implanted in the superior mesenteric artery. The follow-up protocol included clinical and Doppler ultrasonographic examination at 6, 12, and 18 months after surgery. A good clinical and ultrasound outcome was found in 6 patients at 6 and 12 months, and in 4 patients at 18 months. Conclusions: Balloon angioplasty and stent placement seem to be efficient and safe methods of treatment for abdominal angina in stenosis of the superior mesenteric artery and celiac trunk. abstract_id: PUBMED:12522894 Chronic mesenteric ischemia due to superior mesenteric artery stenosis. Percutaneous therapy as alternative treatment Chronic mesenteric arterial ischemia is an uncommon condition associated with a high morbidity and mortality. It is most commonly caused by atherosclerotic occlusive disease. 
Patients may suffer epigastric or periumbilical post-prandial pain ten to thirty minutes after eating. A case of chronic mesenteric ischemia due to superior mesenteric artery stenosis is presented. The diagnosis was made with Doppler sonography and angiography, and the patient was treated with percutaneous transluminal angioplasty. The patient became completely asymptomatic. The purpose of this report is to present the case with its clinical and radiological features and to describe the percutaneous procedure. We believe that percutaneous treatment offers an improvement in this pathology with a low complication rate, shortening hospital stays and increasing patient comfort. abstract_id: PUBMED:31474341 Endovascular treatment of chronic mesenteric ischemia Introduction: Atherosclerosis occurring in the digestive arteries is rare and often asymptomatic. When it becomes symptomatic, surgical care is indicated. Conventional procedures are giving way to improved endovascular techniques applied to the mesenteric arteries. The aim of this single-center study was to evaluate short- and mid-term outcome after endovascular revascularization of the mesenteric arteries. Methods: We report a retrospective study of patients who underwent endovascular treatment of chronic mesenteric ischemia between 2013 and 2018. Results: Our population consisted of 10 patients. The average age was 60 years [range 45-78]. The clinical presentation combined abdominal pain and weight loss. All patients underwent computed tomographic angiography (CTA). Severe stenosis (>70%) involved the superior mesenteric artery (SMA) in ten cases, the celiac trunk in four cases and the inferior mesenteric artery in three. The procedure was performed under local anesthesia in all cases. The superior mesenteric artery was revascularized in all cases and the celiac trunk in two. Transluminal angioplasty was followed by deployment of a stent in all cases. The postoperative course was satisfactory.
Outcome was good with all patients being symptom-free at one month. Our average follow-up was three years [range 1-5]. All patients underwent a Duplex ultrasound every six months. Recurrence of symptomatology was reported in two patients at 18 months and 24 months. The first patient underwent CTA that showed superior mesenteric artery and celiac trunk stent stenosis. The patient underwent a second transluminal angioplasty with a drug eluting balloon. The second patient was admitted to the emergency room for acute mesenteric ischemia related to acute thrombosis of the superior mesenteric artery stent. Laparotomy enabled extensive resection of the small bowel and aorto-mesenteric venous antegrade bypass, but the patient died the same day. Conclusion: Endovascular treatment has an important role to play in the management of chronic mesenteric ischemia. It is associated with a high rate of technical success. Patients should be carefully followed-up because of the mid-term risk of recurrent symptoms associated with intra-stent restenosis or thrombosis. abstract_id: PUBMED:38002726 Advancements in Treatment Strategies for Chronic Mesenteric Ischemia: A Comprehensive Review. Chronic mesenteric ischemia (CMI) arises from the inability to achieve adequate intestinal blood flow after meals, leading to an imbalance between oxygen and metabolite supply and demand. The true incidence of CMI remains uncertain. However, the occurrence of mesenteric artery occlusive disease (MAOD) is relatively common among the elderly population. Delays in diagnosing CMI can often be attributed to several factors, including the variability in patient symptoms and the range of potential causes for chronic abdominal pain with weight loss. Mikkelson pioneered the introduction of a surgical treatment for occlusive lesions of the superior mesenteric artery (SMA) in 1957. The inaugural performance of endovascular revascularization (ER) for visceral vessels took place in 1980. 
The literature has documented two types of endovascular revascularization (ER) methods: percutaneous transluminal angioplasty (PTA) and primary stenting (PMAS). Despite the limited quality of available evidence, the consensus among experts is strongly in favor of PMAS over PTA alone for the treatment of atherosclerotic mesenteric artery stenosis. There are several key areas of focus for chronic mesenteric ischemia (CMI) treatment. Randomized controlled trials comparing different stent types, such as covered stents versus bare metal stents, are needed to evaluate efficacy, patency rates, and long-term outcomes in CMI patients. abstract_id: PUBMED:17662218 Endovascular therapy as an alternative treatment in chronic mesenteric ischemia Introduction: Chronic mesenteric ischemia (CMI) is an uncommon disorder. Traditional treatment consists of open surgical (OS) revascularization. We report a case of CMI treated with percutaneous angioplasty and stenting. Case Report: A 77-year-old woman reported a history of postprandial abdominal pain, weight loss, asthenia, and anorexia. On physical examination, the only relevant sign was a systolic murmur. Oral panendoscopy showed chronic atrophic gastritis and patchy intestinal metaplasia with Helicobacter pylori colonization; rectal sigmoidoscopy showed colonic diverticula. Abdominal ultrasound color-flow imaging confirmed superior mesenteric artery (SMA) stenosis > 70%. Angiography confirmed proximal SMA subocclusion and celiac trunk stenosis of 50%. Percutaneous angioplasty and stenting were carried out. The patient made an immediate recovery and remains without postprandial pain. Discussion: CMI due to atherosclerotic occlusive disease was first reported in 1936 and is a relatively uncommon disorder. This entity usually occurs in patients over 60 years of age with other atherosclerotic symptoms. CMI presents with postprandial abdominal pain, anorexia due to "fear of food", and weight loss.
Despite high perioperative morbidity and mortality rates, OS revascularization has traditionally been the treatment of choice. The most frequent indications include pain relief, improvement of nutritional status, and prevention of intestinal infarction. Endovascular therapy seems to be a valid alternative, mainly in high-risk patients, with good immediate and medium-term results. abstract_id: PUBMED:28376703 Symptomatic Delayed Aortic Dissection After Superior Mesenteric Artery Stenting for Chronic Mesenteric Ischemia. Chronic mesenteric ischemia most commonly occurs secondary to atherosclerotic disease of the mesenteric arteries. Patients are often older than 60 years and can present with postprandial abdominal pain, nausea, diarrhea, and significant weight loss. Symptomatic disease has traditionally been managed with open surgical repair, but endovascular strategies, such as percutaneous angioplasty and stenting, have emerged as the mainstays of therapy. Complications from stenting include plaque embolization, thrombosis, perforation, or dissection of the mesenteric arteries. We present a patient with symptomatic acute aortic dissection 18 months after celiac and superior mesenteric artery stent placement for chronic mesenteric ischemia. abstract_id: PUBMED:19370299 Chronic mesenteric ischemia: efficacy and outcome of endovascular therapy. Chronic mesenteric ischemia is a rare condition caused by occlusive disease of the mesenteric vessels and manifested most commonly as abdominal pain. While the traditional therapy in symptomatic patients has been surgery, recent improvements in interventional devices and refinement in techniques have increased the popularity of endovascular treatment. The high procedural success and the low complication rate make the catheter-based approach an interesting alternative to surgery. 
Percutaneous angioplasty and stenting is now recognized as a minimally invasive means of obtaining good long-term results and is consequently suggested for the primary treatment of chronic mesenteric ischemia. This article presents a review of the literature on indications and technical aspects of endovascular treatment, with emphasis on short- and long-term outcomes. abstract_id: PUBMED:27581131 Endovascular Treatment of Chronic Mesenteric Ischemia in the Setting of Occlusive Superior Mesenteric Artery Lesions. Background: Endovascular intervention is considered the first-line treatment for chronic mesenteric ischemia (CMI) when feasible. Two-vessel revascularization is most definitive when celiac (CA) and superior mesenteric arteries (SMA) are diseased, but single-vessel intervention may be performed in patients with 2-vessel disease due to anatomic/technical factors. We evaluated anatomic predictors of clinical outcomes associated with endovascular treatment of CMI among patients with occlusive SMA lesions. Methods: Patients with CMI treated with endovascular revascularization over 10 years were identified. Patients with SMA occlusions were selected for analysis. Between-group comparisons based on inclusion of an SMA revascularization were evaluated using t-test and chi-squared test. Freedom from symptomatic recurrence or repeat intervention was analyzed using proportional hazards regression. Results: Fifty-four patients with CMI were analyzed. Sixteen (29.6%) patients had CA-only intervention, and 38 (70.4%) patients had SMA revascularization with or without CA intervention. No significant differences in demographics or comorbidity were identified between groups. In the CA-only intervention group, 8 of the 16 (50%) patients developed symptomatic recurrence compared with 8 of the 38 (21.1%) patients whose intervention included the SMA.
Patients treated without SMA intervention also had decreased freedom from both symptomatic recurrence (hazard ratio [HR] 3.2, 95% confidence interval [CI] 1.2-8.6, P = 0.016) and repeat intervention (HR 5.5, 95% CI 1.8-16.3, P = 0.001). Conclusions: Among patients with CMI and occlusive SMA lesions, SMA revascularization appears to be the key determinant for symptomatic outcomes and repeat intervention. Patient counseling should include potential future need for surgical revascularization if endovascular SMA treatment cannot be accomplished. abstract_id: PUBMED:18620112 Long-term outcomes of endoluminal therapy for chronic atherosclerotic occlusive mesenteric disease. Percutaneous interventions for symptomatic chronic mesenteric arterial atherosclerosis are rapidly gaining popularity. This study evaluates the long-term anatomic and functional outcomes of endovascular therapy for chronic atherosclerotic occlusive mesenteric arterial disease at a tertiary referral academic medical center. A retrospective analysis of records from patients who underwent endovascular mesenteric arterial interventions between 1984 and 2006 for chronic mesenteric ischemia was performed. Cases of acute ischemia or cases with associated bowel resection were excluded. Results were standardized to current Society for Vascular Surgery (SVS) criteria. Kaplan-Meier survival analyses were performed to assess time-dependent outcomes. Factor analyses were performed using either a multivariate model for fixed variables or a Cox proportional hazard model for time-dependent variables. Data are presented as mean +/- SEM. Thirty-one patients (84% female, average age 70, range 43-90, years) with 41 visceral vessel interventions were identified. Indications for intervention included weight loss >10 kg (61%) and/or postprandial pain (94%). The median SVS comorbidity score was 15 (range 10-24). All had three-vessel athero-occlusive mesenteric disease with a median of two vessels occluded on angiography. 
The median number of vessels revascularized was two. In all cases stenosis, and not occlusion, was treated. The 90-day mortality was 20% and the major morbidity was 6%. While primary and assisted patency rates for the interventions at 7 years were 69 +/- 8% and 72 +/- 9%, respectively (mean +/- SE, n > or = 10), cumulative freedom from recurrent symptoms was only 56 +/- 10%. Twenty percent of the vessels developed restenosis at a median interval of 0.29 years (range 0.3-2.8), with a freedom from restenosis of 79 +/- 8% at 5 years. Fifty percent of these, all with recurrent symptoms, were reintervened successfully with balloon angioplasty and resolution of their symptoms. There was no significant difference between the celiac and superior mesenteric artery outcomes. The correlation of recurrent symptoms and restenosis was significant (p < 0.001). Endoluminal therapy for chronic mesenteric ischemia carries a low morbidity and mortality in a high-risk population. While anatomic patency remains high, long-term therapeutic benefit is not achieved. In its present iteration, endovascular therapy for mesenteric ischemia should be limited to those patients without an open surgical option. Answer: The endovascular treatment for chronic atherosclerotic occlusive mesenteric disease involves both percutaneous transluminal angioplasty (PTA) and stenting. The evidence from the provided abstracts suggests that stenting may be superior to balloon angioplasty alone in certain aspects. One study found that the long-term secondary patency rate was higher in patients post initial stenting compared to PTA alone, with a 10-year secondary patency rate of 100% in patients post initial stenting versus 86% in patients post initial PTA (PUBMED:21104621). 
Another study reported that balloon angioplasty and stent placement seem to be efficient and safe methods of treatment for abdominal angina in stenosis of the superior mesenteric artery and celiac trunk, with good clinical and ultrasound outcomes in all six patients at 6 and 12 months and in four of six patients at 18 months (PUBMED:11782687). Furthermore, advancements in treatment strategies for chronic mesenteric ischemia indicate a strong consensus among experts in favor of primary stenting over PTA alone for the treatment of atherosclerotic mesenteric artery stenosis (PUBMED:38002726). This is supported by the fact that endovascular treatment has an important role in the management of chronic mesenteric ischemia and is associated with a high rate of technical success, although patients should be carefully followed up because of the mid-term risk of recurrent symptoms associated with intra-stent restenosis or thrombosis (PUBMED:31474341). In summary, while both PTA and stenting are used in the endovascular treatment of chronic mesenteric ischemia, the evidence suggests that stenting may offer better long-term patency rates and is generally favored by experts over PTA alone. However, careful follow-up is necessary due to the potential for restenosis or thrombosis after stenting (PUBMED:31474341, PUBMED:38002726).
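Several of the studies above report patency and survival using Kaplan-Meier life-table analysis (e.g., PUBMED:21104621). A minimal sketch of the product-limit estimator behind those quoted patency percentages, using hypothetical follow-up data rather than values from any cited study:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of a patency curve.

    times: follow-up in months for each treated vessel.
    events: 1 = loss of patency at that time, 0 = censored (still
    patent at last follow-up). Both lists here are hypothetical.
    """
    data = sorted(zip(times, events))  # order by follow-up time
    n_at_risk = len(data)
    survival = 1.0
    curve = []  # (time, estimated patency) at each failure time
    i = 0
    while i < len(data):
        t = data[i][0]
        failures = sum(1 for tt, e in data if tt == t and e == 1)
        leaving = sum(1 for tt, _ in data if tt == t)  # failures + censored
        if failures:
            survival *= 1.0 - failures / n_at_risk
            curve.append((t, survival))
        n_at_risk -= leaving
        i += leaving
    return curve

# Five hypothetical vessels: patency lost at 6, 12 and 18 months;
# two censored (still patent at last follow-up) at 12 and 24 months.
print(kaplan_meier([6, 12, 12, 18, 24], [1, 0, 1, 1, 0]))
```

In practice such curves come from statistical packages; the sketch only shows how censored follow-up yields the cumulative patency rates quoted in the abstracts.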
Instruction: Intensity modulated radiotherapy in early stage Hodgkin lymphoma patients: is it better than three dimensional conformal radiotherapy? Abstracts: abstract_id: PUBMED:22857015 Intensity modulated radiotherapy in early stage Hodgkin lymphoma patients: is it better than three dimensional conformal radiotherapy? Background: Cure rates of early Hodgkin lymphoma are high and avoidance of late toxicities is of paramount importance. This comparative study aims to assess the normal tissue sparing capability of intensity-modulated radiation therapy (IMRT) versus standard three-dimensional conformal radiotherapy (3D-CRT) in terms of dose-volume parameters and normal tissue complication probability (NTCP) for different organs at risk in supradiaphragmatic Hodgkin lymphoma (HL) patients. Methods: Ten HL patients were actually treated with 3D-CRT and all treatments were then re-planned with IMRT. Dose-volume parameters for thyroid, oesophagus, heart, coronary arteries, lung, spinal cord and breast were evaluated. Dose-volume histograms generated by TPS were analyzed to predict the NTCP for the considered organs at risk, according to different endpoints. Results: Regarding dose-volume parameters no statistically significant differences were recorded for heart and origin of coronary arteries. We recorded significantly lower V30 values with IMRT for the oesophagus (6.42 vs 0.33, p = 0.02) and lungs (4.7 vs 0.1, p = 0.014 for the left lung and 2.59 vs 0.1, p = 0.017 for the right lung) and lower V20 for the spinal cord (17.8 vs 7.2, p = 0.02). Moreover, the maximum dose to the spinal cord was lower with IMRT (30.2 vs 19.9, p < 0.001). Higher V10 with IMRT for the thyroid (64.8 vs 95, p = 0.0019) and higher V5 for the lungs (30.3 vs 44.8, p = 0.03 for the right lung and 28.9 vs 48.1, p = 0.001 for the left lung) were found.
Higher V5 and V10 for the breasts were found with IMRT (V5: 4.14 vs 20.6, p = 0.018 for the left breast and 3.3 vs 17, p = 0.059 for the right breast; V10: 2.5 vs 13.6, p = 0.035 for the left breast and 1.7 vs 11, p = 0.07 for the right breast). As for the NTCP, our data point out that IMRT is not always likely to significantly increase the NTCP to OARs. Conclusions: In male HL patients IMRT seems feasible and accurate, while in female HL patients IMRT should be used with caution. abstract_id: PUBMED:26630566 Cardiac Exposure in the Dynamic Conformal Arc Therapy, Intensity-Modulated Radiotherapy and Volumetric Modulated Arc Therapy of Lung Cancer. Purpose: To retrospectively evaluate the cardiac exposure in three cohorts of lung cancer patients treated with dynamic conformal arc therapy (DCAT), intensity-modulated radiotherapy (IMRT), or volumetric modulated arc therapy (VMAT) at our institution in the past seven years. Methods And Materials: A total of 140 lung cancer patients were included in this institutional review board approved study: 25 treated with DCAT, 70 with IMRT and 45 with VMAT. All plans were generated in the same commercial treatment planning system and have been clinically accepted and delivered. The dose distribution to the heart and the effects of tumor laterality, the irradiated heart volume and the beam-to-heart distance on the cardiac exposure were investigated. Results: The mean dose to the heart among all 140 plans was 4.5 Gy. Specifically, the heart received on average 2.3, 5.2 and 4.6 Gy in the DCAT, IMRT and VMAT plans, respectively. The mean heart doses for the left and right lung tumors were 4.1 and 4.8 Gy, respectively. No patients died with evidence of cardiac disease. Three patients (2%) with a preexisting cardiac condition developed cardiac disease after treatment. Furthermore, the cardiac exposure was found to increase linearly with the irradiated heart volume while decreasing exponentially with the beam-to-heart distance.
Conclusions: Compared to old technologies for lung cancer treatment, modern radiotherapy treatment modalities demonstrated better heart sparing. But the heart dose in lung cancer radiotherapy is still higher than that in the radiotherapy of breast cancer and Hodgkin's disease, where cardiac complications have been extensively studied. With strong correlations of mean heart dose with beam-to-heart distance and irradiated heart volume, caution should be exercised to avoid long-term cardiac toxicity in lung cancer patients undergoing radiotherapy. abstract_id: PUBMED:34358649 A Systematic Review on Intensity Modulated Radiation Therapy for Mediastinal Hodgkin's Lymphoma. Background: Secondary malignant neoplasms (SMNs) and cardiovascular diseases induced by chemotherapy and radiotherapy represent the main cause of excess mortality for early-stage Hodgkin lymphoma patients, especially when the mediastinum is involved. Conformal radiotherapy techniques such as Intensity-Modulated Radiation Therapy (IMRT) could allow a reduction of the dose to the organs-at-risk (OARs) and therefore limit long-term toxicity. Methods: We performed a systematic review of the current literature regarding comparisons between IMRT and conventional photon beam radiotherapy, or between different IMRT techniques, for the treatment of mediastinal lymphoma. Results And Conclusions: IMRT allows a substantial reduction of the volumes of OARs exposed to high doses, reducing the risk of long-term toxicity. This benefit is counterbalanced by the increase of volumes receiving low doses, which could potentially increase the risk of SMNs. Treatment planning should be personalized based on patient and disease characteristics. Dedicated techniques such as "butterfly" VMAT often provide the best trade-off.
abstract_id: PUBMED:26744079 Evolution of radiation techniques in the treatment of mediastinal lymphoma: from 3D conformal radiotherapy (3DCRT) to intensity-modulated RT (IMRT) using helical tomotherapy (HT): a single-centre experience and review of the literature. Objective: To evaluate radiation techniques and their toxicity in the treatment of Hodgkin's lymphoma (HL) and non-Hodgkin's lymphoma (NHL) with mediastinal disease over a 10-year period. Methods: Between 2003 and 2015, 173 patients with Stage I-III nodal lymphoma were treated in our institution: some of these patients were irradiated for HL or NHL with mediastinal disease. Some of the patients were treated by three-dimensional conformal radiotherapy (3DCRT), others by intensity-modulated radiotherapy (IMRT). Results: We studied 26 males and 43 females with a median age of 26 years. The median follow-up was 43 months. 49 patients were treated by 3DCRT and 20 patients by IMRT. The median dose received by patients treated for NHL was 40 Gy (range: 36-44 Gy), and the median dose received by patients with HL was 30 Gy (range: 30-36 Gy). Between 2003 and 2006, 16 patients were treated by 3DCRT vs 0 patients by IMRT. Between 2007 and 2009, 16 patients received 3DCRT and one patient received IMRT. Between 2010 and 2015, 19 patients received IMRT, and no patients received 3DCRT. 11 of the 20 (55%) patients treated by IMRT and 35 of the 49 (71.4%) patients treated by 3DCRT experienced acute toxicity. Among the patients treated by 3DCRT, one patient experienced Grade 1 radiation pneumonitis and two patients experienced Grade 1 acute mucositis. No late toxicity was observed in patients treated by IMRT. Conclusion: Improvement of radiation techniques for HL and NHL appears to have improved acute and late clinical safety. Longer follow-up is necessary to evaluate very late toxicity. Advances In Knowledge: Improvement of radiation techniques for HL and NHL appears to improve the tolerance. 
abstract_id: PUBMED:23619032 Estimated risk of cardiovascular disease and secondary cancers with modern highly conformal radiotherapy for early-stage mediastinal Hodgkin lymphoma. Background: Hodgkin lymphoma (HL) survivors have an increased morbidity and mortality from secondary cancers and cardiovascular disease (CD). We evaluate doses with involved node radiotherapy (INRT) delivered as 3D conformal radiotherapy (3D CRT), volumetric modulated arc therapy (VMAT), or proton therapy (PT), compared with the extensive Mantle Field (MF). Patients And Methods: For 27 patients with early-stage, mediastinal HL, treated with chemotherapy and INRT delivered as 3D CRT (30 Gy), we simulated an MF (36 Gy), INRT-VMAT and INRT-PT (30 Gy). Dose to the heart, lungs, and breasts, estimated risks of CD, lung (LC) and breast cancer (BC), and corresponding life years lost (LYL) were compared. Results: 3D CRT, VMAT or PT significantly lower the dose to the heart, lungs and breasts and provide lower risk estimates compared with MF, but with substantial patient variability. The risk of CD is not significantly different for 3D CRT versus VMAT. The risk of LC and BC is highest with VMAT. For LYL, PT is the superior modern technique. Conclusions: In early-stage, mediastinal HL modern radiotherapy provides superior results compared with MF. However, there is no single best radiotherapy technique for HL; the decision should be made at the individual patient level. abstract_id: PUBMED:16169675 Is intensity-modulated radiotherapy better than conventional radiation treatment and three-dimensional conformal radiotherapy for mediastinal masses in patients with Hodgkin's disease, and is there a role for beam orientation optimization and dose constraints assigned to virtual volumes?
Purpose: To evaluate the role of beam orientation optimization and the role of virtual volumes (VVs) aimed at protecting adjacent organs at risk (OARs), and to compare various intensity-modulated radiotherapy (IMRT) setups with conventional treatment with anterior and posterior fields and three-dimensional conformal radiotherapy (3D-CRT). Methods And Materials: Patients with mediastinal masses in Hodgkin's disease were treated with combined modality therapy (three to six cycles of adriamycin, bleomycin, vinblastine, and dacarbazine [ABVD] before radiation treatment). Contouring and treatment planning were performed with Somavision and CadPlan Helios (Varian Systems, Palo Alto, CA). The gross tumor volume was determined according to the prechemotherapy length and the postchemotherapy width of the mediastinal tumor mass. A 10-mm isotropic margin was added for the planning target volume (PTV). Because dose constraints assigned to OARs led to unsatisfactory PTV coverage, VVs were designed for each patient to protect adjacent OARs. The prescribed dose was 40 Gy to the PTV, delivered according to guidelines from International Commission on Radiation Units and Measurements Report No. 50. Five different IMRT treatment plans were compared with conventional treatment and 3D-CRT. Results: Beam orientation was important with respect to the amount of irradiated normal tissues. The best compromise in terms of PTV coverage and protection of normal tissues was obtained with five equally spaced beams (5FEQ IMRT plan) using dose constraints assigned to VVs. When IMRT treatment plans were compared with conventional treatment and 3D-CRT, dose conformation with IMRT was significantly better, with greater protection of the heart, coronary arteries, esophagus, and spinal cord. The lungs and breasts in women received a slightly higher radiation dose with IMRT compared with conventional treatments. 
The greater volume of normal tissue receiving low radiation doses could be a cause for concern. Conclusions: The 5FEQ IMRT plan with dose constraints assigned to the PTV and VV allows better dose conformation than conventional treatment and 3D-CRT, notably with better protection of the heart and coronary arteries. Of concern is the "spreading out" of low doses to the rest of the patient's body. abstract_id: PUBMED:22116023 Intensity-modulated radiotherapy and involved-node concept in patients with Hodgkin lymphoma: experience of the Gustave-Roussy Institute Purpose: To assess the clinical outcome of the involved-node radiotherapy concept with the use of intensity modulated radiotherapy (IMRT) in patients with localized supradiaphragmatic Hodgkin lymphoma. Patients And Methods: Patients with early-stage supradiaphragmatic Hodgkin lymphoma were treated with chemotherapy prior to irradiation. Radiation treatments were delivered using the involved-node radiotherapy (INRT) concept according to the EORTC guidelines. Intensity modulated radiotherapy was performed free-breathing. Results: Forty-seven patients with Hodgkin lymphoma (44 patients with primary Hodgkin lymphoma and three patients with recurrent disease) entered the study from January 2003 to December 2010. The median age was 31 years (range 17 to 62). Thirty patients had stage I-IIA, 14 had stage I-IIB disease and three had relapse. Forty-two patients received three to six cycles of adriamycin, bleomycin, vinblastine and dacarbazine (ABVD). The median radiation dose to patients was 36 Gy (range: 20-40). Protection of various organs at risk was satisfactory. The median follow-up was 57.4 months (range: 5.4-94.3). For patients with primary Hodgkin lymphoma, the 5-year survival and 5-year progression-free survival rates were 96% (95% confidence interval: 80-99) and 92% (95% confidence interval: 78-97), respectively. None of the three patients with recurrent disease has relapsed. 
Recurrences occurred in three patients: one was in-field relapse and two were visceral recurrences. Grade 3 acute lung toxicity (transient pneumonitis) occurred in one case. Conclusion: Our results suggest that patients with localized Hodgkin lymphoma can be safely and efficiently treated using the involved node irradiation concept and intensity modulated irradiation. abstract_id: PUBMED:19386423 Dosimetric comparison of three different involved nodal irradiation techniques for stage II Hodgkin's lymphoma patients: conventional radiotherapy, intensity-modulated radiotherapy, and three-dimensional proton radiotherapy. Purpose: To compare the dose distribution to targeted and nontargeted tissues in Hodgkin's lymphoma patients using conventional radiotherapy (CRT), intensity-modulated RT (IMRT), and three-dimensional proton RT (3D-PRT). Methods And Materials: CRT, IMRT, and 3D-PRT treatment plans delivering 30 cobalt Gray equivalent (CGE)/Gy to an involved nodal field were created for 9 Stage II Hodgkin's lymphoma patients (n = 27 plans). The dosimetric endpoints were compared. Results: The planning target volume was adequately treated using all three techniques. The IMRT plan produced the most conformal high-dose distribution; however, the 3D-PRT plan delivered the lowest mean dose to nontarget tissues, including the breast, lung, and total body. The relative reduction in the absolute lung volume receiving doses of 4-16 CGE/Gy for 3D-PRT compared with CRT ranged from 26% to 37% (p < .05), and the relative reduction in the absolute lung volume receiving doses of 4-10 CGE/Gy for 3D-PRT compared with IMRT was 48-65% (p < .05). The relative reduction in absolute total body volume receiving 4-30 CGE/Gy for 3D-PRT compared with CRT was 47% (p < .05). The relative reduction in absolute total body volume receiving a dose of 4 CGE/Gy for 3D-PRT compared with IMRT was 63% (p = .03). 
The mean dose to the breast was significantly less for 3D-PRT than for either IMRT or CRT (p = .03). The mean dose and absolute volume receiving 4-30 CGE/Gy for the heart, thyroid, and salivary glands were similar for the three modalities. Conclusion: In this favorable subset of Hodgkin's lymphoma patients without disease in or below the hila, 3D-PRT significantly reduced the dose to the breast, lung, and total body. These observed dosimetric advantages might improve the clinical outcomes of Hodgkin's lymphoma patients by reducing the risk of late radiation effects related to low-to-moderate doses in nontargeted tissues. abstract_id: PUBMED:28188697 A case study evaluating deep inspiration breath-hold and intensity-modulated radiotherapy to minimise long-term toxicity in a young patient with bulky mediastinal Hodgkin lymphoma. Radiotherapy plays an important role in the treatment of early-stage Hodgkin lymphoma, but late toxicities such as cardiovascular disease and second malignancy are a major concern. Our aim was to evaluate the potential of deep inspiration breath-hold (DIBH) and intensity-modulated radiotherapy (IMRT) to reduce cardiac dose from mediastinal radiotherapy. A 24-year-old male with early-stage bulky mediastinal Hodgkin lymphoma received involved-site radiotherapy as part of a combined modality programme. Simulation was performed in free breathing (FB) and DIBH. The target and organs at risk were contoured on both datasets. Free breathing-3D conformal (FB-3DCRT), DIBH-3DCRT, FB-IMRT and DIBH-IMRT were compared with respect to target coverage and doses to organs at risk. A 'butterfly' IMRT technique was used to minimise the low-dose bath. In our patient, both DIBH (regardless of mode of delivery) and IMRT (in both FB and DIBH) achieved reductions in mean heart dose. DIBH improved all lung parameters. IMRT reduced high dose (V20), but increased low dose (V5) to lung. DIBH-IMRT was chosen for treatment delivery.
Advanced radiotherapy techniques have the potential to further optimise the therapeutic ratio in patients with mediastinal lymphoma. Benefits should be assessed on an individualised basis. abstract_id: PUBMED:33237825 Comparing second cancer risk for multiple radiotherapy modalities in survivors of Hodgkin lymphoma. Objectives: To assess if excess absolute risk (EAR) of radiation-induced solid cancer can be used to rank radiotherapy plans for treatment of Hodgkin lymphoma (HL) in a statistically significant way. Methods: EAR models, calibrated with data from the Life Span Study and HL survivors, have been incorporated into a voxelised risk-calculation software, which is used to compare four treatment modalities planned for five virtual HL patients. Organ-specific parameters are generated repeatedly in a Monte Carlo fashion to model their uncertainties. This in turn enables a quantitative estimation of the EAR uncertainties. Results: Parameter-driven uncertainties on total EAR are around 13%, decreasing to around 2-5% for relative EAR comparisons. Total EAR estimations indicate that intensity modulated proton therapy decreases the average risk by 40% compared to the intensity modulated radiation therapy plan and 28% compared to the volumetric modulated arc therapy plan, whereas the three-dimensional conformal radiation therapy plan is equivalent within the uncertainty. Conclusion: Relative EAR is a useful metric for distinguishing between radiotherapy plans in terms of second cancer risk. Advances In Knowledge: Relative EAR is not dominated by model or parameter uncertainties and can be used to guide the choice of radiotherapy for HL patients. Answer: Intensity-modulated radiotherapy (IMRT) has been compared to three-dimensional conformal radiotherapy (3D-CRT) in the treatment of early-stage Hodgkin lymphoma (HL) patients, with various studies highlighting both the advantages and potential concerns associated with IMRT.
IMRT has been shown to be capable of sparing normal tissue more effectively than 3D-CRT in certain respects. For example, IMRT has been associated with statistically significant lower dose volumes (V30) for the oesophagus and lungs, as well as a lower V20 for the spinal cord when compared to 3D-CRT. Additionally, the maximum dose to the spinal cord was lower with IMRT. These findings suggest that IMRT can be more accurate and feasible, particularly for male HL patients (PUBMED:22857015). However, IMRT also presents some challenges. It has been found to deliver higher volumes of low doses (V5 and V10) to the thyroid and breasts, which could potentially increase the risk of secondary malignant neoplasms (SMNs) and cardiovascular diseases. This is particularly concerning given that these late toxicities are the main cause of excess mortality for early-stage HL patients, especially when the mediastinum is involved (PUBMED:34358649). Therefore, while IMRT allows a substantial reduction of the volumes of organs at risk (OARs) exposed to high doses, the increase in volumes receiving low doses needs to be carefully considered (PUBMED:26744079). In terms of clinical outcomes, patients with localized supradiaphragmatic Hodgkin lymphoma treated with IMRT following the involved-node radiotherapy concept have shown satisfactory protection of various OARs and good survival rates, suggesting that IMRT can be safely and efficiently used (PUBMED:22116023). Overall, the decision to use IMRT over 3D-CRT should be made on an individual patient level, taking into account the specific disease characteristics and the potential for long-term toxicities. While IMRT has shown benefits in terms of normal tissue sparing at higher doses, the increased exposure to lower doses and the associated risks must be weighed against these benefits (PUBMED:23619032).
Instruction: Evidence into practice, experimentation and quasi experimentation: are the methods up to the task? Abstracts: abstract_id: PUBMED:36446436 Critically Appraising Evidence, Step 3: Quantitative Evidence-Quasi-Experimental or Nonrandomized Experimental Studies. Critical appraisal of the evidence is the third step in the evidence-based practice process. This column, the third in a multipart series to describe the critical appraisal process, focuses on critical appraisal of quasi-experimental or nonrandomized experimental studies. abstract_id: PUBMED:18444194 The reputation of medical education research: quasi-experimentation and unresolved threats to validity. Background: Medical education research is often criticized for its methodological flaws. This raises questions about the prospect of evidence-based medical education practice. Critics call for more rigorous research with randomization, greater control, and tight execution. But randomization and control are rarely achieved in applied field settings, the site of most medical education research. Consequently, research in medical education has relied on quasi-experimentation, which by definition is compromised methodologically. Summary: So is medical education research doomed to a reputation for flawed research that undermines the promise of evidence-based practice? We think not. We argue in this article that the problem is due to incomplete and uncritical use of quasi-experimentation. This practice ignores a critical step in the quasi-experimental process, a step that follows the design and execution of the research, that is, ruling out threats to validity posed by methodological flaws. Conclusion: Validity threats must be evaluated critically when drawing research conclusions, to establish a credible evidence base for medical education practice. abstract_id: PUBMED:31152433 The dual-task practice advantage: Empirical evidence and cognitive mechanisms. 
Practice of two simultaneous component tasks in dual-task situations leads to an improvement in dual-task performance. The present paper reviews empirical evidence for this practice-related improvement and discusses its underlying cognitive mechanisms. In particular, the robustness of the phenomenon of dual-task practice advantage (DTPA) is evaluated. This phenomenon is described as an advantage in dual-task performance at the end of dual-task practice (i.e., simultaneous task practice) in comparison to the dual-task performance after single-task practice (i.e., separate task practice). The review investigates the existence of empirical evidence of the DTPA phenomenon and specifications of its mechanisms in the context of the allocation and scheduling hypothesis and the integration hypothesis. Evidence for the existence of the DTPA phenomenon and the allocation and scheduling hypothesis was demonstrated in dual tasks with (1) combinations of two sensorimotor tasks, (2) combinations of one continuous task and one sensorimotor task, (3) combinations of two working-memory tasks, and (4) combinations of one motor task and one cognitive task. In contrast, the findings demonstrate that the DTPA phenomenon in dual tasks with (5) combinations of two long-term memory retrieval tasks can instead be explained by the integration hypothesis. The relevance of these findings is discussed in relation to real-world problems. abstract_id: PUBMED:33058281 Impact of the implementation of best practice guidelines on nurses' evidence-based practice and on nurses' work environment: Research protocol. Aim: To determine the impact of the Best Practice Spotlight Organization® initiative on nurses' perception of their work environment and their attitudes to evidence-based practice. Design: Quasi-experimental, multicentre study. The intervention is participation in Best Practice Spotlight Organizations to implement Best Practice Guidelines.
Methods: The study will include seven centres in the interventional group and 10 in the non-equivalent control group, all of them belonging to the Spanish national health system. The Practice Environment Scale of the Nursing Work Index and the Health Sciences Evidence-Based Practice Questionnaire will be administered to a sample of 1,572 nurses at the beginning of the programme and at 1 year. This 3-year study started in April 2018 and will continue until December 2021. Statistical analyses will be carried out using SPSS 25.0. This project was approved by the Drug Research Ethics Committee of the Parc de Salut Mar and registered in Clinical Trials. Discussion: The study findings will show the current state of nurses' perception of their work environment and attitudes to evidence-based practice, and possible changes in these parameters due to the programme. Impact: The findings could provide a strong argument for health policymakers to scale up the Best Practice Spotlight Organization® initiative in the Spanish national health system. abstract_id: PUBMED:32429332 Moving Evidence into Practice by Advanced Practice Nurses in Hospitalization Wards. Protocol for a Multicentre Quasi-Experimental Study in Three Hospitals in Spain. Evidence-based practice (EBP) combined with quality of care improves patient outcomes. However, there are still difficulties in its implementation in daily clinical practice. This project aims to evaluate the impact of the incorporation of the Advanced Practice Nurse (APN) role on the implementation of EBP at three levels: context, nurses' perceptions, and clinical outcomes. A mixed-methods study in two phases is proposed. Phase 1: a quasi-experimental design where five APNs are included in five hospitalization wards that are compared with another five similar wards without APNs.
Variables from the Practice Environment Scale of the Nursing Work Index, the Health Sciences Evidence-Based Practice Questionnaire, and the Advanced Practice Nursing Competency Assessment Instrument are used. Clinical outcomes are followed up monthly. A descriptive and exploratory analysis is performed. Phase 2: an exploratory qualitative design through focus groups at the intervention wards after one year of APN implementation. Explicative data are gathered to explain the progression of change and how actors perceive and attribute triggers, barriers, and facilitators for change. An inductive thematic analysis is performed. The inclusion of APNs in the hospitalization context is insufficiently studied. It is hoped that these roles provide solutions to the multiple barriers in the development of EBP in these settings and contribute to resolving the gap between research results and healthcare practice. abstract_id: PUBMED:9578853 Evidence into practice, experimentation and quasi experimentation: are the methods up to the task? Objective: Methodological review of evaluations of interventions intended to help health professionals provide more effective and efficient health care, motivated by the current experience of NHS Research and Development in England. Emphasis upon the forms of research appropriate to different stages in the development and evaluation of interventions, the use of experimental and quasi experimental designs, the methods used in systematic reviews and meta analyses. Method: A proposed development process is derived from that used in the development of drugs. The strengths and weaknesses of different experimental and quasi experimental designs are derived from published methodological literature and first principles. Examples are drawn from the literature. Results: Like pharmaceuticals, implementation interventions need to go through several stages of development before they are evaluated in designed experiments.
Where there are practical reasons that make random allocation impossible in quantitative evaluations, quasi experimental methods may provide useful information, although these studies are open to bias. It is rare for a single study to provide a complete answer to important questions, and systematic reviews of all available studies should be undertaken. Meta analytic techniques go some way towards countering the low power of many existing studies, reduce the risk of bias, and avoid the subjective approaches that may be found in narrative reviews. Conclusions: The initiative taken by NHS Research and Development in examining methods to promote the uptake of research findings is welcome, but will only prove helpful if careful attention is paid to the different stages of the development process, and different research approaches are used appropriately at different stages. abstract_id: PUBMED:20669538 Ethical issue in animal experimentation In the 1970s, under pressure from certain sections of society and thanks to initiatives by several scientific research teams, committees charged with improving the conditions of laboratory animals started to be created, first in the United States and subsequently in Europe. This led to the development of an ethical approach to animal experimentation, taking into account new scientific advances. In addition to the legislation designed to provide a legal framework for animal experimentation and to avoid abuses, this ethical approach, based on the concept that animals are sentient beings, encourages greater respect of laboratory animals and the implementation of measures designed to reduce their suffering. Now, all animal experiments must first receive ethical approval--from in-house committees in the private sector and from regional committees for public institutions. 
Very recently, under the impetus of the French ministries of research and agriculture, the National committee for ethical animal experimentation published a national ethical charter on animal experimentation, setting the basis for responsible use of animals for scientific research and providing guidelines for the composition and functioning of ethics committees. Inspired by the scientific community itself, this ethical standardization should help to assuage--but not eliminate--the reticence and hostility expressed by several sections of society. abstract_id: PUBMED:38409499 Practice effects on dual-task order coordination and its sequential adjustment. When the performance of two tasks overlaps in time, performance impairments in one or both tasks are common. Various theoretical explanations for how component tasks are controlled in dual-task situations have been advanced. However, less attention has been paid to the issue of how two temporally overlapping tasks are appropriately coordinated in terms of their order. The current study focuses on two specific aspects of this task-order coordination: (1) the potential effects of practice on task-order coordination performance and (2) its relationships with cognitive meta-control mechanisms that adjust this coordination. These aspects were investigated in a visual-auditory dual-task combination with randomly changing task orders across trials after four sessions of dual-task practice (N = 24) and single-task practice (N = 24). The results demonstrated that task-order coordination improves during dual-task practice, in contrast to the effects of single-task practice. Practice, on the other hand, did not show substantial evidence of an effect on the adjustment of task-order coordination. This practice-related dissociation is consistent with the assumption that (1) task-order coordination and (2) its sequential adjustment are separable sets of processes.
abstract_id: PUBMED:32086004 No evidence for the reduction of task competition and attentional adjustment during task-switching practice. Performance in task switching experiments is worse when the current stimulus is associated with different responses in the two tasks (i.e., incongruent condition) than when it is associated with the same response (i.e., congruent condition). This congruency effect reflects some sort of application of the irrelevant task's stimulus-response translation rules. Manipulating the recency and the proportion of congruent and incongruent trials results in a modulation of the congruency effect (i.e., Congruency Sequence Effect, CSE, and Proportion Congruency Effect, PCE, respectively), suggesting attentional adjustment of processing weights. Here, we investigated the impact of task switching practice on the congruency effect and the modulation thereof by (a) re-analyzing the data of a task switching experiment involving six consecutive sessions and (b) conducting a novel four-session experiment in which the proportions of congruent and incongruent trials were manipulated. Although practice appeared to reduce the reaction times overall and the task switch costs (i.e., slower reaction times after task switches than after task repetitions) to an asymptotic level, the congruency effect as well as its modulations remained remarkably constant. These findings thus do not provide evidence that conflict effects between tasks and attentional adjustment are affected by task switching practice. abstract_id: PUBMED:16719673 Evidence-based practice in psychology. The evidence-based practice movement has become an important feature of health care systems and health care policy. Within this context, the APA 2005 Presidential Task Force on Evidence-Based Practice defines and discusses evidence-based practice in psychology (EBPP). 
In an integration of science and practice, the Task Force's report describes psychology's fundamental commitment to sophisticated EBPP and takes into account the full range of evidence psychologists and policymakers must consider. Research, clinical expertise, and patient characteristics are all supported as relevant to good outcomes. EBPP promotes effective psychological practice and enhances public health by applying empirically supported principles of psychological assessment, case formulation, therapeutic relationship, and intervention. The report provides a rationale for and expanded discussion of the EBPP policy statement that was developed by the Task Force and adopted as association policy by the APA Council of Representatives in August 2005. Answer: The methods of experimentation and quasi-experimentation are crucial for translating evidence into practice, but they come with their own sets of strengths and limitations. Quasi-experimental or nonrandomized experimental studies are often used in fields where randomization and control are difficult to achieve, such as medical education research (PUBMED:18444194). While these methods can provide useful information, they are susceptible to bias and methodological flaws, which can undermine the credibility of the evidence base for practice (PUBMED:18444194; PUBMED:9578853). Quasi-experimental studies, such as those evaluating the impact of best practice guidelines on nurses' work environments and attitudes towards evidence-based practice (PUBMED:33058281), or the incorporation of Advanced Practice Nurses (APNs) on the implementation of evidence-based practice (PUBMED:32429332), are valuable for assessing interventions in real-world settings. However, it is essential to critically evaluate threats to validity and rigorously analyze the data to draw reliable conclusions (PUBMED:18444194). 
Empirical evidence from quasi-experimental studies can also shed light on cognitive mechanisms, such as the dual-task practice advantage, which demonstrates how practice in dual-task situations can lead to performance improvements (PUBMED:31152433; PUBMED:38409499). However, the persistence of certain effects, such as task competition and attentional adjustment during task-switching practice, may not be influenced by practice, indicating the complexity of cognitive processes and the need for nuanced research methods (PUBMED:32086004). In psychology, evidence-based practice integrates research, clinical expertise, and patient characteristics, emphasizing the importance of a comprehensive approach to evidence that includes not only empirical studies but also professional judgment and individual patient needs (PUBMED:16719673). Overall, while experimentation and quasi-experimentation are indispensable for advancing evidence-based practice, they must be applied with careful consideration of their limitations. Systematic reviews, meta-analyses, and critical appraisal of validity threats are necessary to establish a credible evidence base and ensure that the methods are up to the task of informing practice (PUBMED:9578853).
Instruction: Incidence of adrenal involvement and assessing adrenal function in patients with renal cell carcinoma: is ipsilateral adrenalectomy indispensable during radical nephrectomy? Abstracts: abstract_id: PUBMED:15705073 Incidence of adrenal involvement and assessing adrenal function in patients with renal cell carcinoma: is ipsilateral adrenalectomy indispensable during radical nephrectomy? Objective: To determine the value of ipsilateral adrenalectomy with radical nephrectomy, by investigating the clinical aspects of adrenal involvement and adrenocortical function in patients with renal cell carcinoma (RCC). Patients And Methods: The demographic, clinical and pathological data of adrenal involvement were reviewed in 247 patients with RCC. To evaluate adrenocortical function, 14 patients (adrenalectomy in eight, adrenal-sparing in six) had a rapid adrenocorticotropic hormone (ACTH) stimulation test before and 2 weeks after surgery. Results: There was adrenal involvement with RCC in seven of the 247 (2.8%) patients (a solitary adrenal metastatic tumour in four and direct extension into the adrenal gland in three). All adrenal involvement was detectable on abdominal computed tomography before surgery, and these patients had a large primary renal tumour of > pT2 and/or distant metastasis. Plasma cortisol levels declined significantly more in response to the rapid ACTH stimulation test in those treated by adrenalectomy than in those with spared adrenal glands at 2 weeks after surgery (P < 0.05), while there was no significant difference between the groups before surgery. Conclusions: These results suggest that unconditional ipsilateral adrenalectomy with radical nephrectomy for RCC should be avoidable, and thus preserve the reserve of adrenocortical function, as preoperative imaging, especially thin-slice multidetector helical computed tomography, can detect adrenal involvement with RCC in most cases. 
Unilateral adrenalectomy might cause an irreversible impairment of the reserve of adrenocortical function. abstract_id: PUBMED:2398555 On the necessity of adrenal extirpation combined with radical nephrectomy. Adrenal metastasis of renal cell carcinoma usually is an autopsy finding and seldom a clinical diagnosis. The incidence of ipsilateral suprarenal gland involvement also can be identified by histological examination of the organ with radical nephrectomy. The necessity of adrenalectomy performed in combination with nephrectomy is discussed. After a retrospective examination of our patients we discovered 8 adrenal metastases among 138 radical nephrectomies. Five patients who also had metastases in the lymph nodes (stages pN1 to pN2) died after an average of 26.4 months. Three patients without lymph node involvement are free of tumor. In these cases routine adrenalectomy must be regarded as a possible curative treatment. abstract_id: PUBMED:3944885 The adrenal gland and renal cell carcinoma: is ipsilateral adrenalectomy a necessary component of radical nephrectomy? The ipsilateral adrenal gland may be involved by renal cell carcinoma through direct invasion or embolic metastases, and usually it is removed as part of the radical nephrectomy specimen. We reviewed retrospectively 44 patients with stage A and 8 patients with stage B renal cell carcinoma, 25 of whom had undergone ipsilateral adrenalectomy. The 5-year and 9-year survival of these 25 patients was 79 and 65 per cent, respectively, and was not statistically different from the 78 per cent 5-year and 9-year survival of those 27 patients who did not undergo ipsilateral adrenalectomy. These findings suggest that the ipsilateral adrenal gland need not be removed routinely as part of perifascial nephrectomy for renal cell carcinoma. 
Those patients who are found to have a contralateral adrenal lesion during preoperative evaluation should be spared bilateral adrenalectomy if there is no ipsilateral adrenal gland involvement at exploration. Those patients with lower pole lesions also need not undergo routine ipsilateral adrenalectomy. Preoperative computerized tomography scans will help to select those tumors that can be managed by perifascial nephrectomy with sparing of the ipsilateral adrenal gland. abstract_id: PUBMED:23106893 Investigation on the indication of ipsilateral adrenalectomy in radical nephrectomy: a meta-analysis. Background: With a trend that renal tumors are being detected at an earlier stage, classical radical nephrectomy is being reconsidered. More conservative techniques are being proposed. To clarify the indication for synchronous adrenalectomy in radical nephrectomy for renal cell carcinoma which has been questioned since the 1980s, this study evaluates the role of adrenalectomy and recommends a new indication for adrenalectomy in renal cell carcinoma. Methods: A systemic search was performed, using PubMed and Google Scholar, of all English language studies published up to March 2012 that compared adrenalectomy with adrenal-sparing surgery, in surgery for renal cell carcinoma. We assessed preoperative imaging for adrenal involvement and the relationship of tumor location with adrenal metastases. Twenty-one studies (20 retrospective and 1 prospective) involving 11 736 patients were included. Results: The mean incidence of ipsilateral adrenal involvement from renal cell carcinoma was 4.5%. Synchronous adrenalectomy did not alter survival (hazard ratio (HR) = 0.89, 95% confidence interval (CI) 0.67 - 1.19, P = 0.43; odds ratio (OR) = 1.10, 95%CI 0.84 - 1.44, P = 0.49). Upper pole tumors were not associated with a higher incidence of ipsilateral adrenal metastases. 
Pooled preoperative imaging: sensitivity, specificity, positive predictive value and negative predictive value were 92% (95%CI 0.84 - 0.97), 95% (95%CI 0.93 - 0.96), 71.6% and 98.5% respectively. Conclusions: Adrenal involvement from renal cell carcinoma is rare, even in advanced tumours. Synchronous adrenalectomy does not offer any benefit, even for "high risk" patients. We suggest that only patients with a positive preoperative adrenal finding on preoperative imaging for a solitary adrenal metastasis should undergo adrenalectomy as part of the radical nephrectomy. abstract_id: PUBMED:19038420 Simultaneous adrenalectomy during radical nephrectomy for renal cell carcinoma will not cure patients with adrenal metastasis. Objectives: To present data from all patients with adrenal involvement after prolonged follow-up and to revise our advice given in 1999. In 1999, we published our results for a large series of patients with adrenal metastasis from renal cell carcinoma. Methods: The charts of 617 patients who had undergone radical nephrectomy with simultaneous adrenalectomy for renal cell carcinoma at the Department of Urology, Philipps-University Medical School, Marburg from 1985 to 1999 were retrospectively reviewed. In 1999, 23 of 617 patients (3.7%) were found to have adrenal metastasis. The 23 patients included 16 with unilateral ipsilateral adrenal metastasis only, 1 with unilateral contralateral metastasis, and 6 with bilateral adrenal involvement. The postoperative course of the 23 patients has been updated regarding progression and survival after surgery. Results: After a mean follow-up of 59.1 months (range 1.1-156.7), only 5 patients were still alive, all with progressive disease. With a mean interval to death of 41.7 months (range 1.1-126.0), 18 patients had died, 17 of whom had cancer progression. One patient died without signs of disease recurrence 49.1 months after radical nephrectomy and simultaneous ipsilateral adrenalectomy. 
The mean time to progression was 34.2 months (range 0-91.5). Conclusions: With these data available, we are now aware that we cannot cure patients with adrenal metastasis by incorporating simultaneous ipsilateral adrenalectomy into routine radical nephrectomy for renal cell carcinoma. The routine incorporation of ipsilateral adrenalectomy should, therefore, be abandoned. abstract_id: PUBMED:10647649 Is adrenalectomy a necessary component of radical nephrectomy? UCLA experience with 511 radical nephrectomies. Purpose: We determine the incidence and characteristics of adrenal involvement in localized and advanced renal cell carcinoma, and evaluate the role of adrenalectomy as part of radical nephrectomy. Materials And Methods: The records of 511 patients undergoing radical nephrectomy with ipsilateral adrenalectomy for renal cell carcinoma at our medical center between 1986 and 1998 were reviewed. Mean patient age was 63.2 years (range 38 to 85), and 78% of the subjects were males and 22% were females. Patients were divided into subgroups of 164 with localized (stage T1-2 tumor, group 1) and 347 with advanced (stage T3-4N01M01, group 2) renal cell carcinoma. Staging of tumors was performed according to the 1997 TNM guidelines. A retrospective review of preoperative computerized tomography (CT) of the abdomen was performed. Radiographic findings were subsequently compared to postoperative histopathological findings to assess the predictive value of tumor characteristics and imaging in determining adrenal metastasis. Results: Of the 511 patients, 29 (5.7%) had adrenal involvement. Average size of the adrenal tumor was 3.86 cm (standard deviation 1.89). Tumor stage correlated with probability of adrenal spread, with T4, T3 and T1-2 tumors accounting for 40%, 7.8% and 0.6% of cases, respectively. The most likely route of spread for upper pole intrarenal renal cell carcinoma was local extension to the adrenal glands, representing 58.6% of adrenal involvement.
In contrast, multifocal, lower pole and mid region renal cell carcinoma tumors metastasized hematogenously, representing 32%, 7% and 4% of adrenal metastasis, respectively. The relationship between intrarenal tumor size (mean 8.9 cm., range 3 to 17) and adrenal involvement (independent of stage) was not statistically significant. Renal vein thrombus involvement was demonstrated in 8 of 12 cases (67%) with left and 2 of 9 (22%) with right adrenal involvement. Preoperative CT demonstrated 99.6% specificity, 99.4% negative predictive value, 89.6% sensitivity and 92.8% positive predictive value for adrenal involvement by renal cell carcinoma. Conclusions: With a low incidence of 0.6%, adrenal involvement is not likely in patients with localized, early stage renal cell carcinoma and adrenalectomy is unnecessary, particularly when CT is negative. In contrast, the 8.1% incidence of adrenal involvement with advanced renal cell carcinoma supports the need for adrenalectomy. Careful review of preoperative imaging is required to determine the need for adrenalectomy in patients at increased risk with high stage lesions, renal vein thrombus and upper pole or multifocal intrarenal tumors. With a negative predictive value of 99.4%, negative CT should decrease the need for adrenalectomy. In contrast, positive findings are less reliable given the relatively lower positive predictive value of this imaging modality. Although such positive findings may raise suspicion of adrenal involvement, they may not necessarily indicate adrenalectomy given the low incidence, unless renal cell carcinoma with risk factors, such as high stage, upper pole location, multifocality and renal vein thrombus, is present. abstract_id: PUBMED:12837419 Low incidence of ipsilateral adrenal involvement and recurrences in patients with renal cell carcinoma undergoing radical nephrectomy: a retrospective analysis of 393 patients. 
Objectives: To evaluate, in a retrospective analysis of the outcome of 393 consecutive patients undergoing radical nephrectomy, the advantages and disadvantages of concomitant ipsilateral adrenalectomy with this operation. Methods: The medical records, pathologic specimens, and preoperative and postoperative computed tomography scans of 165 patients with, and 228 patients without, concomitant adrenalectomy were reviewed. The incidence of adrenal involvement in the former patients and ipsilateral adrenal recurrence in the latter patients was evaluated. The influence of adrenalectomy on the disease-specific survival was also assessed by both univariate and multivariate analyses. Results: Of the 165 patients, only 5 (3.0%) had adrenal involvement. All of these cases were diagnosed as cT3 or greater preoperatively, and preoperative computed tomography detected 4 of these 5 cases. Of the 228 patients without adrenalectomy, no ipsilateral adrenal recurrence was observed at a mean follow-up of 65.2 months. Ipsilateral adrenalectomy did not confer a favorable prognosis on the patients. Conclusions: Our results indicate that the advantages of ipsilateral adrenalectomy in patients with normal findings on preoperative computed tomography are limited. Concomitant ipsilateral adrenalectomy is indicated in cases such as locally advanced tumors with uncertain preoperative imaging studies or those with apparent adhesion or inflammation around the adrenal gland at surgery, thus suggesting perinephric tumor involvement. abstract_id: PUBMED:12010320 Adrenal metastasis from renal cell carcinoma: significance of adrenalectomy. Background: The present study examined adrenal metastasis resulting from renal cell carcinoma (RCC), with the aim of assessing the need for routine ipsilateral adrenalectomy during radical nephrectomy. 
Methods: Ipsilateral and contralateral adrenal metastases were investigated in 256 patients with RCC who had undergone radical nephrectomy from 1977 to 1996 at the Tohoku University School of Medicine. Results: Twelve of the 256 patients (4.7%) had adrenal metastasis. Ten of these 12 patients had progressed to disseminated disease with very poor prognosis. Two patients who had solitary adrenal metastases remained disease-free for 21 and 7 years. Four patients showed metastases to the contralateral adrenal gland. Adrenal metastases in seven of 12 patients were identified by pre- or postoperative computed tomography (CT), and in another five macroscopically during surgery. Conclusions: Adrenalectomy was regarded as a possible curative treatment for patients with solitary adrenal metastasis. However, the incidence of this kind of metastasis was minimal and contralateral adrenal metastases may occur in RCC cases. We therefore believe that adrenalectomy should only be performed if radiographic evidence reveals metastases in the adrenal gland or if gross disease is present at the time of nephrectomy. abstract_id: PUBMED:19286216 The necessity of adrenalectomy at the time of radical nephrectomy: a systematic review. Purpose: We describe the literature base pertaining to adrenalectomy at radical nephrectomy and present a pragmatic approach based on primary tumor and disease characteristics. Materials And Methods: Literature searches were performed via the National Center for Biotechnology Information databases using various keywords. Articles that pertained to the concomitant use of adrenalectomy with radical nephrectomy were surveyed. Results: The incidence of solitary, synchronous, ipsilateral adrenal involvement, ie that which is potentially curable with ipsilateral adrenalectomy along with nephrectomy, is much lower than previously thought at 1% to 5%. 
Evidence to date supports increased size and T stage, multifocality, upper pole location and venous thrombosis as risk factors for adrenal involvement. Cross-sectional imaging is now accurate at demonstrating the absence of adrenal involvement but still carries a significant risk of false-positives. The morbidity of adrenalectomy is minimal except in those patients with metachronous contralateral adrenal metastasis in whom the impact of adrenal insufficiency can be devastating. Disease specific and overall survival of those undergoing radical nephrectomy, with or without adrenalectomy, are similar. The survival of patients with widespread metastatic disease is historically poor regardless of whether adrenalectomy is performed. There is evidence for a survival advantage in patients with isolated adrenal metastasis, although this group comprises no more than 2% of those undergoing surgery for renal tumors. Conclusions: The apparent benefit of ipsilateral adrenalectomy does not support it as a standard practice in all patients with normal imaging. However, it should be considered in select cases in which there are risk factors for adrenal involvement. abstract_id: PUBMED:19371896 Management of the adrenal gland during partial nephrectomy. Purpose: Nephron sparing surgery is an increasingly used alternative to Robson's radical nephroadrenalectomy. The indications for adrenalectomy in patients undergoing partial nephrectomy are not clearly defined and some surgeons perform it routinely for large and/or upper pole renal tumors. We analyzed initial management and oncological outcomes of adrenal glands after open partial nephrectomy. Materials And Methods: Institutional review board approval was obtained for this study. During partial nephrectomy the ipsilateral adrenal gland was resected if a suspicious adrenal nodule was noted on radiographic imaging, or if intraoperative findings indicated direct extension or metastasis. 
Results: Concomitant adrenalectomy was performed in 48 of 2,065 partial nephrectomies (2.3%). Pathological analysis revealed direct invasion of the adrenal gland by renal cell carcinoma (1), renal cell carcinoma metastasis (2), other adrenal neoplasms (3) or benign tissue (42, 87%). During a median followup of 5.5 years only 15 patients underwent subsequent adrenalectomy (0.74%). Metachronous adrenalectomy was ipsilateral (10), contralateral (2) or bilateral (3), revealing metastatic renal cell carcinoma in 11 patients. Overall survival at 5 years in patients undergoing partial nephrectomy with or without adrenalectomy was 82% and 85%, respectively (p = 0.56). Conclusions: Adrenalectomy should not be routinely performed during partial nephrectomy, even for upper pole tumors. We propose concomitant adrenalectomy only if a suspicious adrenal lesion is identified radiographically or invasion of the adrenal gland is suspected intraoperatively. Using these criteria adrenalectomy was avoided in more than 97% of patients undergoing partial nephrectomy. Even using such strict criteria only 13% of these suspicious adrenal nodules contained cancer. The rarity of metachronous adrenal metastasis and the lack of an observable benefit to concomitant adrenalectomy support adrenal preservation during partial nephrectomy except as previously outlined. Answer: The incidence of adrenal involvement in patients with renal cell carcinoma (RCC) undergoing radical nephrectomy is relatively low, with studies reporting rates ranging from 2.8% to 5.7% (PUBMED:15705073, PUBMED:10647649). Ipsilateral adrenalectomy has traditionally been performed during radical nephrectomy due to concerns about adrenal metastasis or direct invasion by the renal tumor. 
However, the necessity of this practice has been questioned, as the majority of adrenal involvement can be detected preoperatively using imaging techniques such as computed tomography (CT) scans (PUBMED:15705073, PUBMED:10647649, PUBMED:12837419). Several studies have found that ipsilateral adrenalectomy may not be indispensable for all patients undergoing radical nephrectomy for RCC. For instance, one study reported that plasma cortisol levels declined significantly more in patients treated by adrenalectomy than in those with spared adrenal glands at 2 weeks after surgery, suggesting that unilateral adrenalectomy might cause an irreversible impairment of the reserve of adrenocortical function (PUBMED:15705073). Another study concluded that adrenalectomy should only be performed if radiographic evidence reveals metastases in the adrenal gland or if gross disease is present at the time of nephrectomy (PUBMED:12010320). A meta-analysis indicated that synchronous adrenalectomy did not alter survival and that adrenal involvement from RCC is rare, even in advanced tumors (PUBMED:23106893). Furthermore, a systematic review suggested that the benefit of ipsilateral adrenalectomy does not support it as a standard practice in all patients with normal imaging, although it should be considered in select cases with risk factors for adrenal involvement (PUBMED:19286216). In conclusion, the current evidence suggests that routine ipsilateral adrenalectomy during radical nephrectomy for RCC may not be necessary for all patients, particularly when preoperative imaging does not indicate adrenal involvement. Adrenalectomy should be considered on a case-by-case basis, taking into account factors such as tumor characteristics and the presence of adrenal lesions on imaging.
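Several abstracts in this block summarize CT accuracy as sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). All four figures derive from a single 2x2 confusion matrix; a minimal sketch of the arithmetic follows (the cell counts are illustrative, since the abstracts report only the percentages, not the underlying counts):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test metrics, returned as fractions."""
    return {
        "sensitivity": tp / (tp + fn),  # diseased patients correctly flagged
        "specificity": tn / (tn + fp),  # disease-free patients correctly cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts only -- NOT the studies' raw data.
m = diagnostic_metrics(tp=43, fp=3, fn=5, tn=949)
print({k: round(v, 3) for k, v in m.items()})
# sensitivity 0.896, specificity 0.997, ppv 0.935, npv 0.995
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how common the condition is in the studied population, which is why a low-prevalence finding such as adrenal involvement yields a very high NPV even with imperfect sensitivity.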
Instruction: Ultrasonographically detected gallbladder polyps: a reason for concern? Abstracts: abstract_id: PUBMED:18793401 Ultrasonographically detected gallbladder polyps: a reason for concern? A seven-year follow-up study. Background: The management of coincidentally detected gallbladder polyps (GP) is still nebulous. There are few published data regarding their long-term growth. The objective of the present study was to investigate the prevalence and growth of gallbladder polyps in a survey of unselected subjects from the general population of a complete rural community. Methods: A total of 2,415 subjects (1,261 women; 1,154 men) underwent ultrasound examination of the gallbladder, in November 1996 as part of a prospective study. Subjects in whom GP were detected at the initial survey underwent follow-up ultrasound examinations after 30 and 84 months. Results: At the initial survey gallbladder polyps were detected in 34 subjects (1.4%; females: 1.1%, range 14 to 74 years; males: 1.7%, range 19 to 63 years). Median diameter was 5 +/- 2.1 mm (range 2 to 10 mm) at the initial survey, 5 +/- 2.8 mm (range 2 to 12 mm) at 30 months and 4 +/- 2.3 mm (range 2 to 9 mm) at 84 months. At the time of first follow-up no change in diameter was found in 81.0% (n = 17), reduction in diameter in 4.8% (n = 1) and increase in diameter in 14.3% (n = 3). At the time of second follow-up no increase in polyp diameter was found in 76.9% (n = 10) and reduction in diameter in 7.7% (n = 1). No evidence of malignant disease of the gallbladder was found. Conclusion: Over a period of seven years little change was measured in the diameter of gallbladder polyps. There was no evidence of malignant disease of the gallbladder in any subject. abstract_id: PUBMED:18022445 The natural history and significance of ultrasonographically defined polypoid lesions of the gallbladder in children.
Background/purpose: Gallbladder polyps are relatively common in adults and may increase the risk of gallbladder cancer. The increased use of high-quality abdominal ultrasonography has likely led to the increased detection of these lesions in the pediatric population. The aim of the present study was to address the significance of the ultrasonographic finding of a gallbladder polyp in children. Methods: This prospective case series of 4 children with ultrasonographically defined gallbladder polyps outlines their presentation and clinical course. A detailed review of the literature and a management approach have been provided. Results: In the 4 cases, polyps were followed for 2, 5, 72, and 84 months with a clinical follow-up of 7 to 11 years. Three underwent cholecystectomy and only 1 of 3 had a polypoid gallbladder lesion. One child, followed for 7 years, had no change in size, shape, or location of the polyp. This case series adds to the small group of children in the literature with polypoid gallbladder lesions and, to our knowledge, none have developed malignant transformation. Conclusions: Clearly more studies on such children are required to further add to our knowledge on the natural history and the appropriate management of these lesions. abstract_id: PUBMED:10759734 Surgical management of polypoid lesions of the gallbladder. Background: With improvements in ultrasonography, more polypoid lesions of the gallbladder (PLGs) are being detected. The management of these is controversial. Methods: The demographic, radiological and pathological data of 38 patients with ultrasonographically detected PLGs were reviewed. A Medline search for such lesions was performed and a review of the literature is presented. Results: Thirty-four patients underwent cholecystectomy and four were advised against or declined operation. Of the 34 who had cholecystectomy, 11 had macroscopic and histopathologically proven PLGs.
Of these, seven had cholesterol polyps, two had adenomas, one had a carcinoid tumour and one had an adenocarcinoma of the gallbladder. One patient had a histopathologically normal gallbladder. The remainder had chronic cholecystitis with or without gallstones. All of the patients with neoplastic lesions of the gallbladder had solitary polyps greater than 1.0 cm in diameter. Conclusion: A protocol for the management of ultrasonographically detected PLGs is proposed. In this protocol it is suggested that patients with a PLG should undergo surgery if they are symptomatic, or if the PLG is 1.0 cm or more in diameter. abstract_id: PUBMED:35770960 Prevalence and Clinical Significance of Incidental Focal 18F-FDG Uptake in Colon on PET/CT Imaging. Objectives: The present study aimed to identify the prevalence of focal uptake in the colon on fluorine-18-fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) studies performed for the evaluation of malignancies other than colon, to detect the rate of malignancy in incidental focal 18F-FDG-avid colonic lesions, and to investigate whether maximum standardized uptake value (SUVmax) measurements have any role in the discrimination of malignant lesions from premalignant and benign ones. Methods: We retrospectively reviewed the files of 8,017 patients with known or suspected malignancy, who underwent whole-body 18F-FDG PET/CT at our institution during the period November 2017 to November 2019. Patients showing a single site of focally increased colonic 18F-FDG uptake that was more intense than liver uptake on 18F-FDG PET studies and who were referred to colonoscopy were enrolled in the study. Results: Fifty-two patients (83.8%) had at least 1 corresponding lesion on colonoscopy, whereas in 10 patients no lesion was detected.
Subsequent histopathological examinations revealed no corresponding lesion in 13 (13.7%), a benign lesion in 18 (18.9%), hyperplastic polyp in 10 (10.5%), low-grade polyp in 16 (16.8%), high-grade polyp in 29 (30.5%) and malignant lesion in 9 (9.5%) of the focal 18F-FDG uptake sites. According to histopathology results, no statistically significant difference was found between the SUVmax measurements of malignant and benign cases (p>0.05), but the average SUVmax measurements of malignant cases were found to be significantly higher than those of low- and high-grade polyp cases (p<0.05) and hyperplastic polyp cases (p<0.01). Conclusion: Any unexpected focal 18F-FDG uptake in 18F-FDG PET/CT studies is suspicious for malignancy and should be clarified by colonoscopy. The intensity of 18F-FDG uptake does not preclude colonoscopy and histopathological verification of the lesion, if one is present. abstract_id: PUBMED:27465099 The risk of malignancy in ultrasound detected gallbladder polyps: A systematic review. Introduction: Gallbladder polyps (GBPs) are a common incidental finding on ultrasound (US) examination. The malignant potential of GBPs is debated, and there is limited guidance on surveillance. This systematic review sought to assess the natural history of ultrasonographically diagnosed GBPs and their malignant potential. Methods: The keywords: "Gallbladder" AND ("polyp" OR "polypoid lesion") were used to conduct a search in four reference libraries to identify studies which examined the natural history of GBPs diagnosed by US. Twelve studies were eligible for inclusion in this review. Results: Of the 5482 GBPs reported, malignant GBPs had an incidence of just 0.57%. True GBPs had an incidence of 0.60%. Sixty-four patients with adenomatous or malignant polyps were reported. Only in one patient was a malignant GBP reported to be <6 mm.
Risk factors associated with increased risk of malignancy were GBP >6 mm, single GBPs, symptomatic GBPs, age >60 years, Indian ethnicity, gallstones and cholecystitis. Conclusion: With the reported incidence of GBP malignancy at just 0.57%, a management approach based on risk assessment, clear surveillance planning, and multidisciplinary team (MDT) discussion should be adopted. The utilization of endoscopic ultrasound (EUS) should only be considered on the grounds of its greater sensitivity and specificity when compared to US scans. abstract_id: PUBMED:33153368 Morbidity and mortality after major large bowel resection of non-malignant polyp among participants in a population-based screening program. Background And Aims: Colonoscopy following positive fecal occult blood screening may detect non-malignant polyps deemed to require major large bowel resection. We aimed to estimate the major inpatient morbidity and mortality associated with major resection of non-malignant polyps detected at colonoscopy following positive guaiac fecal occult blood screening in Ontario's population-based colorectal screening program. Methods: We identified those without a diagnosis of colorectal cancer in the Ontario Cancer Registry ≤24 months following the date of colonoscopy prompted by positive fecal occult blood screening between 2008 and 2017, who underwent a major large bowel resection ≤24 months after the colonoscopy, with a diagnosis code for non-malignant polyp, in the absence of a code for any other large bowel diagnosis. We extracted records of major inpatient complications and readmissions ≤30 days following resection. We computed mortality within 90 days following resection. Results: For those undergoing colonoscopy ≤6 months following positive guaiac fecal occult blood screening, 420/127,872 (0.03%) underwent major large bowel resection for a non-malignant polyp.
In 50/420 (11.9%), the resection included one or more rectosigmoid or rectal polyps, with or without a colonic polyp. There were one or more major inpatient complications or readmissions within 30 days in 117/420 (27.9%). Death occurred within 90 days in 6/420 (1.4%). Conclusions: Serious inpatient complications and readmissions following major large bowel resection for non-malignant colorectal polyps are common, but mortality ≤90 days following resection is low. These outcomes should be considered as unintended adverse consequences of population-based colorectal screening programs. abstract_id: PUBMED:9074188 Transvaginal Ultrasonography and Hysteroscopy in the Diagnosis of Endometrial Abnormalities Two hundred women (122 premenopausal, 78 postmenopausal) with abnormal uterine bleeding underwent transvaginal ultrasonography, hysteroscopy, and endometrial sampling by curettage or directed biopsy. The ultrasonographic findings were evaluated on the basis of the final diagnosis established by hysteroscopy and histologic examination. In 31 women hysteroscopy detected polyps or submucous myomas, which curettage failed to detect. Evaluating the endometrium ultrasonographically, we used cut-off points of 5 mm (single layer endometrial thickness, follicular phase) for the premenopausal patients and 4 mm (single layer endometrial thickness) for the postmenopausal women. Transvaginal ultrasound had a sensitivity of 88% in diagnosing endometrial abnormalities in premenopausal women and 96% in postmenopausal women. Given these results, transvaginal ultrasound seems to be an excellent first diagnostic method for evaluating the endometrium in the office. abstract_id: PUBMED:2952138 Radiography and ultrasonography in paranasal sinusitis. In order to analyze whether ultrasonography can, with a reasonable degree of confidence, replace radiography in the diagnosis of sinusitis, 138 patients with clinical signs of sinusitis were examined with both methods.
It was found that maxillary sinus fluid was recognized ultrasonographically with a confidence that increased with the amount of fluid, judged from radiographic examinations. In a sub-group of 45 cases, fluid confirmed by maxillary sinus puncture was detected by ultrasonography in 35/45 sinuses (78%) and by radiology in 38/45 sinuses (84%). In patients with radiographically normal maxillary sinuses, the correlation to ultrasound was good. However, mucosal swelling and polyps or cysts observed at radiography were poorly demonstrated by ultrasonography. In addition, the ultrasound method was not reliable for frontal sinus diagnosis. It was concluded that ultrasonography can be recommended in maxillary sinusitis for follow-up of treatment and as a screening method before sinus radiography. abstract_id: PUBMED:16148644 Polyps in the exstrophic bladder. A cause for concern? Purpose: The role of environmental injury in carcinogenesis is widely recognized. Malignancy in exstrophic bladders has been reported most frequently in untreated adults and those undergoing surgical treatments which involve the mixing of fecal and urinary streams. The question of whether the closed exstrophic bladder has a similar potential for malignancy has not been resolved. The polypoid appearance of the exstrophic bladder template raises the concern of premalignant lesions. We characterized the histology of these lesions and analyzed their microscopic features with particular reference to predisposition for dysplasia. In doing so, we attempt to address the aforementioned question and set the stage for definitive quantification of the risk of malignancy in these patients with careful, long-term followup. Materials And Methods: Under institutional board review, the slides of 38 patients with classic bladder exstrophy who had polyps excised at the time of closure were reviewed by a single genitourinary pathologist (JIE). 
The most common findings were reported for polyps resected at primary and secondary closure, respectively, and a comparative analysis was performed. Results: Of the 38 cases, 24 were primary closures and 14 were secondary closures. Six of the primary closures were delayed by 6 weeks or greater. The 2 basic types of polyps observed were fibrotic and edematous. Both types were associated with overlying reactive squamous metaplasia in approximately 50% of cases. Varying degrees of von Brunn's nests, cystitis cystica and cystitis glandularis were noted. Cystitis glandularis was observed in a significantly greater percentage of secondary closures (p = 0.0014). Conclusions: Although no dysplasia was noted, cystitis glandularis is associated with the development of adenocarcinoma of the bladder. The finding of cystitis glandularis suggests a more severe epithelial injury and it follows that the significant majority of these cases (10 of 14, 71.4%) were observed with polyps resected during secondary closure. These patients warrant future surveillance with urine cytology and cystoscopy as they enter adult life. abstract_id: PUBMED:26855926 The Colonoscopist's Expertise Affects the Characteristics of Detected Polyps. Background/aims: The influence of the endoscopist on the polyp detection rate (PDR) is underappreciated in clinical practice. Moreover, flat lesions or lesions of the proximal colon are more difficult to detect. Here, we evaluated the differences in the PDR and the characteristics of detected polyps according to the experience of the colonoscopist. Methods: We collected data on 2,549 patients who underwent screening colonoscopy performed by three fellows. The PDR was calculated according to the percentage of patients who had at least one polyp (method A) and according to the percentage of detected lesions (method B).
The primary outcome included the change in the PDR, and the secondary outcome included the change in the characteristics of the detected polyps with increasing experience of the colonoscopist. Results: No proportional correlation was found between the PDR and increasing experience in colonoscopy with method A; however, with method B, the PDR increased after 400 colonoscopies (p=0.0209). With method B, the detection rates of small polyps (<5 mm) (p=0.0015) and polyps in proximal sites (p=0.0050) increased after 300 colonoscopies. Conclusions: Our study demonstrated that the quality of a colonoscopy, measured by using the PDR, may increase when performed by experienced fellows. Answer: Ultrasonographically detected gallbladder polyps (GPs) have been a subject of concern due to the potential risk of malignancy. However, the evidence from the provided abstracts suggests that the risk of malignancy in gallbladder polyps is relatively low, and the management of these polyps should be based on risk assessment and clear surveillance planning. A seven-year follow-up study on the general population showed that there was little change in the diameter of gallbladder polyps over time, and no evidence of malignant disease of the gallbladder was found in any subject (PUBMED:18793401). Similarly, in children, a case series indicated that none of the children with polypoid gallbladder lesions developed malignant transformation over a follow-up period of up to 11 years (PUBMED:18022445). A systematic review assessing the natural history of ultrasonographically diagnosed GBPs found that the incidence of malignant GBPs was only 0.57%. Risk factors associated with an increased risk of malignancy included polyps larger than 6mm, single GBPs, symptomatic GBPs, age over 60 years, Indian ethnicity, gallstones, and cholecystitis (PUBMED:27465099). 
Another study proposed a management protocol suggesting that patients with a PLG should undergo surgery if they are symptomatic or if the PLG is 1.0 cm or more in diameter (PUBMED:10759734). In conclusion, while ultrasonographically detected gallbladder polyps can be a cause for concern due to the potential risk of malignancy, the actual risk appears to be low based on the available evidence. Management decisions should be individualized, taking into account the size of the polyps, patient symptoms, and other risk factors. Regular surveillance and multidisciplinary team discussions are recommended for the management of these lesions.
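The colonoscopist abstract above (PUBMED:26855926) defines the polyp detection rate (PDR) two ways: method A counts patients with at least one polyp, while method B is based on the number of detected lesions. A toy calculation shows why the two can diverge; note that the per-100-patients reading of method B is an assumption here, since the abstract does not spell out its denominator:

```python
def pdr_method_a(polyp_counts):
    """Percentage of examined patients with at least one polyp."""
    return 100.0 * sum(1 for n in polyp_counts if n >= 1) / len(polyp_counts)

def pdr_method_b(polyp_counts):
    """Detected lesions per 100 examined patients (assumed reading of method B)."""
    return 100.0 * sum(polyp_counts) / len(polyp_counts)

# Polyps found in five hypothetical patients
counts = [0, 1, 3, 0, 2]
print(pdr_method_a(counts))  # 60.0  -- 3 of 5 patients had a polyp
print(pdr_method_b(counts))  # 120.0 -- 6 lesions across 5 patients
```

Because method A saturates once a patient has a single polyp, only a lesion-based measure like method B can register the extra small or proximal polyps that the abstract reports experienced colonoscopists detect.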
Instruction: Gender and patient complaints: are they related? Abstracts: abstract_id: PUBMED:22936479 Gender (role) aspects in doctor-patient communication Aspects of gender and gender roles are important factors influencing the interactions between physicians and their patients. On the one hand, gender roles have an impact on the behavior of the patients, such as in health care utilization or use of preventive examinations. On the other hand, gender issues influence doctors' actions with respect to communication, diagnosis, and treatment. Here, a gender bias may lead to misdiagnosis and inadequate treatment. In this paper certain pertinent aspects of gender roles in the doctor-patient relationship are discussed and illustrated by empirical findings. abstract_id: PUBMED:20963672 Gender-specific aspects of the physician-patient interaction This article aims at shedding light on the role of physician and patient gender in the medical consultation. Because studies concentrating on gender aspects of the physician-patient interaction in rehabilitation or chronic disease are scarce, results are reported mostly from general medicine. Female physicians have a more emotional and less dominant communication style. Female patients bring up more psychosocial topics and disclose more information about themselves in general. Both female and male physicians give more information and apply a more partnership-oriented communication style when seeing a female patient. Female and male patients communicate in a more partnership-oriented way with female physicians and share more psychosocial and medical information with them. Same-gender dyads seem beneficial most of the time for physician-patient communication. Mixed-gender dyads are more difficult, especially when a younger female physician sees a male patient. There is no single good communication style recommendable for all physicians.
Rather, the research results presented should be applied to communication training for physicians. This could provide physicians with a flexible choice of communication styles to apply according to different situations. abstract_id: PUBMED:33069544 Association between patient-physician gender concordance and patient experience scores. Is there gender bias? Background: Patient satisfaction, a commonly measured indicator of quality of care and patient experience, is often used in physician performance reviews and promotion decisions. Patient satisfaction surveys may introduce gender-related bias. Objective: Examine the effect of patient and physician gender concordance on patient satisfaction with emergency care. Methods: We performed a cross-sectional analysis of electronic health record and Press Ganey patient satisfaction survey data of adult patients discharged from the emergency department (2015-2018). Logistic regression models were used to examine relationships between physician gender, patient gender, and physician-patient gender dyads. Binary outcomes included: perfect care provider score and perfect overall assessment score. Results: Female patients returned surveys more often (n=7,612; 61.55%) and accounted for more visits (n=232,024; 55.26%). Female patients had lower odds of perfect scores for provider score and overall assessment score (OR: 0.852, 95% CI: 0.790, 0.918; OR: 0.782, 95% CI: 0.723, 0.846). Female physicians had 1.102 (95% CI: 1.001, 1.213) times the odds of receiving a perfect provider score. Physician gender did not influence male patients' odds of reporting a perfect care provider score (95% CI: 0.916, 1.158) whereas female patients treated by female physicians had 1.146 times the odds (95% CI: 1.019, 1.289) of a perfect provider score. Conclusion: Female patients prefer female emergency physicians but were less satisfied with their physician and emergency department visit overall.
Over-representation of female patients on patient satisfaction surveys introduces bias. Patient satisfaction surveys should be deemphasized from physician compensation and promotion decisions. abstract_id: PUBMED:33469761 Is Patient-Physician Gender Concordance Related to the Quality of Patient Care Experiences? Background: There is great interest in identifying factors that are related to positive patient experiences such as physician communication style. Documented gender-specific physician communication and patient behavior differences raise the question of whether gender concordant relationships (i.e., both the provider and patient share the same gender) might affect patient experiences. Objective: Assess whether patient experiences are more positive in gender concordant primary care relationships. Design: Statewide telephone surveys. Linear mixed regression models to estimate the association of CAHPS scores with patient gender and gender concordance. Subjects: Two probability samples of primary care Medicaid patients in Connecticut in 2017 (5/17-7/17) and 2019 (7/19-10/19). Main Measures: Clinician and Group Consumer Assessment of Healthcare Providers and Systems (CG-CAHPS) survey augmented with questions about aspects of care most salient to PCMH-designated organizations and two questions to assess access to mental health services. Key Results: There were no significant effects of gender concordance and differences in experiences by patient gender were modest. Conclusions: This study did not support the suggestion that patient and physician gender and gender concordance have an important effect on patient experiences. abstract_id: PUBMED:36459624 The Gender of the Sender: Assessing Gender Biases of Greetings in Patient Portal Messages. 
Purpose: The purpose of the study was to determine if the use of professional titles in patient electronic health record (EHR) messages varied by gender of the physician receiving the message and gender of the patient sending the message. Methods: We conducted a retrospective observational study evaluating 285,744 messages for a patient's greeting to their physician. Logistic regression mixed effects models were fit to estimate the relationship between title use and gender. Results: Female physicians received 189,442 (66%), and female patients sent 183,579 (64%) messages. Female physicians received an average of 1754 messages each (sd = 1615, median [IQR] = 1624 [255-3040]), which was significantly more than the average 1235 messages for males (sd = 1527, median [IQR] = 385 [103-1857], Mann-Whitney-Wilcoxon p-value = 0.006). Female patients were more likely to send messages using professional titles (OR = 1.37, CI = 1.28-1.47, p < 0.001). Female physicians were no more likely than male physicians to receive professional titles (OR = 1.06, CI = 0.89-1.27, p = 0.500). Conclusions: Female physicians received significantly more EHR messages than men, and female patients were more likely to use a professional title when addressing their physician, regardless of gender. Across all patients, physician gender did not influence the rate of professional title used. abstract_id: PUBMED:12639371 Physician gender, patient gender, and primary care. Background: Studies of the effects of physician gender on patient care have been limited by selected samples, examining a narrow spectrum of care, or not controlling for important confounders. We sought to examine the role of physician and patient gender across the spectrum of primary care in a nationally representative sample, large enough to examine the role of gender concordance and adjust for confounding variables. 
Methods: We examined the relationships between physician and patient gender using nationally representative samples (the U.S. National Ambulatory Medical Care Surveys from 1985 to 1992) of encounters of 41,292 adult patients with 1470 primary care physicians (internists, family physicians, and obstetrician/gynecologists). Factors examined included physician (age, gender, region, rural location), patient (age, gender, race, insurance), and visit characteristics (diagnoses, gender-specific and nonspecific prevention, duration, continuity, and disposition). Results: After multivariate adjustment, female physicians were more likely to see female patients, had longer visit durations, and were more likely to perform female prevention procedures and make some follow-up arrangements and referrals. Female physicians were slightly more likely to check patients' blood pressure, but there were no significant differences in other nongender-specific prevention procedures or use of psychiatric diagnoses. Among encounters without breast or pelvic examinations, visit length was not related to physician gender, but length was longer in gender-concordant visits than in gender-discordant visits. Conclusions: Female physicians were more likely to deliver female prevention procedures, but few other physician gender differences in primary care were observed. Physician-patient gender concordance was a key determinant of encounters. abstract_id: PUBMED:27354840 Patient Preference for Physician Gender in the Emergency Department. Despite historical gender bias against female physicians, few studies have investigated patients' physician gender preference in the emergency department (ED) setting. We sought to determine if there is an association between ED patient demographics and physician gender preference.
We surveyed patients presenting to an ED to determine the association between patient demographics and physician gender preference for five ED situations: 1) 'routine' visit, 2) emergency visit, 3) 'sensitive' medical visit, 4) minor surgical/'procedural' visit, and 5) 'bad news' delivery. A total of 200 ED patients were surveyed. The majority of ED patients reported no physician gender preference for 'routine' visits (89.5 percent), 'emergent' visits (89 percent), 'sensitive' medical visits (59 percent), 'procedural' visits (89 percent) or when receiving 'bad news' (82 percent). In the setting of 'routine' visits and 'sensitive' medical visits, there was a propensity for same-sex physician preference. abstract_id: PUBMED:12661174 Physician gender and patient care. Since 1960, there has been a remarkable increase in the number of women graduating from United States medical schools. Concurrent with this increase, a large body of research has emerged on whether patient gender has an impact on disease diagnosis and therapy. From that research, a related question has emerged: Do patients receive different treatment based on the gender of their physician? In this article, we review the current literature regarding the effects of the gender of the primary care physician on patient care. Our purpose is to understand the potential implications of the changing demographics in the primary care specialties, and to propose a framework for further investigation. abstract_id: PUBMED:8424184 Reconceptualizing gender in physician-patient relationships. In this paper, I propose that existing research on how women physicians relate to patients differently from men might benefit from a reconceptualization of gender. Toward this end, I present an ethnomethodological perspective on gender as an accomplishment and show how it contrasts with conventional perspectives on 'sex differences' and 'sex roles'.
I review results of existing research on women and men physicians' relationships with their patients, highlighting the inadequacies of conventional perspectives to address this topic. Finally, I recast results of my own research on 'doctors' orders' in an effort to illustrate the utility of this perspective to research on gender's effects on the physician-patient relationship. abstract_id: PUBMED:37341950 The Impact of Patient-Physician Racial and Gender Concordance on Patient Satisfaction with Outpatient Clinic Visits. Background: Patient and provider race and gender concordance (patient and physician identify as the same race/ethnicity or gender) may impact patient experience and satisfaction. Objective: We sought to examine how patient and physician racial and gender concordance effect patient satisfaction with outpatient clinical encounters. Furthermore, we examined factors that changed satisfaction among concordant and discordant dyads. Design: Consumer Assessment of Healthcare Provider and Systems (CAHPS) Patient Satisfaction Survey Scores were collected from outpatient clinical encounters between January 2017 and January 2019 at the University of California, San Francisco. Participants: Patients who were seen in the eligible time period, who voluntarily provided physician satisfaction scores. Providers with fewer than 30 reviews and encounters with missing data were excluded. Main Measures: Primary outcome was rate of top satisfaction score. The provider score (1-10 scale) was dichotomized as "top score (9-10)" and "low scores (<9)." Key Results: A total of 77,543 evaluations met inclusion criteria. Most patients identified as White (73.5%) and female (55.4%) with a median age of 60 (IQR 45, 70). Compared to White patients, Asian patients were less likely to give a top score even when controlling for racial concordance (OR: 0.67; CI 0.63-0.714). Telehealth was associated with increased odds of a top score relative to in-person visits (OR 1.25; CI 1.07-1.48). 
The odds of a top score decreased by 11% in racially discordant dyads. Conclusions: Racial concordance, particularly among older, White, male patients, is a nonmodifiable predictor of patient satisfaction. Physicians of color are at a disadvantage, as they receive lower patient satisfaction scores, even in race concordant pairs, with Asian physicians seeing Asian patients receiving the lowest scores. Patient satisfaction data is likely an inappropriate means of determining physician incentives as such may perpetuate racial and gender disadvantages. Answer: The relationship between gender and patient complaints is a multifaceted issue that can be influenced by various factors, including gender roles, communication styles, and patient-physician gender concordance. Gender roles can impact both patient behavior, such as healthcare utilization and preventive examinations, and physician actions in terms of communication, diagnosis, and treatment, potentially leading to gender bias and affecting patient satisfaction (PUBMED:22936479). Research has shown that female physicians may have a more emotional and less dominant communication style, and female patients tend to bring up more psychosocial topics and disclose more information about themselves. Both female and male physicians often provide more information and adopt a more partnership-oriented communication style when seeing a female patient. Gender concordance, where the physician and patient share the same gender, generally seems beneficial for communication, although mixed-gender dyads can be more challenging, especially when a younger female physician sees a male patient (PUBMED:20963672). Patient satisfaction surveys, which are commonly used in physician performance reviews, have shown that female patients may prefer female emergency physicians but are less satisfied with their physician and emergency department visit overall. 
This suggests that there might be a gender-related bias in patient satisfaction surveys, which could influence patient complaints (PUBMED:33069544). However, another study found no significant effects of gender concordance on patient experiences, indicating that gender and gender concordance may not have an important effect on patient experiences (PUBMED:33469761). Additionally, female physicians receive more electronic health record messages than male physicians, and female patients are more likely to use professional titles when addressing their physician, regardless of the physician's gender. This suggests that gender may influence the way patients communicate with their physicians, which could potentially relate to patient complaints (PUBMED:36459624). In summary, while gender roles and gender concordance can influence the dynamics of patient-physician interactions and communication, the relationship between gender and patient complaints is complex and may not be directly correlated. Factors such as communication styles, patient preferences, and societal biases all play a role in shaping patient satisfaction and the likelihood of complaints.
Instruction: Prevalence of abdominal obesity in Spanish children and adolescents. Do we need waist circumference measurements in pediatric practice? Abstracts: abstract_id: PUBMED:24475305 Prevalence of abdominal obesity in Spanish children and adolescents. Do we need waist circumference measurements in pediatric practice? Background: Evidence indicates that central adiposity has increased to a higher degree than general adiposity in children and adolescents in recent decades. However, waist circumference is not a routine measurement in clinical practice. Objective: This study aimed to determine the prevalence of abdominal obesity based on waist circumferences (WC) and waist to height ratio (WHtR) in Spanish children and adolescents aged 6 to 17 years. Further, the prevalence of abdominal obesity (AO) among normal and overweight individuals was analyzed. Design: Data were obtained from a study conducted from 1998 to 2000 in a representative national sample of 1521 children and adolescents aged 6 to 17 years (50.0% female) in Spain. WC and WHtR measurements were obtained in addition to BMI. AO was defined as WHtR ≥0.50 (WHtR-AO), sex- and age-specific WC ≥90th percentile (WC-AO1), and sex- and age-specific WC cut-off values associated with high trunk fat measured by dual-energy X-ray absorptiometry (WC-AO2). Results: IOTF-based overweight and obesity prevalence was 21.5% and 6.6% in children and 17.4% and 5.2% in adolescents, respectively. The respective prevalence of WHtR-AO, WC-AO1, and WC-AO2 was 21.3% (24.6% boys; 17.9% girls), 9.4% (9.1% boys; 9.7% girls), and 26.8% (30.6% boys; 22.9% girls) in children and 14.3% (20.0% boys; 8.7% girls), 9.6% (9.8% boys; 9.5% girls), and 21.1% (28.8% boys; 13.7% girls) in adolescents.
Conclusion: The prevalence of AO in Spanish children and adolescents is of concern. The high proportion of AO observed in young patients who are normal weight or overweight indicates a need to include waist circumference measurements in routine clinical practice. abstract_id: PUBMED:27727475 Height-adjusted percentiles evaluated central obesity in children and adolescents more effectively than just waist circumference. Aim: We compared the prevalence of obesity based on both waist circumference for height and body mass index (BMI) in Iranian children and adolescents. Methods: Data on 13 120 children with a mean age of 12.45 ± 3.36 years (50.8% male) from the fourth Childhood and Adolescence Surveillance and Prevention of Adult Non-communicable Disease study were included. Measured waist circumference values were modelled according to age, gender and height percentiles. The prevalence of obesity was estimated using the 90th percentiles for both unadjusted and height-adjusted waist circumferences and compared with the World Health Organization BMI cut-offs. They were analysed further for short, average and tall children. Results: Waist circumference values increased steadily with age. For short and average height children, the prevalence of obesity was higher when height-adjusted waist circumference was used. For taller children, the prevalence of obesity using height-adjusted waist circumference and BMI was similar, but lower than the prevalence based on measurements unadjusted for height. Height-adjusted waist circumference and BMI identified different children as having obesity, with overlaps of 69.47% for boys and 68.42% for girls. Conclusion: Just using waist circumference underestimated obesity in some Iranian children and measurements should be adjusted for height. abstract_id: PUBMED:36070974 Reference Values for Waist Circumference and Waist-Height Ratio in Korean Children and Adolescents. 
Background: Abdominal obesity, which is a strong indicator of cardiometabolic risk, is widely evaluated using waist circumference (WC) and waist-height ratio (WHtR). In Korea, the reference values for WC for children and adolescents were published in 2007 and need to be revised. Moreover, there is no reference for WHtR. The aim of this study was to establish new reference values for WC and WHtR in Korean children and adolescents. Methods: Data of 20,033 subjects from the Korea National Health and Nutrition Examination Survey (2007-2019) were used. Tables for reference values and the graphs of smoothed percentile curves of WC and WHtR for children and adolescents aged 2-18 years by sex were generated using the LMS method and locally estimated scatterplot smoothing regression analysis after removing extreme values. Results: Sex-specific reference tables and percentile curves for WC and WHtR were developed. In the new WC curves, the 10th, 50th, and 90th percentile lines were lower than the corresponding lines of the 2007 reference for both sexes. The WHtR curves showed sex-specific differences, although they demonstrated a relative plateau among those aged ≥10 years in both sexes. In the logistic regression analysis, the WC and WHtR z-scores showed higher odds ratios for predicting cardiometabolic risk factors than the body mass index z-score. Conclusion: New WC and WHtR reference values for Korean children and adolescents aged 2-18 years were developed using the latest statistical methods. These references will help monitor and track WC and WHtR for evaluating abdominal obesity among at-risk children and adolescents in Korea. 
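The Korean reference values above were generated with the LMS method, which summarizes each age- and sex-specific distribution by three parameters (L: Box-Cox power, M: median, S: coefficient of variation) and converts a raw measurement into a z-score. A minimal sketch of that conversion, using made-up illustrative parameters rather than any actual published reference values:

```python
import math

def lms_zscore(x, L, M, S):
    """Convert a measurement x into a z-score via Cole's LMS formula.

    L is the Box-Cox power, M the median, and S the coefficient of
    variation looked up from a reference table for the child's age and sex.
    """
    if L != 0:
        return ((x / M) ** L - 1.0) / (L * S)
    # Limiting case when the Box-Cox power is zero
    return math.log(x / M) / S

# Hypothetical parameters for illustration only (NOT real reference values):
# a waist of 70 cm against L=-1.5, M=62.0 cm, S=0.12
z = lms_zscore(70.0, L=-1.5, M=62.0, S=0.12)
print(round(z, 2))  # prints 0.92
```

A z-score of 0 corresponds to the reference median, so a child measuring exactly M always maps to 0 regardless of L and S.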
abstract_id: PUBMED:25479861 Waist circumference, waist/height ratio, and neck circumference as parameters of central obesity assessment in children Objective: To analyze studies that assessed the anthropometric parameters waist circumference (WC), waist-to-height ratio (WHR) and neck circumference (NC) as indicators of central obesity in children. Data Sources: We searched PubMed and SciELO databases using the combined descriptors: "Waist circumference", "Waist-to-height ratio", "Neck circumference", "Children" and "Abdominal fat" in Portuguese, English and Spanish. Inclusion criteria were original articles with information about the WC, WHR and NC in the assessment of central obesity in children. We excluded review articles, short communications, letters and editorials. Data Synthesis: 1,525 abstracts were obtained in the search, and 68 articles were selected for analysis. Of these, 49 articles were included in the review. The WC was the parameter more used in studies, followed by the WHR. Regarding NC, there are few studies in children. The predictive ability of WC and WHR to indicate central adiposity in children was controversial. The cutoff points suggested for the parameters varied among studies, and some differences may be related to ethnicity and lack of standardization of anatomical site used for measurement. Conclusions: More studies are needed to evaluate these parameters for determination of central obesity children. Scientific literature about NC is especially scarce, mainly in the pediatric population. There is a need to standardize site measures and establish comparable cutoff points between different populations. abstract_id: PUBMED:36510861 Associations of waist circumference with sex steroid hormones among 4031 USA children and adolescents. In the recent decades, obesity rates among children and adolescents, especially males, have increased significantly. This worldwide phenomenon is thought to significantly affect the levels of sex hormones. 
However, the association between waist circumference (a marker of abdominal obesity) and sex hormone levels in children and adolescents is unknown. In this study, 4031 participants aged 6-19 years from the United States National Health and Nutrition Examination Survey (NHANES) were enrolled. The common confounders of age, race, body mass index, educational level, family income, diabetes, and time of sample collection were also collected. Participants missing any of the above information were excluded from the study. We used multiple linear regression and other statistical methods to assess the associations between waist circumference and serum testosterone, estradiol, sex hormone-binding globulin (SHBG), free androgen index (FAI), and testosterone/estradiol ratio (T/E2). Waist circumference remained associated with sex hormone levels in children and adolescents after controlling for covariates. As waist circumference increases, testosterone levels in children and adolescents show an overall decline after a brief increase, with an inflection point at a waist circumference of 65-66 cm. In addition, waist circumference positively correlates with estradiol levels in male children (β = 0.007, 95% confidence interval: 0.004-0.009). Moreover, circulating SHBG decreases in children and adolescents as waist circumference increases. In conclusion, this study highlighted waist circumference as a vital indicator affecting sex hormone levels in children and adolescents. abstract_id: PUBMED:29936734 Epidemiological characteristics of waist circumference and abdominal obesity among Chinese children and adolescents aged 6-17 years Objective: To analyze the epidemiological characteristics of waist circumference and abdominal obesity among Chinese children and adolescents aged 6-17 years. Methods: Data were from children and adolescents aged 6-17 years sampled in the China National Nutrition and Health Surveillance program (2010-2012).
The sex- and age-specific 90th percentile (P90) was used as the diagnostic cut-off for abdominal obesity. Results: Waist circumference in all age groups was higher in males than in females (P<0.0001), higher in urban than in rural areas (P<0.05), and higher in children from high-income families than in those from middle- or low-income families (P=0.0003). The average rate of abdominal obesity in children and adolescents aged 6-17 years was 11.2%, with 10.7% for boys and 11.8% for girls, a non-significant difference (P>0.05). Abdominal obesity rates in urban and rural areas were 13.2% and 8.5% for boys and 12.3% and 11.2% for girls, respectively. By family income level (high, middle, low), abdominal obesity rates were 15.8%, 11.5% and 8.8% for boys and 13.5%, 11.9% and 11.6% for girls, respectively. Conclusion: The abdominal obesity rate in boys seemed more responsive to family income in both urban and rural areas. abstract_id: PUBMED:24049597 Waist Circumference and Waist-to-Height Ratio Distributions in Polish and German Schoolchildren: Comparative Analysis. Background: To analyze differences in the distributions of waist circumference (WC) and waist-to-height ratio (WHtR) between Polish (PL) and German (GE) children and adolescents. Methods: Two samples of children and adolescents aged 7-18 years participated: from Poland (n = 11,326) and Germany (n = 8,218). The two WC cut-off points (WC1 as central fat distribution and WC2 as central obesity) corresponding at age 18 to the adult criteria were determined. Furthermore, the mean WC cut-off points (WC1m, WC2m) for boys and girls aged 14-18 from both countries were evaluated. For the WHtR, values over 0.5 were used as a definition of central fat distribution. The effect of different WC and WHtR criteria on the prevalence of abdominal obesity in both study groups was evaluated.
Results: The mean and percentile values of WC and WHtR were generally higher in all German children as compared to their peers from Poland. When WC1m is used, the mean (95% CI) prevalence of central fat distribution in the 14-18 y Polish groups was lower (P < 0.05) than those from Germany (boys: 4.4% (3.6-5.2) vs. 8.9% (7.3-10.5); girls: 10.7% (9.0-12.3) vs. 26.4% (23.2-29.6)), whereas, using the WHtR > 0.5, the results were similar for boys - 6.7% (5.9-7.5) vs. 8.5% (8.1-8.9); they were significantly (P < 0.05) lower for Polish and German girls: 5.3% (5.0-5.6) vs. 12.7% (9.7-16.4). The prevalence of central obesity using WC2m as a criterion in the Polish vs. German groups was as follows: (boys - 1.1% (0.8-1.4) vs. 3.1% (2.2-4.0), P < 0.05; girls - 3.1% (2.5-3.7) vs. 10.2% (8.4-12.0), P < 0.05). Conclusions: The results highlight the greater central obesity associated with the German children, both in terms of WC and WHtR, in comparison to their peers from Poland. The prevalence of AO is significantly associated with the criteria used. The results demonstrate the need for the development of international WC references for pediatric subjects. abstract_id: PUBMED:33010984 Prevalence of Hypertension and Its Association with Body Mass Index and Waist Circumference Among Adolescents in Turkey: A Cross-Sectional Study. Purpose: This study examined the prevalence of hypertension in adolescents, with the aim of determining a link between blood pressure and body composition measurements. Design And Methods: This cross-sectional study included adolescents aged 14-19 years attending schools. Their body mass index (BMI) and waist circumference (WC) measurements were recorded. Blood pressure (BP) was taken three or more times, with an average systolic BP and/or diastolic BP ≥ the 95th percentile was regarded as hypertension. Results: The prevalence rates of pre-hypertension and hypertension were 11.2% and 14.8%, respectively. 
Obese and overweight adolescents had a higher prevalence of hypertension compared with those of normal weight (41.6% and 15.5% versus 6.2%, respectively). Following adjustment for BMI status and WC, significant correlations were observed between overweight/obesity and hypertension. After adjusting for BMI status and WC, abdominal obesity was not associated with hypertension. Conclusions: A comparatively high prevalence of hypertension was found among the adolescents. Obesity and overweight were highly correlated with hypertension. Practice Implications: School nurses and doctors must describe and address high or raised blood pressure levels in children and adolescents. Because of the high prevalence of hypertension in the obese and overweight adolescents, this study emphasizes the need for the early prevention and control of obesity and hypertension among children. abstract_id: PUBMED:37140701 Population-based references for waist and hip circumferences, waist-to-hip and waist-to-height ratios for children and adolescents, and evaluation of their predictive ability. Childhood obesity is a public health problem globally as well as in Poland. This paper aimed to provide age- and sex-specific waist circumference, hip circumference, waist-to-height ratio and waist-to-hip ratio normative values for Polish children and adolescents aged 3 - 18 years for more precise monitoring of abdominal fat accumulation. The waist circumference, hip circumference, waist-to-height ratio and waist-to-hip ratio references were constructed with the lambda-mu-sigma (LMS) method using data from two nationally representative health surveys: the OLA study and the OLAF study, the largest available paediatric surveys in Poland which provided measured height, weight, waist, hip and blood pressure for 22,370 children and adolescents aged 3 - 18 years. 
The predictive ability of the newly established references for overweight/obesity (as defined by the International Obesity Task Force criteria) and elevated blood pressure was tested with receiver operating characteristic (ROC) analysis. Abdominal obesity cut-offs linked to adult cardiometabolic cut-offs were established. Reference values for waist circumference, hip circumference, waist-to-height ratio and waist-to-hip ratio are presented, as well as waist circumference, waist-to-height ratio and waist-to-hip ratio cut-off values linked to adults' cardiometabolic risk cut-offs. The predictive value of the population-based waist, hip and waist-to-height ratio references for overweight and obesity was outstanding (area under the ROC curve > 0.95 in both sexes), whereas for elevated blood pressure the predictive ability was low (area under the ROC curve < 0.65). Conclusion: This paper presents the first waist, hip, waist-to-height ratio and waist-to-hip ratio references for Polish children and adolescents aged 3-18 years. The 90th and 95th percentiles and cut-offs linked to adult thresholds for cardiometabolic risk are proposed as cut-offs for abdominal obesity. What is Known: • Waist circumference, waist-to-height ratio and waist-to-hip ratio are used to assess abdominal obesity in children and adults. • In Poland, there are no abdominal obesity or hip circumference references for children and adolescents from 3 to 18 years of age. What is New: • Population-based references of central obesity indices and hip circumference for children and youth aged 3-18 years, with cardiometabolic risk thresholds linked to adults' cut-offs, were established.
abstract_id: PUBMED:24459911 Distribution of waist circumference and waist-to-height ratio for children and adolescents aged 7-18 years in Gansu Province Objective: To establish percentile norms of waist circumference and waist-to-height ratio for children and adolescents aged 7-18 years in Gansu Province and to analyze their distribution and growth trends. Methods: A stratified cluster representative sample of 6600 urban and rural children aged 7-18 years in Gansu Province was selected. Waist circumference and height were measured, percentile norms of waist circumference and waist-to-height ratio were established, and the results were compared with corresponding domestic data. Results: Sex- and age-specific P5, P10, P25, P50, P75, P90 and P95 percentiles of waist circumference and waist-to-height ratio for children aged 7-18 years in Gansu Province were obtained. The P50 of waist circumference increased year by year with two crossover points, whereas that of waist-to-height ratio fluctuated only slightly. Comparing the P50 of waist circumference and waist-to-height ratio among national (Chinese), Beijing and Gansu data, the Gansu values were lower than the national and Beijing values before age 15 but higher after age 16. Conclusion: Percentile norms of waist circumference and waist-to-height ratio for children and adolescents aged 7-18 years in Gansu Province were constructed, providing basic data for further studies of central obesity in Chinese children and adolescents. Answer: The prevalence of abdominal obesity (AO) in Spanish children and adolescents is a matter of concern, as indicated by a study that aimed to determine the prevalence based on waist circumferences (WC) and waist to height ratio (WHtR) in this population. The study included a representative national sample of 1521 children and adolescents aged 6 to 17 years in Spain.
The prevalence of AO was defined using three different criteria: WHtR ≥0.50 (WHtR-AO), sex and age-specific WC ≥90th percentile (WC-AO1), and sex and age-specific WC cut-off values associated with high trunk fat measured by dual-energy X-ray absorptiometry (WC-AO2). The respective prevalence rates of WHtR-AO, WC-AO1, and WC-AO2 were found to be 21.3%, 9.4%, and 26.8% in children, and 14.3%, 9.6%, and 21.1% in adolescents. The study concluded that the high proportion of AO observed in young patients who are normal weight or overweight indicates a need to include waist circumference measurements in routine clinical practice (PUBMED:24475305). This conclusion is supported by the fact that waist circumference is not a routine measurement in clinical practice, despite evidence indicating that central adiposity has increased to a higher degree than general adiposity in children and adolescents in recent decades. The findings suggest that waist circumference measurements could provide valuable information for assessing the risk of AO in the pediatric population, which is important for early intervention and prevention of related health issues.
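The WHtR-based criterion discussed above (abdominal obesity when waist circumference divided by height reaches 0.50) is simple enough to sketch directly. The measurements in the example below are illustrative values, not data from the Spanish sample:

```python
def waist_to_height_ratio(waist_cm, height_cm):
    """Waist circumference divided by standing height, both in centimetres."""
    return waist_cm / height_cm

def has_abdominal_obesity_whtr(waist_cm, height_cm, cutoff=0.50):
    """Apply the WHtR >= 0.50 abdominal-obesity criterion (WHtR-AO)."""
    return waist_to_height_ratio(waist_cm, height_cm) >= cutoff

# Illustrative measurements:
print(has_abdominal_obesity_whtr(68.0, 130.0))  # 68/130 = 0.523 -> True
print(has_abdominal_obesity_whtr(60.0, 140.0))  # 60/140 = 0.429 -> False
```

Because the cutoff is height-relative, the same waist circumference can classify a short child as abdominally obese while a taller child falls below the threshold, which is the rationale for using WHtR alongside raw WC percentiles.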
Instruction: Comparison of MALT and non-MALT primary large cell lymphoma of the stomach: does histologic evidence of MALT affect chemotherapy response? Abstracts: abstract_id: PUBMED:11148559 Comparison of MALT and non-MALT primary large cell lymphoma of the stomach: does histologic evidence of MALT affect chemotherapy response? Background: Although the clinicopathologic features of low grade gastric MALToma (lymphoma of mucosa-associated lymphoid tissue) recently have been well delineated, the significance of identifying histologic evidence of MALT origin in a primary high grade gastric lymphoma is less clear. The authors sought to address this issue and, in particular, to clarify if MALT and non-MALT primary large cell gastric lymphoma might have a different response to systemic chemotherapy. Methods: The authors reviewed the pathologic specimens of all patients who had a diagnosis of primary large cell lymphoma of the stomach and who had been treated primarily by systemic chemotherapy in our institutions between January 1, 1988 and December 31, 1998. The patients were divided into two groups by experienced hematopathologists, based on the presence or absence of histologic features suggestive of MALToma, including typical lymphoepithelial lesions and infiltration of characteristic centrocyte-like cells. Disease staging was done according to the AJCC/UICC system with Musshoff modification. The median number of gastric biopsies for each patient was 7 (range, 1-21). Results: Seventeen patients with and 26 patients without histologic evidence of MALToma were identified. Clinical features were similar between the two groups except that a greater proportion of patients without evidence of MALToma had elevated levels of serum lactate dehydrogenase (50% vs. 12%, P = 0.01). The median duration of follow-up for the 43 patients was 46.5 months (range, 17-124 mos). All patients received standard systemic chemotherapy including anthracyclines or anthracenedione.
The response rate was 88.2% for patients with evidence of MALToma and 57.7% for those without (P = 0.03). The 5-year overall survival rate was 80.5% for patients with evidence of MALToma and 48.9% for those without (P = 0.02). Multivariate analysis indicated that response to chemotherapy, disease stage (Stage I and II-1 vs. Stage II-2, III, and IV), and the presence of MALToma features were independent prognostic factors for overall survival. Conclusion: The results of this relatively small study series suggested that the presence of histologic features of MALToma in patients with primary large cell gastric lymphoma might have been associated with a better response to systemic chemotherapy and a better prognosis. Further studies to consolidate this conclusion are necessary. abstract_id: PUBMED:30364585 MALT lymphoma: epidemiology, clinical diagnosis and treatment. Primary gastric lymphoma (PGL) represents a rare pathology, which can be easily misdiagnosed because of unspecific symptoms of the digestive tract. Histologically, PGL can vary from indolent marginal zone B-cell lymphoma of the mucosa-associated lymphoid tissue (MALT) to aggressive diffuse large B-cell lymphoma (DLBCL). During the years, clinical trials revealed the important role of Helicobacter pylori (H. pylori) in the pathogenesis of gastric MALT lymphoma. Infection with Helicobacter pylori is an influential promoter of gastric lymphomagenesis initiation. Long-term studies revealed that eradication therapy could regress gastric lymphomas. abstract_id: PUBMED:23797121 MALT lymphoma: forget surgery? Treatment of gastric marginal zone B cell lymphoma of MALT (mucosa-associated lymphoid tissue) is nowadays standardized as outlined in the German S3 guideline of 2009 and the European (EGILS) consensus report of 2011. The first choice of treatment is Helicobacter pylori eradication in any case irrespective of H. pylori status and lymphoma stage. 
Some 70-80% of patients reveal complete remission of MALT lymphoma following successful eradication of the bacterium. Those patients with histologically persisting lymphoma residuals are managed by a watch-and-wait strategy. Nonresponders to H. pylori eradication are referred for radiotherapy with curative intent in stages I and II. The rare cases of MALT lymphoma of stage III and IV should be treated by chemotherapy. Surgery no longer plays a role in the therapy of gastric MALT lymphoma except for very rare complications such as perforation or bleeding that cannot be controlled endoscopically. Diffuse large B cell lymphoma (DLBCL) is the second most common gastric lymphoma. H. pylori eradication may lead to regression of DLBCL in the individual case. However, immunochemotherapy by a combination of rituximab and the CHOP protocol represents the standard treatment approach in patients with DLBCL and offers a good chance of cure.
The average age for low and high MALT lymphomas was significantly different (p < 0.05), but there were no differences in other comparisons. There was a female predominance of low MALT lymphoma patients (female to male ratio, 47/35), while for high MALT patients the ratio was almost even (8/9), and for DLL patients there was a male predominance (11/16). Examination of surgically resected material showed that MALT lymphomas had a wider distribution in the gastric wall than DLL. Conclusions: The findings suggest that at least some of the high grade gastric lymphomas, especially in patients younger than the fifth decade, do not originate from high grade transformation of low MALT lymphomas. It seems to take about one decade at least for high grade transformation of low MALT lymphomas. abstract_id: PUBMED:10374886 Gastric low-grade MALT lymphoma, high-grade MALT lymphoma and diffuse large B cell lymphoma show different frequencies of trisomy. Gastric MALT lymphoma is a distinct entity related to Helicobacter pylori gastritis. Some studies suggest a role for trisomy 3 in the genesis of these lymphomas, but they mainly focused on low-grade MALT lymphoma. Gastric MALT lymphoma, however, comprises a spectrum from low- to high-grade cases. Furthermore, its exact relation to primary diffuse large B cell lymphoma (DLBCL) of the stomach is not clear. We applied in situ hybridisation (ISH) with centromeric probes on 43 samples of 39 patients with primary gastric lymphoma (13 samples with low-grade MALT lymphoma, 25 with high-grade MALT lymphoma and five with DLBCL) to detect numerical aberrations of 10 chromosomes. ISH was performed immunohistochemically on nuclei isolated from paraffin-embedded resection tissue and on whole paraffin sections using immunofluorescence. In six of 13 low-grade MALT lymphomas trisomy was detected (46%) and mostly involved chromosome 3 (33%). 
In high-grade MALT lymphomas, trisomies were found in 16 of 25 cases (64%), mainly involving chromosomes 12 and 18. Trisomy 3 was present in only 13% of these cases. Of five DLBCL, only one showed trisomy. Nine of the 16 aberrant high-grade MALT lymphomas (56%) showed trisomy of more than one chromosome per case vs two of six for low-grade cases. In lymphomas with separate low- and high-grade tumour components some trisomies were detected in both components, whereas others occurred only in the high-grade tumour cells. This supports the hypothesis that high-grade MALT lymphomas can develop from a low-grade type and that this progression is accompanied by the acquisition of more genetic aberrations. However, trisomy 3 probably does not play a major role in this progression. abstract_id: PUBMED:14517694 Clonal relationship in multifocal non-Hodgkin's lymphoma of mucosa-associated lymphoid tissue (MALT). To elucidate the progression of gastric marginal zone B-cell lymphoma of mucosa-associated lymphoid tissue (MALT) type, we analyzed a case presenting simultaneously with MALT lymphoma of the stomach and lung, and a gastric high-grade diffuse large lymphoma. The rearranged immunoglobulin heavy chain (IgH) variable regions were analyzed using a polymerase chain reaction (PCR)-based assay. Clonal relationship was shown between the gastric high-grade and the pulmonary low-grade lymphoma. The gastric MALT lymphoma was not related to the other manifestations. Translocation t(11;18) was not detected in the gastric high-grade lymphoma. MALT lymphomas at various locations and with different histologies may derive from a common precursor cell. Lymphomas at identical sites may have different stem cells. abstract_id: PUBMED:32134755 PTEN Loss and Cyclin A2 Upregulation Define a PI3K/AKT Pathway Activation in Helicobacter pylori-induced MALT and DLBCL Gastric Lymphoma With Features of MALT. 
Helicobacter pylori infection is strongly associated with primary gastric diseases, such as extranodal mucosa-associated lymphoid tissue (MALT) lymphoma, diffuse large B-cell lymphoma (DLBCL) with histologic evidence of MALT origin, and gastric carcinoma. The cytotoxin-associated gene A (CagA) protein behaves as a bacterial oncoprotein, promoting tumorigenesis via dysregulation of the phosphatidylinositol 3-kinase/AKT pathway (PI3K/AKT). We investigated the molecular mechanisms of PI3K/AKT pathway dysregulation in H. pylori-induced MALT and DLBCL gastric lymphoma. Immunohistochemical assays for CagA, phospho(p)-S473-AKT, PTEN, SHIP, and cyclin A2 proteins were performed on samples from 23 patients with H. pylori-positive MALT lymphoma and 16 patients with H. pylori-positive gastric DLBCL. We showed that CagA localization is correlated with the activation of the AKT pathway in both MALT and DLBCL lymphoma cells. Interestingly, we found a close association between the loss of PTEN, the overexpression of cyclin A2, and the phosphorylation of AKT in gastric MALT and DLBCL tumor cells. abstract_id: PUBMED:24917868 Immunoglobulin gene repertoire diversification and selection in the stomach - from gastritis to gastric lymphomas. Chronic gastritis is characterized by gastric mucosal inflammation due to autoimmune responses or infection, frequently with Helicobacter pylori. Gastritis with H. pylori background can cause gastric mucosa-associated lymphoid tissue lymphoma (MALT-L), which sometimes further transforms into diffuse large B-cell lymphoma (DLBCL). However, gastric DLBCL can also be initiated de novo. The mechanisms underlying transformation into DLBCL are not completely understood. We analyzed immunoglobulin repertoires and clonal trees to investigate whether and how immunoglobulin gene repertoires, clonal diversification, and selection in gastritis, gastric MALT-L, and DLBCL differ from each other and from normal responses. 
The two gastritis types (positive or negative for H. pylori) had similarly diverse repertoires. MALT-L dominant clones (defined as the largest clones in each sample) presented higher diversification and longer mutational histories compared with all other conditions. DLBCL dominant clones displayed lower clonal diversification, suggesting the transforming events are triggered by similar responses in different patients. These results are surprising, as we expected to find similarities between the dominant clones of gastritis and MALT-L and between those of MALT-L and DLBCL. abstract_id: PUBMED:38444261 The role of baseline 2-[18 F]-FDG-PET/CT metrics and radiomics features in predicting primary gastric lymphoma diagnosis. Diffuse Large B-Cell Lymphomas (DLCBL) and mucosa-associated lymphoid tissue (MALT) are the two most common primary gastric lymphomas (PGLs), but have strongly different features. DLBCL is more aggressive, is frequently diagnosed at an advanced stage and has a poorer prognosis. The aim of this retrospective study was to explore the role of fluorine-18-fluorodeoxyglucose positron emission tomography/computed tomography (2-[18 F]-FDG-PET/CT) and radiomics features (RFs) in predicting the final diagnosis of patients with PGLs. Ninety-one patients with newly diagnosed PGLs who underwent pre-treatment 2-[18 F]-FDG-PET/CT were included. PET images were qualitatively and semi-quantitatively analyzed by deriving maximum standardized uptake value body weight (SUVbw), maximum standardized uptake value lean body mass (SUVlbm), maximum standardized uptake value body surface area (SUVbsa), lesion to liver SUVmax ratio (L-L SUV R), lesion to blood-pool SUVmax ratio (L-BP SUV R), metabolic tumor volume (gMTV) and total lesion glycolysis of gastric lesion (gTLG), total MTV (tMTV), TLG, and first-order RFs (histogram-related and shape related). 
Receiver-operating characteristic (ROC) curve analyses were performed to determine the differential diagnostic values of PET parameters. The final diagnosis was DLBCL in 54 (59%) cases and MALT in 37 cases (41%). PGLs showed FDG avidity in 83 cases (90%), 54/54 of DLBCL and 29/37 of MALT. All PET/CT metabolic features, as well as stage of disease and tumor size, were significantly higher in DLBCL than MALT, while the presence of H. pylori infection was more common in MALT. At univariate analysis, all PET/CT metrics were significantly higher in DLBCL than MALT lymphomas, while among RFs only Shape volume_vx and Shape sphericity showed a significant difference between the two groups. In conclusion, we demonstrated that 2-[18 F]-FDG-PET/CT parameters can potentially discriminate between DLBCL and MALT lymphomas with high accuracy. Among first-order RFs, only Shape volume_vx and Shape sphericity helped in the differential diagnosis.
Results: Numerical abnormality was observed in chromosome 12 in 11 patients (68.8%), in chromosome 8 in 10 (62.5%), and in chromosome 17 in 5 (31.3%), showing a high frequency. H. pylori infection was detected in 80% and 50% of patients with l-MALT and h-MALT, respectively, but no H. pylori infection was observed in patients with DLBL. Conclusions: A new biological characteristic of gastric MALT lymphoma was obtained, i.e., a high frequency of numerical abnormalities of chromosomes 12, 8, and 17. There was no correlation between the numerical chromosomal abnormalities and the clinicopathological findings. Answer: The comparison of MALT (mucosa-associated lymphoid tissue) and non-MALT primary large cell lymphoma of the stomach suggests that histologic evidence of MALT may affect chemotherapy response. A study reviewed the pathologic specimens of patients diagnosed with primary large cell lymphoma of the stomach who were treated primarily by systemic chemotherapy. The patients were divided into two groups based on the presence or absence of histologic features suggestive of MALToma. The results indicated that patients with evidence of MALToma had a higher response rate to chemotherapy (88.2%) compared to those without (57.7%) (P = 0.03). Additionally, the 5-year overall survival rate was significantly better for patients with evidence of MALToma (80.5%) compared to those without (48.9%) (P = 0.02). Multivariate analysis showed that response to chemotherapy, disease stage, and the presence of MALToma features were independent prognostic factors for overall survival (PUBMED:11148559). These findings suggest that the presence of histologic features of MALToma in patients with primary large cell gastric lymphoma might be associated with a better response to systemic chemotherapy and a better prognosis. However, the study also notes that further research is necessary to consolidate this conclusion.
Instruction: Antegrade double balloon enteroscopy for continued obscure gastrointestinal bleeding following push enteroscopy: is there a role? Abstracts: abstract_id: PUBMED:36051995 Role of balloon enteroscopy for obscure gastrointestinal bleeding in those with surgically altered anatomy: A systematic review. Background: Obscure gastrointestinal (GI) bleeding is defined as persistent bleeding despite negative evaluation with both esophagogastroduodenoscopy and colonoscopy and can be secondary to small intestinal pathology. Standard endoscopy as well as push endoscopy can be a challenge in those with altered anatomy given inaccessible areas as well as perforation risk. Single and double balloon enteroscopy can be warranted in this patient population in instances of obscure GI bleed. Aim: To assess the safety and diagnostic efficacy of balloon enteroscopy for obscure GI bleeding in patients with surgically altered anatomy. Methods: A search was conducted through PubMed, MEDLINE, Google Scholar, Scopus, and Embase with the key words "enteroscopy," "obscure bleeding," and "altered anatomy," to identify relevant articles in English with no restricted time frame. A search within the Reference Citation Analysis database was conducted to ensure inclusion of the latest high impact articles. Study types included in the review were prospective and retrospective reviews, case series, and case reports. The reference lists of these papers were also reviewed to find further papers that were applicable. The authors extracted the data from the studies that fit inclusion criteria. Data of interest included type of study, type of procedure, and type of altered anatomy, as well as the number of patients with any diagnostic or therapeutic intervention. Data was also recorded on procedure tolerance and complications. The data was analyzed with descriptive statistics. Results: Our literature search yielded 14 studies that were included. 
There were 68 procedures performed with 61 unique patients subjected to these procedures. Forty-four (65%) of the procedures were double balloon, 21 (31%) were single balloon, and 3 (4%) were classified as through the scope balloon assisted. The most common altered anatomy types included Gastric Bypass Roux-en-Y, Pylorus Sparing Whipple, Orthotopic Liver Transplantation with Roux-en-Y, and Gastrojejunostomy Roux-en-Y. The procedures were successfully performed in each patient. There were 5 (7%) procedures that were complicated by perforation. Amongst the available data, the diagnostic yield was 48/59 (81%) and a therapeutic yield of 39/59 (66%). One patient was recommended surgical revision of their altered anatomy following enteroscopy. Conclusion: Balloon enteroscopy is a useful diagnostic modality in investigating obscure GI bleeding within those with surgically altered anatomy; however, precautions must be taken as this population may have increased perforation risk. abstract_id: PUBMED:29325750 Double-balloon enteroscopy (DBE) in patients presenting with obscure gastrointestinal bleeding (OGIB). Background And Study Aims: Obscure gastrointestinal bleeding (OGIB) is defined as bleeding of unknown origin that persists or recurs after an initial negative investigation. Identifying the source of OGIB represents a diagnostic challenge that is frequently focused on visualizing the small intestine. Conventional diagnostic methods, such as push enteroscopy, small-bowel follow-through, radionuclide scanning, and angiography, each exhibit inherent limitations. Double balloon enteroscopy (DBE) was designed specifically to evaluate the entire small bowel. DBE allows for better visualization, biopsy of the identified lesions and application of therapeutic techniques. This study sought to assess the role of DBE in the diagnosis and management of patients with OGIB. 
Patients And Methods: This prospective study was conducted to analyse data from 31 patients presenting with OGIB referred for DBE in the Endoscopy Unit at the Internal Medicine Department of the Faculty of Medicine, Cairo University. Results: Five patients had lesions in locations other than the small intestine that accounted for GI bleeding. Thus, the potential source of OGIB was defined as the small intestine in 18 of 26 patients (69.2%), and negative DBE findings were noted in eight patients (30.8%). Major findings included small intestinal tumours in eight patients, vascular bleeding lesions in 8 patients and ulcerations in 2 patients. Endoscopic haemostasis was performed in eight patients with vascular lesions. The three patients with Peutz-Jeghers syndrome underwent polypectomy of their major polyps. Patients with gastrointestinal tumours were referred for surgery. Conclusion: DBE is an excellent endoscopic procedure that has a relatively high diagnostic and therapeutic yield. The procedure is feasible and exhibits a high safety profile with a low complication rate when performed by an experienced endoscopist.
Those without previous EGD or colonoscopy were excluded. Patients were stratified into two groups: DBE performed within 72 h of OOGIB (emergent) and beyond 72 h of OOGIB (nonemergent). Propensity score matching was used to adjust for the difference in patients in the two groups. Logistic regression analysis was used to assess factors associated with diagnostic and therapeutic yield. Kaplan-Meir survival curve showed GI bleed-free survival following initial bleed and was compared using the log rank test. Results: A total of 154 patients met the inclusion criterion, of which 62 had emergent DBE and 92 had nonemergent DBE. The propensity-score-matched sample consisted of 112 patients, with 56 patients each in the emergent and nonemergent groups. Univariate and multivariable logistic regression analysis showed a significant association between VCE and emergent DBE and diagnostic and therapeutic yield (P < 0.05). Emergent DBE patients had increased GI bleed-free survival compared to those in the nonemergent group (P = 0.009). Conclusion: Our data demonstrate that emergent DBE during inpatient OOGIB can impact the overall diagnostic yield, therapeutic yield, and GI rebleeding post DBE. abstract_id: PUBMED:29198551 Experience of double balloon enteroscopy. Background: To evaluated the experiences of double balloon enteroscopy (DBE) for obscure gastrointestinal bleeding. Methods: From October 2003 to November 2009, a total of 124 patients with obscure gastrointestinal bleeding were investigated using DBE. A total of 142 procedures (84 per oral and 58 per rectal route) were performed by the same endoscopist. 
Results: The average insertion time was 71.4 (20-199) minutes per-orally; the average insertion time was 139.2 min for the first 5 procedures, 109.4 min for 6-10 procedures, 76.6 min for 11-15, 66.4 min for 16-20, 67.4 min for 21-25, 59 min for 26-30, 66 min for 31-35, 52 min for 36-40, 42.9 min for 41-45, 44.6 min for 46-50, 42.2 min for 51-55, 38.6 min for 56-60, 44.6 min for 61-65, 37.8 min for 66-70, 38.8 min for 71-75, 44.3 min for 76-80, and 36.6 min for 81-84 procedures; there was no statistical difference after the first ten procedures. The average insertion time was 92.1 (22-260) minutes per-rectally; the average insertion time was 159.6 min for the first 5 procedures, 98.4 min for 6-10 procedures, 86.6 min for 11-15, 76.4 min for 16-20, 82.4 min for 21-25, 75.0 min for 26-30, 78.2 min for 31-35, 72.4 min for 36-40, 68.2 min for 41-45, 66.9 min for 46-50, and 71.4 min for 51-58 procedures; there was no statistical difference after the first five procedures. There was no statistically significant difference between genders, and diagnostic yield did not differ with respect to any of these factors. The overall diagnostic yield was 82.4%. Conclusion: DBE is a safe and effective means of diagnosing and managing patients with obscure gastrointestinal bleeding. However, because it is time-consuming, DBE has a learning curve to overcome before a physician can become expert enough to achieve adequate insertion times with good diagnostic and therapeutic rates.
Methods: Patients with suspected small bowel lesion diagnosed by capsule endoscopy were prospectively included between September 2009 and December 2010 in five tertiary-care academic medical centers. Results: After capsule endoscopy, 191 double-balloon enteroscopy and 50 spiral enteroscopies were performed. Indications were obscure gastrointestinal bleeding in 194 (80%) of cases. Lesions detected by capsule endoscopy were mainly angioectasia. Double-balloon and spiral enteroscopy resulted in finding one or more lesions in 70% and 75% of cases, respectively. The mean diagnosis procedure time and the average small bowel explored length during double-balloon and spiral enteroscopy were, respectively, 60 min (45-80) and 55 min (45-80) (P=0.74), and 200 cm (150-300) and 220 cm (200-300) (P=0.13). Treatment during double-balloon and spiral enteroscopy was possible in 66% and 70% of cases, respectively. There was no significant major procedure-related complication. Conclusion: Spiral enteroscopy appears as safe as double-balloon enteroscopy for small bowel exploration with a similar diagnostic and therapeutic yield. Comparison between the two procedures in terms of duration and length of small bowel explored is slightly in favor of spiral enteroscopy but not significantly. abstract_id: PUBMED:29393257 Clinical Utility of Capsule Endoscopy and Double-Balloon Enteroscopy in the Management of Obscure Gastrointestinal Bleeding. Background: Obscure gastrointestinal bleeding (OGIB) is a common but embarrassing problem for gastroenterologists. Most bleeding lesions associated with OGIB are present in the small intestine and sometimes cannot be identified due to the difficulty associated with physical accessibility. Capsule endoscopy (CE) and double-balloon enteroscopy (DBE) have enabled in the process of diagnosing and have evolved to become approaches to treating OGIB. Summary: CE is a minimally invasive procedure and has a high diagnostic yield in patients with OGIB. 
DBE offers additional advantage of biopsy collection for pathological diagnosis and therapeutic intervention, but it should be noted that it sometimes causes severe adverse events such as acute pancreatitis, intestinal bleeding, and intestinal perforation. CE should be performed early in the workup course of OGIB. Positive CE findings enhance the diagnostic yield of subsequent DBE, and the effective therapeutic intervention improves the clinical outcomes of OGIB patients. On the contrary, there are no clear guidelines for further investigation of patients with negative CE findings at the present. Although patients in stable general condition may only require follow-up, repeated CE is useful to detect positive findings in patients with evidence of sustained bleeding and progressing anemia. We have revealed that repeated CE has higher positive finding rate than DBE in OGIB patients with negative CE findings in a preliminary study. Key Messages: CE and DBE have complementary roles in the management of OGIB, and the precise timing and proper sequence may be important for the approach to treating OGIB. abstract_id: PUBMED:25400992 Evaluation and outcomes of patients with obscure gastrointestinal bleeding. Obscure gastrointestinal bleeding (OGIB) is defined as recurrent or persistent bleeding or presence of iron deficiency anaemia after evaluation with a negative bidirectional endoscopy. OGIB accounts for 5% of gastrointestinal bleeding and presents a diagnostic challenge. Current modalities available for the investigation of OGIB include capsule endoscopy, balloon assisted enteroscopy, spiral enteroscopy and computed tomography enterography. These modalities overcome the limitations of previous techniques. Following a negative bidirectional endoscopy, capsule endoscopy and double balloon enteroscopy remain the cornerstone of investigation in OGIB given their high diagnostic yield. 
Long-term outcome data in patients with OGIB is limited, but is most promising for capsule endoscopy. This article reviews the current literature and provides an overview of the clinical evaluation of patients with OGIB, available diagnostic and therapeutic modalities and long-term clinical outcomes. abstract_id: PUBMED:19609674 Antegrade double balloon enteroscopy for continued obscure gastrointestinal bleeding following push enteroscopy: is there a role? Background: The benefit of double balloon endoscopy (DBE) over push enteroscopy (PE) for the proximal small bowel in patients with obscure gastrointestinal bleeding remains unclear. Aim: To quantify the benefit of DBE if PE fails to benefit patients with obscure gastrointestinal bleeding. Methods: This retrospective DBE database review between July 2004 and April 2008 was conducted at a tertiary university hospital in Australia. Thirty-three patients with obscure gastrointestinal bleeding who had undergone PE for proximal small bowel lesions were identified from a DBE database of 280 patients. Mean age was 68.6 (range 30-91) years, and 17 were men. In group A (n = 15) the target lesion was not reached by PE, and in group B (n = 18) an abnormality was found by PE (angioectasia in 17 and red spots in 1) but the patient had ongoing bleeding. Mean follow-up for the cohort was 19.2 (range 5-39) months. DBE interventions were performed as appropriate. Results: An abnormality was found at DBE in 28/33 (85%) patients. DBE found an abnormality in 12/15 (80%) in group A and 16/18 (89%) in group B. Endoscopic intervention was performed in 23/33 patients (70%). In 27/33 (82%) patients a clinical benefit was seen following DBE. Six patients (18%) had no clinical benefit from DBE. Conclusions: In patients with obscure gastrointestinal bleeding and proximal small bowel lesions who fail to benefit from PE, DBE offers a very high benefit in finding and treating lesions with good long-term outcomes. 
abstract_id: PUBMED:20397267 Non-small-bowel lesions encountered during double-balloon enteroscopy performed for obscure gastrointestinal bleeding. Aim: To report the incidence of non-small-bowel bleeding pathologies encountered during double-balloon enteroscopy (DBE) procedures and to analyse their significance. Methods: A retrospective study of a prospective DBE database conducted in a tertiary-referral center was conducted. A total of 179 patients with obscure gastrointestinal bleeding (OGIB) referred for DBE from June 2004 to November 2008 were analysed looking for the incidence of non-small-bowel lesions (NSBLs; all and newly diagnosed) encountered during DBE. Results: There were 228 (150 antegrade and 78 retrograde) DBE procedures performed in 179 patients. The mean number of DBE procedures was 1.27 per patient. The mean age (SD) of the patients was 62 +/- 16 years old. There were 94 females (52.5%). The positive yield for a bleeding lesion was 65.9%. Of the 179 patients, 44 (24.6%) had NSBLs (19 of them had dual pathology with small-bowel lesions and NSBLs); 27 (15.1%) had lesions not detected by previous endoscopies. The most common type of missed lesions were vascular lesions. Conclusion: A significant proportion of patients (24.6%) had lesions within reach of conventional endoscopy. Careful repeat examination with gastroscopy and colonoscopy might be required. abstract_id: PUBMED:27908511 Double-Balloon Enteroscopy. Since the introduction of double-balloon enteroscopy 15 years ago, flexible enteroscopy has become an established method in the diagnostic and therapeutic work-up of small bowel disorders. With appropriate patient selection, diagnostic and therapeutic yields of 70% to 85% can be expected. The complication rates with diagnostic and therapeutic DBE are estimated at approximately 1% and 3% to 4%, respectively. Appropriate patient selection and device selection, as well as skill, are the key issues for successful enteroscopy. 
However, technical developments and improvements mean that carrying out enteroscopy is likely to become easier. Answer: Yes, there is a role for antegrade double balloon enteroscopy (DBE) for continued obscure gastrointestinal bleeding (OGIB) following push enteroscopy (PE). A retrospective review of a DBE database indicated that in patients with OGIB who had not benefited from PE, DBE offered a high benefit in finding and treating lesions with good long-term outcomes. In this study, an abnormality was found at DBE in 85% of patients, and endoscopic intervention was performed in 70% of patients. Moreover, 82% of patients saw a clinical benefit following DBE, suggesting that DBE is a valuable next step when PE fails to resolve OGIB (PUBMED:19609674). Additionally, DBE has been shown to be a safe and effective means of diagnosing and managing patients with OGIB, with an overall diagnostic yield of 82.4% in one study (PUBMED:29198551). It is also noted that DBE has a learning curve, and adequate insertion times with good diagnostic and therapeutic rates can be achieved as the endoscopist gains experience. Furthermore, the role of DBE in the management of OGIB is supported by its relatively high diagnostic and therapeutic yield, as well as its feasibility and safety profile when performed by an experienced endoscopist (PUBMED:29325750). DBE is particularly useful in patients with surgically altered anatomy, where it has been found to be a useful diagnostic modality, although precautions must be taken due to an increased perforation risk in this population (PUBMED:36051995). In summary, antegrade DBE is a valuable procedure for patients with continued OGIB after PE, offering high diagnostic and therapeutic yields and contributing to improved clinical outcomes.
Instruction: Do an operation and radiotherapy in the breast-preserving therapy of breast carcinoma modify the meaningfulness of subsequent mammographies? Abstracts: abstract_id: PUBMED:8765349 Do an operation and radiotherapy in the breast-preserving therapy of breast carcinoma modify the meaningfulness of subsequent mammographies? Purpose: Today, whenever possible, breast carcinoma is operated on with preservation of as much breast as possible, followed by postoperative radiotherapy of the thoracic wall. This form of treatment can lead to alterations of the skin and parenchyma, visible on mammograms, which may make evaluation of these mammographies with regard to recurrence difficult. This paper describes the nature and frequency of therapy-induced changes in the treated breast. Patients And Methods: Three hundred and eighty mammographies of 50 female patients were evaluated for alterations of skin and parenchyma caused by the combined therapy. Inclusion criteria were, among others, 2 mammographies in the 1st year after therapy, no prior or concurrent chemotherapy, and a follow-up time of at least 5 years. Results: Thirty-two patients (64%) showed no fibrotic changes of the breast; 5 patients (10%) initially developed increased parenchymal density which, however, reverted to a completely normal state. Only 13 patients (26%) had irreversible fibrotic alterations of the parenchyma, of whom only 3 (6%) showed extensive fibrosis that might genuinely make recurrence diagnosis difficult. Conclusion: Major therapy-induced parenchymal changes that can be registered by mammography are relatively rare. For this reason, mammography retains its high value in post-treatment follow-up, especially as it is still the only diagnostic method able to demonstrate microcalcifications suspicious for malignancy.
abstract_id: PUBMED:8451730 Radiotherapy following a breast-preserving operation in breast carcinoma Between 1980 and 1988, 116 patients with invasive breast cancer were treated by limited surgery with axillary node dissection followed by radiation therapy. Radiotherapy was performed with individualized computerized treatment planning. The target volume dose was 50 Gy to the whole breast followed by an electron boost of 10 Gy to the tumor bed. The median follow-up period was 59 months. The actuarial overall survival rate was 93%, and the disease-free survival rate 75.2%. Ten (8.6%) breast recurrences occurred. Analysis of clinical and pathological prognostic factors revealed that local control was impaired by young age, negative hormone receptors, G3-4 tumors and intraductal growth. The recurrence rate for T3 and T4 tumors was 12.5%. abstract_id: PUBMED:37442059 Accelerated subsequent lung cancer after post-operative radiotherapy for breast cancer. Background: Post-operative whole breast radiotherapy for breast cancer (BC) may increase the risk of subsequent lung cancer (LC). The impact of radiotherapy intensification (boost) has not been specifically explored in this context. We investigated the role of radiation modalities on the development of subsequent LC among our patients treated by radiotherapy for localized BC. Methods: All patients with a diagnosis of LC between 2000 and 2020 with a history of prior localized BC treated by surgery and post-operative radiotherapy were retrospectively reviewed. The primary endpoint was time to first diagnosis of LC after BC treatment with radiotherapy (RT). Results: Of the 98 patients who developed subsequent LC after primary BC treated with post-operative RT, 38% of patients (n = 37) received an additional RT boost, and 46% (n = 45) received hormonal treatment after radiation. A total of 61% (n = 60) were smokers.
With regard to LC characteristics, adenocarcinoma was the most frequent histology (68%, n = 66); 36% (n = 35) harbored at least 1 molecular alteration, 57% (n = 20) of them being amenable to targeted therapy. Median time to first diagnosis of LC was 6 years [1.7-28.4 yrs] in the whole cohort. In the subgroup of patients treated with boost, this time was reduced to 4 years [1.8-20.8 yrs] compared to 8 years for patients without boost [1.7-28.4 yrs] (p = 0.007). Boost, smoking, endocrine therapy, and age <50 yrs at BC radiation remained independent factors associated with a shorter time to first diagnosis of LC after BC treatment. Discussion: We report for the first time the potential impact of the boost, part of BC radiation treatment, on the risk of subsequent LC. The impact of low-dose radiation on lung parenchyma could explain this phenomenon, but the underlying pathophysiology is still under investigation. This work highlights the need for clinicians to identify patients at risk of developing a subsequent thoracic malignancy earlier after BC radiation, in order to implement personalized surveillance. abstract_id: PUBMED:3807556 When is a breast-preserving operation permitted? Breast-preserving operations in cases of breast cancer are permitted only under the following conditions: Tumor extension should not exceed stage pT1. Axillary lymph nodes have to be removed completely. Histological examination requires a competent pathologist. Carefully planned and consistently applied radiotherapy with a dosage of at least 60 Gy (6000 rad) is mandatory. abstract_id: PUBMED:1470142 Breast preserving operation for early breast cancer--clinical implications and biological aspects of residual tumor Among the 93 surgical specimens of stage I breast cancer treated by breast-preserving operation, surgical margins were less than 10 mm in 23 (25%) cases because of intraductal spread.
Such tumors with an extensive intraductal component, however, were frequently positive for estrogen receptor (ER). Possible risk factors for local failure have been studied in an attempt to exclude high-risk patients not suitable for a breast-preserving operation. The results so far obtained suggest that biological markers such as c-erb B-2 or PS2 proteins may provide useful information for selecting patients who are most likely to recur. In addition, adjuvant endocrine therapy may be recommended for patients treated by breast-preserving operation. abstract_id: PUBMED:9703786 Adjuvant radiotherapy in the treatment of breast cancer. Recent studies on the impact of adjuvant radiotherapy on local control, survival and quality of life in patients with breast cancer have been evaluated. Following these studies, breast conserving surgery with subsequent radiation therapy is the standard treatment for early breast cancer. In advanced disease, radiation is indicated for patients with T3 or T4 tumors and/or four or more positive lymph nodes. In general, radiotherapy leads to a 2-4 fold reduction of local relapse. In addition, there is considerable evidence that radiotherapy is associated with a higher overall survival. In elderly women, postoperative radiotherapy should not be withheld. An extensive intraductal component (EIC) or multicentric disease can be compatible with breast conserving therapy if surgery has been completed. Patients' self-assessment rated the cosmetic results of breast conserving therapy higher than mastectomy and breast reconstruction. abstract_id: PUBMED:27565077 Operation with less adjuvant therapy for elderly breast cancer. Background: The standard of care for elderly women with breast cancer remains controversial. The aim of this study was to clarify the management of elderly breast cancer patients who undergo surgery.
Materials And Methods: This retrospective single-center cohort study included 2276 breast cancer patients who underwent surgery between 1993 and 2014. The patients were divided into three groups according to age: ≤64 y (young), 65-74 y (older), and ≥75 y (elderly). Results: The elderly had more advanced stage disease at diagnosis (stage III and IV, 16.2%, 17.5%, and 22.1% for the young, older, and elderly groups, respectively). The elderly were more likely to undergo mastectomy (43.3%, 41.4%, and 50.7%, respectively), omit axillary operation (0.6%, 1.1%, and 9.3%, respectively), and forgo radiotherapy after breast-conserving surgery (radiotherapy use: 93.1%, 86.8%, and 29.1%, respectively). Endocrine therapy was widely used in all the groups (94.4%, 93.8%, and 90.1%, respectively), but the frequency of chemotherapy was lower in the elderly regardless of hormone receptor (HR) status (40.8%, 25.5%, and 9.3% in HR(+), 87.2%, 75.3%, and 39.5% in HR(-), respectively). Although the locoregional recurrence rate was higher in the elderly (4.2%, 3.4%, and 7.0% at 5 y, respectively; P = 0.028), there were no differences among groups in distant metastasis-free survival or breast cancer-specific survival. Conclusions: Although elderly patients had more advanced stages of cancer and received less treatment, there were no differences in survival. Omission of axillary dissection, radiation, and chemotherapy after operation may be an option for breast cancer patients aged ≥75 y. abstract_id: PUBMED:9101779 Is breast-saving therapy of breast carcinoma without postoperative radiotherapy justified? We report a retrospective analysis of 388 patients treated by breast-preserving surgery without postoperative radiotherapy. Lymphatic invasion by the carcinoma is one of the most important factors for local recurrence. Local recurrence shows a highly significant negative influence on metastasis-free survival and overall survival.
However, we could not define any (sub)group of patients in whom postoperative radiotherapy was not necessary. abstract_id: PUBMED:24002382 DEGRO practical guidelines: radiotherapy of breast cancer I: radiotherapy following breast conserving therapy for invasive breast cancer. Background And Purpose: The aim of the present paper is to update the practical guidelines for postoperative adjuvant radiotherapy of breast cancer published in 2007 by the breast cancer expert panel of the German Society for Radiooncology (Deutsche Gesellschaft für Radioonkologie, DEGRO). The present recommendations are based on a revision of the German interdisciplinary S-3 guidelines published in July 2012. Methods: A comprehensive survey of the literature concerning radiotherapy following breast conserving therapy (BCT) was performed using the search terms "breast cancer", "radiotherapy", and "breast conserving therapy". Data from recently published meta-analyses, recent randomized trials, and guidelines of international breast cancer societies, yielding new aspects compared to 2007, provided the basis for defining recommendations according to the criteria of evidence-based medicine. In addition to the more general statements of the DKG (Deutsche Krebsgesellschaft), this paper addresses indications, target definition, dosage, and technique of radiotherapy of the breast after conservative surgery for invasive breast cancer. Results: Among numerous reports on the effect of radiotherapy during BCT published since the last recommendations, the recent EBCTCG report constitutes the largest meta-analysis so far available. In a 15-year follow-up of 10,801 patients, whole breast irradiation (WBI) halves the average annual rate of disease recurrence (RR 0.52, 0.48-0.56) and reduces the annual breast cancer death rate by about one sixth (RR 0.82, 0.75-0.90), with a similar proportional, but different absolute benefit in prognostic subgroups (EBCTCG 2011).
Furthermore, there is growing evidence that risk-adapted dose augmentation strategies to the tumor bed as well as the implementation of high precision RT techniques (e.g., intraoperative radiotherapy) contribute substantially to a further reduction of local relapse rates. A main focus of ongoing research lies in partial breast irradiation strategies as well as WBI hypofractionation schedules. The potential of both in replacing normofractionated WBI has not yet been definitively clarified. Conclusion: After breast conserving surgery, no subgroup, even among low-risk patients, has yet been identified for whom radiotherapy can be safely omitted without compromising local control and, hence, cancer-specific survival. In most patients, this translates into an overall survival benefit. abstract_id: PUBMED:9608248 Radiotherapy of in situ carcinoma of the breast. EORTC Breast Group More and more breast cancers are being detected at an in situ stage. Carcinoma in situ (CIS) can be divided into two categories, depending on where it arises. Ductal CIS arises in the ducts. Malignant epithelial cells are found within the ducts but there is no invasion through the basement membrane. Large tumors tend to be multicentric along the ducts, resulting in underestimation of the treatment volume for tumorectomy and radiotherapy. Lobular CIS occurs in the lobules and may extend into the ducts in the same way that ductal CIS may extend into the lobules, resulting in occasional difficulty in distinguishing between the two. Preinvasive ductal carcinoma or DCIS is being diagnosed more often and now constitutes 22% of breast cancers detected by mammography. Many of these lesions are nonpalpable. Standard therapy has been total mastectomy. Breast-conserving operations such as lumpectomy alone will yield a high recurrence rate of 20%, and half of these recurrences will reappear as invasive cancer. DCIS is not proven to be radioresistant. The optimal dose for radiotherapy of DCIS is unknown.
Recent results from the NSABP B-17 trial of lumpectomy with and without radiotherapy suggested that lumpectomy with postoperative radiation therapy gave an acceptable relapse-free survival of 83%, compared with 72% after surgery alone (p = 0.00004), and low recurrence rates of 1.9% vs 4.8%. The results of the corresponding EORTC studies will be published in 1998. Answer: Yes, an operation and radiotherapy in the breast-preserving therapy of breast carcinoma can modify the meaningfulness of subsequent mammographies. The combined therapy can lead to alterations of the skin and parenchyma, which may be visible in mammograms and can potentially make the evaluation of these mammographies difficult with regard to recurrence. However, major therapy-induced parenchymal changes that can be registered by mammography are relatively rare. Therefore, mammography retains its high value in post-treatment follow-up, especially as it is the only diagnostic method able to demonstrate microcalcifications suspicious for malignancy (PUBMED:8765349).
Instruction: Is FDG PET/CT cost-effective for pre-operation staging of potentially operative non-small cell lung cancer? Abstracts: abstract_id: PUBMED:22698711 Is FDG PET/CT cost-effective for pre-operation staging of potentially operative non-small cell lung cancer? - From Chinese healthcare system perspective. Objectives: The remarkable morbidity and mortality of lung cancer in the large population pose major economic challenges to the Chinese healthcare system. This study aims to assess the cost-effectiveness of fluorodeoxyglucose positron emission tomography (FDG PET)/CT for staging patients with non-small cell lung cancer (NSCLC) in China. Methods: Management of potentially operative NSCLC was modeled by decision analysis employing data from China. The strategies compared were conventional CT staging (strategy A), additional PET/CT in all patients (strategy B) or only in patients with normal-sized lymph nodes on CT (strategy C). Published medical data for Chinese patients were extracted. The costs corresponded to reimbursement by the Chinese public health provider in 2010. Uncertainty of the employed parameters was assessed in sensitivity analysis. Results: Taking strategy A as baseline, the incremental cost-effectiveness ratio (ICER) of strategy B was 23,800 RMB ($3500) per life year saved, which was acceptable in view of a developing country such as China, while strategy C exhibited some loss of life years. Sensitivity analysis suggested the ICER (B-A) was raised more markedly by a deterioration of PET specificity than by that of its sensitivity. The ICER was turned negative by PET specificity lower than 0.79. Economically, PET cost was proportional to the ICER (B-A), and a decrease in palliative therapy cost could reduce both the ICER and the overall cost. Conclusions: The PET/CT strategy is potentially cost-effective for management of NSCLC in China. Patients with nodal-positive CT results should not be excluded from further PET/CT.
Furthermore, maintaining high specificity of PET in clinical scenarios is crucial. Prospective trials are warranted to transfer these results into policy making. abstract_id: PUBMED:36584217 Thoracic staging in patients with non-small cell lung cancer: A systematic review and meta-analysis on diagnostic accuracy of [18F]FDG PET/MRI and [18F]FDG PET/CT. Background: This study aimed to evaluate the diagnostic accuracy of [18F]FDG PET/MR versus [18F]FDG PET/CT in the thoracic staging of patients with non-small cell lung cancer (NSCLC). Material And Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed in conducting the present study. All available research was collected through the Embase (Elsevier), PubMed, and Cochrane Library databases up to June 2021. Only studies covering both [18F]FDG PET/MRI and [18F]FDG PET/CT techniques in the same group were included. Statistical analysis was done using Stata v.12. Results: The overall accuracy of [18F]FDG PET/CT in T and N staging was 92% (95% CI: 89-95, I2: 93.4%) and 78% (95% CI: 74-82, I2: 98.5%), respectively. The corresponding rates for [18F]FDG PET/MRI were 91% (95% CI: 88-94, I2: 96.5%) and 89% (95% CI: 84-94, I2: 88.1%), respectively. Conclusions: The present meta-analysis showed that [18F]FDG PET/CT and [18F]FDG PET/MRI exhibit relatively the same performance in detecting N and T stages in patients with NSCLC. Thus, [18F]FDG PET/MRI can be a worthy alternative to [18F]FDG PET/CT in the diagnosis of advanced NSCLC in the chest area, more specifically in N-staging, since it provides higher soft-tissue contrast. There is a need for more reliable research comparing the diagnostic performance of these imaging techniques and various optimized [18F]FDG PET/MRI protocols. abstract_id: PUBMED:28920249 FDG PET/CT for the preoperative nodal staging of non-small cell lung cancer in a tuberculosis-endemic country: Are maximum standardized uptake values useful?
Background: The aim of this study was to determine an optimum standardized uptake value threshold for identifying nodal metastasis in non-small cell lung cancer (NSCLC) patients using Fluorine-18 2-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography/computed tomography (PET/CT) in Taiwan, a tuberculosis-endemic country. The variation in standardized uptake values of nodal metastasis among different NSCLC histological subtypes was also evaluated. Methods: We retrospectively reviewed 75 NSCLC patients who had received FDG PET/CT before surgery. The diagnostic accuracy of FDG PET/CT for preoperative nodal staging was evaluated against histopathologic findings. Results: A total of 316 nodal stations were evaluated. The sensitivity and specificity of FDG PET/CT for nodal staging were 58.6% and 81.8%, respectively, using an SUV cut-off of 2.6. With regard to the levels of mean SUVmax in true-positive and false-positive groups, there was no significant difference among different histological subtypes. Conclusion: The present study demonstrated that FDG PET/CT for pre-operative nodal staging using SUVmax > 2.6 is a useful tool (with a higher specificity and a higher negative predictive value) to rule out the possibility of metastatic lymphadenopathy in operable patients with NSCLC. abstract_id: PUBMED:31868151 FDG PET/CT in the Staging of Lung Cancer. Background: Accurate staging is crucial for the proper management of patients with nonsmall cell lung cancer, especially for choosing the best treatment strategy. Different imaging methods are used to stage patients with non-small cell lung cancer. In the last two decades, FDG PET/CT has been carried out in almost all major hospitals around the world in this setting. Objective: The aim of this paper is to focus on the value of integrated FDG PET/CT in the TNM staging of non-small cell lung cancer.
Methods: A non-systematic review of the literature was performed in order to identify all papers about the role of FDG PET/CT in the evaluation of non-small cell lung cancer and to highlight the value of FDG PET/CT in this setting. Results: A large body of data, including randomized controlled trials, is now available on this topic. FDG PET/CT is of limited added value in the characterization of T status, but it increases the diagnostic accuracy for the assessment of the nodal status. The main advantage of FDG PET/CT over conventional imaging methods is its higher sensitivity in identifying extra-thoracic metastases, especially bone and adrenal lesions. Conclusion: PET/CT with FDG should be included in the diagnostic work-up of patients with lung cancer, because it provides useful information for appropriate therapy. abstract_id: PUBMED:35543731 Higher accuracy of [68 Ga]Ga-DOTA-FAPI-04 PET/CT compared with 2-[18F]FDG PET/CT in clinical staging of NSCLC. Purpose: This study aimed to explore the clinical staging performance of [68 Ga]Ga-DOTA-FAPI-04 PET/CT compared with that of 2-[18F]FDG PET/CT in non-small cell lung cancer (NSCLC) patients lesion by lesion. Methods: A total of 134 diagnosed or suspected NSCLC patients were enrolled in the prospective study (ChiCTR2000038080); they received paired 2-[18F]FDG PET/CT and [68 Ga]Ga-DOTA-FAPI-04 PET/CT. Of these patients, a retrospective analysis of 74 NSCLC patients with pathological results was conducted for primary tumor (T) diagnosis, lymph node (N) staging, and metastatic lesion (M) staging. The imaging characteristics of the lung nodules and suspected metastases were obtained and analyzed, and the staging performance of 2-[18F]FDG PET/CT and [68 Ga]Ga-DOTA-FAPI-04 PET/CT was compared. Results: For T diagnosis, [68 Ga]Ga-DOTA-FAPI-04 showed better diagnostic performance than 2-[18F]FDG in 79 lung nodules of 72 patients, especially for nonsolid and small-dimension adenocarcinoma nodules.
For N staging, 98 lymph nodes (LNs) with pathological results in 37 patients were analyzed. The SUVmax of [68 Ga]Ga-DOTA-FAPI-04 in the nonmetastatic LNs was significantly lower than that in the metastatic LNs. Regarding metastatic LN identification, the calculated optimum cut-off value of [68 Ga]Ga-DOTA-FAPI-04 SUVmax was 5.5, and the diagnostic accuracy using [68 Ga]Ga-DOTA-FAPI-04 and 2-[18F]FDG criteria was 94% and 30%, respectively (P < 0.001). For M staging, [68 Ga]Ga-DOTA-FAPI-04 identified more lesions than 2-[18F]FDG (257 vs. 139 lesions) in 14 patients with multiple metastases. Overall, the staging accuracy of [68 Ga]Ga-DOTA-FAPI-04 was better than that of 2-[18F]FDG in 52 patients with different pathological stages [43/52 (82.7%) vs. 27/52 (51.9%), P = 0.001]. Conclusion: Compared with 2-[18F]FDG PET/CT, [68 Ga]Ga-DOTA-FAPI-04 PET/CT demonstrated better staging performance in NSCLC patients with different pathological stages, especially those with localized disease. abstract_id: PUBMED:33815538 Clinical Significance of Quantitative FDG PET/CT Parameters in Non-Small Cell Lung Cancer Patients. Background: An initial evaluation of non-small cell lung cancer (NSCLC) patients with an 18F-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) scan can modify treatment planning. We investigated the clinical significance of FDG PET/CT quantitative parameters (QPs) in NSCLC patients. Materials And Methods: We included 125 NSCLC patients who underwent an initial staging FDG PET/CT scan. The primary tumor (T), regional lymph node metastases (N), and distant metastases (M) were evaluated on FDG PET/CT images. QPs, including the maximum standardized uptake value (SUVmax), metabolic tumor volume (MTV), and total lesion glycolysis (TLG), were calculated separately for each T, N, and M lesion and also for the whole body.
Statistical analysis was performed with SPSS version 22 to evaluate the clinical significance of PET/CT QPs concerning primary tumor pathology characteristics, initial tumor stage, and patients' prognosis. Results: We followed the patients for 19.28 (±11.42) months. Considering primary tumor pathology, there was a significant difference in FDG PET/CT QPs, including primary tumor SUVmax (p=0.00), metastases SUVmax (p=0.014), whole-body MTV (p=0.045), and whole-body TLG (p=0.002). There was also a significant difference in QPs, including primary tumor SUVmax (p=0.00) and regional lymph node metastases SUVmax (p=0.048), when accounting for initial tumor stage. There was a significant prognostic value for the whole-body TLG (p=0.01), and a cut-off point of 568 was identified to differentiate better from worse survival outcomes. Conclusion: We demonstrated a statistically significant difference in FDG PET/CT QPs when accounting for primary NSCLC pathology characteristics and initial stage, as well as patients' prognosis, and recommend incorporating QP values into clinical PET/CT reports. abstract_id: PUBMED:24504054 Thoracic staging in lung cancer: prospective comparison of 18F-FDG PET/MR imaging and 18F-FDG PET/CT. Unlabelled: Therapeutic decisions in non-small cell lung cancer (NSCLC) patients depend on the tumor stage. PET/CT with (18)F-FDG is widely accepted as the diagnostic standard of care. The purpose of this study was to compare a dedicated pulmonary (18)F-FDG PET/MR imaging protocol with (18)F-FDG PET/CT for primary and locoregional lymph node staging in NSCLC patients using histopathology as the reference. Methods: Twenty-two patients (12 men, 10 women; mean age ± SD, 65.1 ± 9.1 y) with histopathologically confirmed NSCLC underwent (18)F-FDG PET/CT, followed by (18)F-FDG PET/MR imaging, including a dedicated pulmonary MR imaging protocol.
T and N staging according to the seventh edition of the American Joint Committee on Cancer staging manual was performed by 2 readers in separate sessions for (18)F-FDG PET/CT and PET/MR imaging, respectively. Results from histopathology were used as the standard of reference. The mean and maximum standardized uptake values (SUV(mean) and SUV(max), respectively) and maximum diameter of the primary tumor were measured and compared in (18)F-FDG PET/CT and PET/MR imaging. Results: PET/MR imaging and (18)F-FDG PET/CT agreed on T stages in 16 of 16 patients (100%). All patients were correctly staged by (18)F-FDG PET/CT and PET/MR (100%), compared with histopathology. There was no statistically significant difference between (18)F-FDG PET/CT and (18)F-FDG PET/MR imaging for lymph node metastases detection (P = 0.48). For definition of thoracic N stages, PET/MR imaging and (18)F-FDG PET/CT were concordant in 20 of 22 patients (91%). PET/MR imaging determined the N stage correctly in 20 of 22 patients (91%). (18)F-FDG PET/CT determined the N stage correctly in 18 of 22 patients (82%). The mean differences for SUV(mean) and SUV(max) of NSCLC in (18)F-FDG PET/MR imaging and (18)F-FDG PET/CT were 0.21 and -5.06. These differences were not statistically significant (P > 0.05). The SUV(mean) and SUV(max) measurements derived from (18)F-FDG PET/CT and (18)F-FDG PET/MR imaging exhibited a high correlation (R = 0.74 and 0.86, respectively; P < 0.0001). Size measurements showed an excellent correlation between (18)F-FDG PET/MR imaging and (18)F-FDG PET/CT (R = 0.99; P < 0.0001). The lower and upper limits of agreement between (18)F-FDG PET/CT and (18)F-FDG PET/MR imaging using Bland-Altman analysis were -2.34 to 3.89 for SUV(mean), -7.42 to 4.40 for SUV(max), and -0.59 to 0.83 for the tumor size, respectively.
Conclusion: (18)F-FDG PET/MR imaging using a dedicated pulmonary MR imaging protocol, compared with (18)F-FDG PET/CT, does not provide advantages in thoracic staging in NSCLC patients. abstract_id: PUBMED:32525121 Is FDG-PET/CT used correctly in the combined approach for nodal staging in NSCLC patients? Background: The most widely accepted approach nowadays in nodal staging of non-small cell lung cancer (NSCLC) is the combined use of 18-Fluorodeoxyglucose-positron emission tomography/computed tomography (FDG-PET/CT) and endobronchial ultrasound-transbronchial needle aspiration (EBUS-TBNA). However, this approach may not be sufficient, especially for early stages. Aims: Our aim was to assess whether more satisfactory results can be obtained with standardized uptake value maximum lymph node/standardized uptake value mean mediastinal blood pool (SUVmax LN/SUVmean MBP), SUVmax LN/Primary tumor, or a novel cut-off value for SUVmax in this special group. Subjects And Methods: Patients with diagnosed NSCLC who underwent FDG-PET/CT were reviewed retrospectively. A total of 168 LNs of 52 early-stage NSCLC patients were evaluated. The LNs identified in surgery/pathology reports were found in the FDG-PET/CT images. Anatomic and metabolic parameters were measured. Statistical analysis was performed using MedCalc Statistical Software. Results: Regardless of LN size, the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of SUVmax >2.5 were 91.5%, 65.9%, 58.2%, and 95.1%, respectively. The optimum cut-off value of SUVmax was >4.0. Sensitivity, specificity, PPV, and NPV were found to be 81.0%, 90.0%, 81.0%, and 90.0%, respectively. The optimum cut-off value of SUVmax LN/SUVmean MBP was >1.71. Sensitivity, specificity, PPV, and NPV were found to be 94.7%, 80.0%, 71.1%, and 96.7%, respectively. The optimum cut-off value of SUVmax LN/Primary tumor was >0.28. Sensitivity, specificity, PPV, and NPV were found to be 81.1%, 85.1%, 72.9%, and 90.1%, respectively.
Conclusion: SUVmax LN/SUVmean MBP >1.71 has a higher PPV than the currently used criteria, with similar NPV and sensitivity. This can increase the accuracy of the combined approach. In this way, faster nodal staging and treatment decisions, cost savings for the healthcare system, and time savings for medical professionals can be obtained. abstract_id: PUBMED:37438928 Accuracy of nodal staging by 18F-FDG-PET/CT in limited disease small-cell lung cancer. Background: Small-cell lung cancer (SCLC) is highly aggressive and nearly incurable in most cases. The most important prognostic factor is the status of the mediastinal lymph nodes. Only a small proportion of patients can be diagnosed at early stages and directed to curative multimodal treatment. Therefore, accuracy of nodal staging by (18F)-Fluoro-2-deoxyglucose (FDG) positron emission tomography (PET) computed tomography (18F-FDG-PET/CT) in (very) limited disease SCLC, although not well investigated, is highly important. Methods: Treatment-naive, non-bulky patients treated or diagnosed with SCLC between June 2012 and April 2020 with complete data including FDG-PET/CT and invasive mediastinal staging were retrospectively analyzed (n = 19). Sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), and accuracy of mediastinal lymph node staging by 18F-FDG-PET/CT were calculated. Results: The FDG-PET/CT showed a sensitivity of 91%, and the specificity was calculated as 87.5%. In this cohort, the disease prevalence in lymph nodes was 58% (n = 11). The positive predictive value was 91%, the NPV 88%, and the accuracy 89%. One patient was upstaged from single-level N2 to multilevel N2. In one patient, upstaging in invasive staging was performed from N2 to N3, and one patient was downstaged from N1 to N0. Conclusions: FDG-PET/CT is a valuable tool for the detection of distant metastases, but in mediastinal staging of SCLC some limitations might remain. Invasive methods remain the gold standard.
Therefore, the mediastinal lymph nodal status of patients with SCLC screened for multimodal treatment should be further evaluated by additional invasive techniques to verify the exact N-staging and to optimize treatment stratification. abstract_id: PUBMED:36317010 Value of 18F FDG-PET/CT parameters on long-term follow-up for patients with non-small cell lung cancer. Objectives: The purpose of this study was to investigate the value of PET/CT in the preoperative staging of non-small cell lung cancer in predicting long-term survival and diagnostic performance, validated by histopathology following surgical resection. Methods: Between 02/2009 and 08/2011, 255 patients with non-small cell lung cancer were included in this single-center prospective study. All underwent 18F FDG-PET/CT for pre-operative staging, and in 243 patients complete surgical resection was possible. Regarding lymph node involvement and extrathoracic metastases, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated using the histopathological staging as reference. Median follow-up for censored patients was 9.1 years. Results: The overall 5-year survival rate of all patients was 55.6%, and of patients who had complete surgical resection it was 58.2%. In multivariate analysis of all surgically resected patients, lymph node involvement (p=0.029) and age >61 years (p<0.001) were significant independent prognostic factors. SUVmax and SUVmean cut-offs between SUV 2 and 11, however, were not associated with better or worse survival. The PET-CT sensitivity, specificity, positive predictive value and negative predictive value for predicting lymph node involvement were 57, 95, 88, and 76%, respectively. Furthermore, sensitivity, specificity, positive predictive value, and negative predictive value for detecting extrathoracic metastases were 100, 58, 98, and 100%, respectively.
Conclusions: In this study, tumor 18F FDG-uptake values did not provide additional prognostic information. Age>61 years and lymph node metastasis were associated with worse long-term survival in surgically resected patients. 18F FDG-PET/CT scans allow for improved patient selection. However, in staging mediastinal lymph nodes, there is a high rate of false positives and false negatives, suggesting that tissue biopsy is still indicated in many cases. Answer: Based on the provided abstracts, FDG PET/CT appears to be cost-effective for pre-operation staging of potentially operative non-small cell lung cancer (NSCLC) in certain contexts. A study from the Chinese healthcare system perspective found that adding FDG PET/CT for all patients (strategy B) had an incremental cost-effectiveness ratio (ICER) of 23,800 RMB ($3500) per life year saved, which was considered acceptable for a developing country like China. The study concluded that the PET/CT strategy is potentially cost-effective for managing NSCLC in China (PUBMED:22698711). Another study compared the diagnostic accuracy of [18F]FDG PET/MRI and [18F]FDG PET/CT in thoracic staging of NSCLC patients. It found that both imaging techniques exhibited relatively the same performance in detecting N and T stages, suggesting that [18F]FDG PET/MRI could be a worthy alternative to [18F]FDG PET/CT, especially in N-staging due to higher soft-tissue contrast (PUBMED:36584217). A study from Taiwan, a tuberculosis-endemic country, showed that FDG PET/CT for pre-operative nodal staging using an SUVmax cut-off of 2.6 is a useful tool with higher specificity and negative predictive value to rule out metastatic lymphadenopathy in operable NSCLC patients (PUBMED:28920249). Furthermore, FDG PET/CT is considered to have limited added value in the characterization of T status but increases diagnostic accuracy for the assessment of nodal status and is particularly sensitive in identifying extra-thoracic metastases (PUBMED:31868151). 
A study comparing [68 Ga]Ga-DOTA-FAPI-04 PET/CT with 2-[18F]FDG PET/CT found that the former demonstrated better staging performance in NSCLC patients, especially those with localized disease (PUBMED:35543731). Quantitative parameters from FDG PET/CT scans have been shown to have clinical significance in NSCLC patients, correlating with primary tumor pathology characteristics, initial tumor stage, and patient prognosis (PUBMED:33815538). In summary, the abstracts suggest that FDG PET/CT is a valuable tool for pre-operative staging of NSCLC, with potential cost-effectiveness and clinical utility in various healthcare settings. However, the cost-effectiveness and utility may vary depending on the specific healthcare system, the stage of the disease, and the availability of alternative imaging modalities such as PET/MRI. The importance of maintaining high specificity of PET in clinical scenarios is emphasized to ensure cost-effectiveness (PUBMED:22698711). Additionally, while FDG PET/CT is a valuable tool for detecting distant metastases, its limitations in mediastinal staging suggest that invasive methods remain the gold standard, and further evaluation by invasive techniques may be necessary to optimize treatment stratification (PUBMED:37438928). Lastly, a study investigating the long-term follow-up value of PET/CT parameters in NSCLC patients found that while 18F FDG-uptake values did not provide additional prognostic information, PET/CT scans were helpful in patient selection. However, the study also noted a high rate of false positives and negatives in staging mediastinal lymph nodes, indicating that tissue biopsy may still be required in many cases (PUBMED:36317010). In conclusion, the evidence suggests that FDG PET/CT can be cost-effective for pre-operative staging of NSCLC, particularly when it leads to improved patient selection and treatment planning.
However, the decision to use FDG PET/CT should be based on a careful consideration of the specific clinical context, the accuracy and specificity required, and the cost implications within the particular healthcare system.
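The predictive values quoted in these abstracts are tied to disease prevalence: given sensitivity, specificity, and prevalence, PPV and NPV follow from Bayes' rule. A minimal illustrative sketch (not taken from any of the studies' own analyses), plugging in the figures reported for SCLC nodal staging in PUBMED:37438928 (sensitivity 91%, specificity 87.5%, nodal disease prevalence 58%):

```python
# Bayes' rule check: predictive values from sensitivity, specificity
# and prevalence. The se/sp/prev values below are those reported in
# PUBMED:37438928; the functions themselves are generic.

def ppv(se: float, sp: float, prev: float) -> float:
    """Positive predictive value = P(disease | positive test)."""
    return se * prev / (se * prev + (1 - sp) * (1 - prev))

def npv(se: float, sp: float, prev: float) -> float:
    """Negative predictive value = P(no disease | negative test)."""
    return sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)

se, sp, prev = 0.91, 0.875, 0.58
print(f"PPV = {ppv(se, sp, prev):.2f}")  # 0.91, matching the reported 91%
print(f"NPV = {npv(se, sp, prev):.2f}")  # 0.88, matching the reported 88%
```

With the same test characteristics but a prevalence of 20%, PPV falls to about 0.65 while NPV rises, which is one reason predictive values reported in one cohort transfer poorly to populations with a different pretest probability.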
Instruction: Could prolactin receptor gene polymorphism play a role in pathogenesis of breast carcinoma? Abstracts: abstract_id: PUBMED:15119991 Could prolactin receptor gene polymorphism play a role in pathogenesis of breast carcinoma? Objective: Constitutive activation of various hormone and growth factor receptors is newly recognised as a common cause of tumour development. This study investigated the presence of any mutation or polymorphism of prolactin receptor (PRLR) in 38 patients with breast cancer. Research Methods: Genomic DNA was extracted and PCR amplification was carried out for exons 1-10 of PRLR from tumoral and adjacent non-cancerous breast tissue of tumour specimens from 38 breast cancer patients. PCR products were analysed by SSCP and automatic sequencing for mutations. Results: For the first time, an A150C (Leu-->Ile) transversion at exon 6 of PRLR was detected in tumour tissues, in adjacent non-cancerous breast tissues, and in blood samples of two (5.3%) out of 38 patients with breast cancer. In contrast to this finding, no polymorphism of PRLR was found in blood samples of 100 normal individuals. Conclusion: Polymorphism of prolactin receptors might play a role in mammary carcinogenesis as a consequence of intracellular changes of PRLR signalling. abstract_id: PUBMED:21125332 Association of gene polymorphisms in prolactin and its receptor with breast cancer risk in Taiwanese women. Given the essential role of prolactin (PRL) and its receptor (PRLR) in breast tumorigenesis, we investigated whether single nucleotide polymorphisms (SNPs) in the PRL and PRLR genes were associated with breast cancer in Taiwanese women. A total of 160 breast cancer patients and 336 unrelated control Taiwanese women were enrolled in this study. Three SNPs (rs1341238, rs2244502 and rs3756824) in the PRL gene and one SNP (rs10941235) in the PRLR gene were genotyped using allele-specific polymerase chain reaction and DNA sequencing.
We showed that the PRLR SNP rs10941235 was associated with breast cancer (P < 0.05) and its genotype frequencies in the breast cancer group were significantly associated (P < 0.05) with the levels of cancer antigen 15-3, a serum biomarker of breast cancer. The PRL SNP rs3756824 was significantly associated with breast cancer metastasis (P < 0.05). These findings suggest that the polymorphism rs10941235 in the PRLR gene is associated with breast cancer and cancer antigen 15-3 levels in Taiwanese women. abstract_id: PUBMED:26980025 A favorable role of prolactin in human breast cancer reveals novel pathway-based gene signatures indicative of tumor differentiation and favorable patient outcome. Prolactin (PRL) hormone is known to play a key role in mammary gland development allowing for successful lactation. The role of this hormone in breast tumorigenesis is still controversial. Here, we evaluated PRL protein and gene expression levels in human breast cancer using a tissue microarray of 100 breast cancer cases, as well as different publicly available human breast cancer gene profiling databases. Interestingly, our results showed a significant downregulation of PRL expression in breast cancer compared to normal adjacent tissue. Moreover, expression of PRL was associated with more differentiated tumors, early stage, smaller tumor size and absence of distant metastasis. Importantly, our results indicate that higher PRL mRNA levels are significantly associated with prolonged relapse-free survival (RFS) in breast cancer patients (P=3.7 x 10(-9)). Additionally, examining expression of a PRL pathway-based gene signature composed of PRL, PRLR, Jak2 and Stat5a showed a significant association with more differentiated tumors (P<.00001), prolonged RFS (P=1.8 x 10(-6)) as well as overall survival (OS) (P=.0026). As well, our results indicate that the PRL-directed differentiation program in mammary epithelial cells offers a good prognosis in human breast cancer.
Indeed, expression of a gene signature composed of PRL-upregulated genes showed a significant association with well-differentiated tumors (P<.00001), whereas expression of a gene signature composed of PRL-downregulated genes showed a significant association with shortened distant metastasis-free survival (DMFS) (P=.0086). Altogether, our results highlight that PRL hormone and its signaling pathway may play an important role in maintaining tumor differentiation state and in turn better patient outcome. abstract_id: PUBMED:36841367 The role of the NDRG1 in the pathogenesis and treatment of breast cancer. Breast cancer (BC) is the leading cause of cancer death in women. This disease is heterogeneous, with clinical subtypes being estrogen receptor-α (ER-α) positive, having human epidermal growth factor receptor 2 (HER2) overexpression, or being triple-negative for ER-α, progesterone receptor, and HER2 (TNBC). The ER-α positive and HER2 overexpressing tumors can be treated with agents targeting these proteins, including tamoxifen and pertuzumab, respectively. Despite these treatments, resistance and metastasis are problematic, while TNBC is challenging to treat due to the lack of suitable targets. Many studies examining BC and other tumors indicate a role for N-myc downstream-regulated gene-1 (NDRG1) as a metastasis suppressor. The ability of NDRG1 to inhibit metastasis is due, in part, to the inhibition of the initial step in metastasis, namely the epithelial-to-mesenchymal transition. Paradoxically, there are also reports of NDRG1 playing a pro-oncogenic role in BC pathogenesis. The oncogenic effects of NDRG1 in BC have been reported to relate to lipid metabolism or the mTOR signaling pathway. The molecular mechanism(s) of how NDRG1 regulates the activity of multiple signaling pathways remains unclear. Therapeutic strategies that up-regulate NDRG1 have been developed and include agents of the di-2-pyridylketone thiosemicarbazone class.
These compounds target oncogenic drivers in BC cells, suppressing the expression of multiple key hormone receptors including ER-α, progesterone receptor, androgen receptor, and prolactin receptor, and can also overcome tamoxifen resistance. Considering the varying role of NDRG1 in BC pathogenesis, further studies are required to examine what subset of BC patients would benefit from pharmacopeia that up-regulate NDRG1. abstract_id: PUBMED:20658264 Amplification of the prolactin receptor gene in mammary lobular neoplasia. The identification of lobular carcinoma in situ (LCIS) in a patient's specimen confers an appreciably increased risk of development of future invasive mammary carcinoma. However, the study of LCIS presents a challenge as it is usually only recognized in fixed specimens. Recent advances in high throughput genomics have made possible comprehensive copy number analysis of lesions such as this. Using array comparative genomic hybridization (aCGH), we characterized eight cases of lobular carcinoma (four invasive and four non-invasive) from microdissected samples of archival specimens and validated our results by quantitative real-time PCR (qRT-PCR). Immunohistochemistry (IHC) was performed on an independent set of 80 in situ ductal (DCIS) and lobular breast lesions to confirm our results. Amplification of the prolactin receptor gene (PRLr) was identified in 4/4 cases of LCIS by aCGH. We confirmed this amplification by qRT-PCR and demonstrated PRLr expression in 29/40 (73%) cases of lobular neoplasia by IHC. Amplification of PRLr was neither detected in 10 cases of DCIS nor in 5 areas of normal breast tissue by qRT-PCR and only 14/40 (35%) cases of DCIS showed PRLr expression by IHC (P = 0.0008). Our study suggests the prolactin receptor gene is a molecular target that may be important in the pathogenesis and progression of lobular neoplasia.
Investigation of the status of this gene in cases of DCIS has indicated that it may not be as important in the progression of this type of breast cancer, supporting the view that lobular and ductal carcinomas may evolve along separate pathways. abstract_id: PUBMED:16434456 Association of prolactin and its receptor gene regions with familial breast cancer. Context: The contribution of prolactin (PRL) through its receptor (PRLR) to the pathogenesis and progression of human mammary tumors has received recent attention. Objective: We investigated whether genetic variation in the PRL and PRLR genes is associated with the risk of breast cancer (BC). Design: We conducted a case-control study with a total of seven single nucleotide polymorphisms (SNPs). Setting: The study was conducted at an academic research laboratory and university clinics. Patients And Other Participants: A total of 441 German familial, unrelated BC cases and 552 controls matched by age, ethnicity, and geographical region participated in the study. Intervention(s): There were no interventions. Main Outcome Measure(s): SNP genotype and haplotype distributions and haplotype interactions were correlated with the risk of BC. Results: Two SNPs (rs1341239 and rs12210179) within the PRL promoter regions were significantly associated with increased risk in homozygotes for the variant alleles [odds ratio (OR), 1.67 and 95% confidence interval (CI), 1.11-2.50; and OR, 2.09 and 95% CI, 1.23-3.52, respectively]. The PRL haplotype containing the variant alleles of the promoter SNPs significantly increased the risk of BC (OR 1.42, 95%CI 1.07-1.90). A PRLR haplotype was associated with a significant decrease in BC risk (OR 0.69, 95% CI 0.54-0.89). An increasing number of PRL and PRLR risk haplotypes led to a significant trend of increasing risk for BC (chi(2) = 12.15; P = 0.007). Conclusions: Genetic variation in the PRL and PRLR genes was shown to influence BC risk.
Additional studies are needed to further clarify the role of the PRL and PRLR genes in the risk of BC. abstract_id: PUBMED:20962042 Prolactin inhibits activity of pyruvate kinase M2 to stimulate cell proliferation. Mitogenic and prosurvival effects underlie the tumorigenic roles of prolactin (PRL) in the pathogenesis of breast cancer. PRL signaling is mediated through its receptor (PRLr). A proteomics screen identified the pyruvate kinase M2 (PKM2), a glycolytic enzyme known to play an important role in tumorigenesis, as a protein that constitutively interacts with PRLr. Treatment of cells with PRL inhibited pyruvate kinase activity and increased the lactate content in human cells in a manner that was dependent on the abundance of PRLr, activation of Janus kinase 2, and tyrosine phosphorylation of the intracellular domain of PRLr. Knockdown of PKM2 attenuated PRL-stimulated cell proliferation. The extent of this proliferation was rescued by the knock-in of the wild-type PKM2 but not of its mutant insensitive to PRL-mediated inhibition. We discuss a hypothesis that the inhibition of PKM2 by PRL contributes to the PRL-stimulated cell proliferation. abstract_id: PUBMED:22037779 Life stage differences in mammary gland gene expression profile in non-human primates. Breast cancer (BC) is the most common malignancy of women in the developed world. To better understand its pathogenesis, knowledge of normal breast development is crucial, as BC is the result of dysregulation of physiologic processes. The aim of this study was to investigate the impact of reproductive life stages on the transcriptional profile of the mammary gland in a primate model. Comparative transcriptomic analyses were carried out using breast tissues from 28 female cynomolgus macaques (Macaca fascicularis) at the following life stages: prepubertal (n = 5), adolescent (n = 4), adult luteal (n = 5), pregnant (n = 6), lactating (n = 3), and postmenopausal (n = 5).
Mammary gland RNA was hybridized to Affymetrix GeneChip(®) Rhesus Macaque Genome Arrays. Differential gene expression was analyzed using ANOVA and cluster analysis. Hierarchical cluster analysis revealed distinct separation of life stage groups. More than 2,225 differentially expressed mRNAs were identified. Gene families or pathways that changed across life stages included those related to estrogen and androgen (ESR1, PGR, TFF1, GREB1, AR, 17HSDB2, 17HSDB7, STS, HSD11B1, AKR1C4), prolactin (PRLR, ELF5, STAT5, CSN1S1), insulin-like growth factor signaling (IGF1, IGFBP1, IGFBP5), extracellular matrix (POSTN, TGFB1, COL5A2, COL12A1, FOXC1, LAMC1, PDGFRA, TGFB2), and differentiation (CD24, CD29, CD44, CD61, ALDH1, BRCA1, FOXA1, POSTN, DICER1, LIG4, KLF4, NOTCH2, RIF1, BMPR1A, TGFB2). Pregnancy and lactation displayed distinct patterns of gene expression. ESR1 and IGF1 were significantly higher in the adolescent compared to the adult animals, whereas differentiation pathways were overrepresented in adult animals and pregnancy-associated life stages. Few individual genes were distinctly different in postmenopausal animals. Our data demonstrate characteristic patterns of gene expression during breast development. Several of the pathways activated during pubertal development have been implicated in cancer development and metastasis, supporting the idea that other developmental markers may have application as biomarkers for BC. abstract_id: PUBMED:18779591 Identification of a gain-of-function mutation of the prolactin receptor in women with benign breast tumors. There is currently no known genetic disease linked to prolactin (Prl) or its receptor (PrlR) in humans. Given the essential role of this hormonal system in breast physiology, we reasoned that genetic anomalies of Prl/PrlR genes may be related to the occurrence of breast diseases with high proliferative potential. 
Multiple fibroadenomas (MFA) are benign breast tumors which appear most frequently in young women, including at puberty, when Prl has well-recognized proliferative actions on the breast. In a prospective study involving 74 MFA patients and 170 control subjects, we identified four patients harboring a heterozygous single nucleotide polymorphism in exon 6 of the PrlR gene, encoding Ile(146)-->Leu substitution in its extracellular domain. This sole substitution was sufficient to confer constitutive activity to the receptor variant (PrlR(I146L)), as assessed in three reconstituted cell models (Ba/F3, HEK293 and MCF-7 cells) by Prl-independent (i) PrlR tyrosine phosphorylation, (ii) activation of signal transducer and activator of transcription 5 (STAT5) signaling, (iii) transcriptional activity toward a Prl-responsive reporter gene, and (iv) cell proliferation and protection from cell death. Constitutive activity of PrlR(I146L) in the breast sample from a patient was supported by increased STAT5 signaling. This is a unique description of a functional mutation of the PrlR associated with a human disease. Hallmarks of constitutive activity were all reversed by a specific PrlR antagonist, which opens potential therapeutic approaches for MFA, or any other disease that could be associated with this mutation in future. abstract_id: PUBMED:20601496 Prolactin and oestrogen synergistically regulate gene expression and proliferation of breast cancer cells. The pituitary hormone prolactin (PRL) plays an important role in mammary gland development. It was also suggested to contribute to breast cancer progression. In vivo data strongly supported a crucial role of PRL in promoting tumour growth; however, PRL demonstrated only a weak, if any, pro-proliferative effect on cancer cells in vitro. Several recent studies indicated that PRL action in vivo may be influenced by the hormonal milieu, e.g. other growth factors such as 17beta-oestradiol (E(2)). 
Here, we explored the potential interplay between PRL and E(2) in regulation of gene expression and cell growth. PRL alone induced either a weak or no proliferative response of T47D and BT-483 cells respectively, while it drastically enhanced cell proliferation in E(2)-stimulated cultures. Affymetrix microarray analysis revealed 12 genes to be regulated by E(2), while 57 genes were regulated by PRL in T47D cells. Most of the PRL-regulated genes (42/57) were not previously described as PRL target genes, e.g. WT1 and IER3. One hundred and five genes were found to be regulated upon PRL/E(2) co-treatment: highest up-regulation was found for EGR3, RUNX2, EGR1, MAFF, GLIPR1, IER3, SOCS3, WT1 and AREG. PRL and E(2) synergised to regulate EGR3, while multiple genes were regulated additively. These data show a novel interplay between PRL and E(2) to modulate gene regulation in breast cancer cells. Answer: Yes, prolactin receptor gene polymorphism could play a role in the pathogenesis of breast carcinoma. Studies have identified associations between polymorphisms in the prolactin receptor (PRLR) gene and breast cancer. For instance, a study detected an A150C transversion at exon 6 of PRLR in tumor tissues, adjacent non-cancerous breast tissues, and blood samples of two out of 38 patients with breast cancer, which was not found in blood samples of 100 normal individuals, suggesting that PRLR polymorphism might be involved in mammary carcinogenesis (PUBMED:15119991). Another study found that the PRLR single nucleotide polymorphism (SNP) rs10941235 was associated with breast cancer and its genotype frequencies were significantly associated with the levels of cancer antigen 15-3, a serum biomarker of breast cancer, in Taiwanese women (PUBMED:21125332). 
Additionally, genetic variation in the PRL and PRLR genes was shown to influence breast cancer risk, with certain SNPs within the PRL promoter regions being significantly associated with increased risk in homozygotes for the variant alleles (PUBMED:16434456). Furthermore, a gain-of-function mutation of the PRLR was identified in women with benign breast tumors, which conferred constitutive activity to the receptor variant and was associated with increased STAT5 signaling, suggesting a potential role in breast disease pathogenesis (PUBMED:18779591). These findings collectively suggest that polymorphisms and mutations in the PRLR gene may contribute to the development and progression of breast carcinoma.
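The genotype-association findings above are reported as odds ratios with 95% confidence intervals. A minimal sketch of how such an estimate is derived from a 2x2 case-control table using the standard Woolf (logit) method; the counts below are hypothetical and not taken from any of the abstracts:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio with Woolf (logit) 95% CI for a 2x2 table:
    a = cases with the variant,    b = cases without it,
    c = controls with the variant, d = controls without it."""
    or_ = (a * d) / (b * c)
    # Standard error of ln(OR) for the Woolf method
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(60, 100, 80, 256)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 1.92 (95% CI 1.28-2.88)
```

A 95% CI that excludes 1 corresponds to significance at roughly the P < 0.05 level, which is how the abstracts flag associations such as OR 1.67 (95% CI 1.11-2.50).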
Instruction: Trial of labor in patients with a previous cesarean section: does maternal age influence the outcome? Abstracts: abstract_id: PUBMED:26454124 Labor outcome at extremely advanced maternal age. Background: Women of advanced maternal age (AMA) are at increased risk for cesarean delivery compared to non-AMA women. However, it is unclear whether this association is altered by parity and the presence or absence of a trial of labor. Objective: We sought to examine modes of delivery and maternal outcomes among AMA women stratified by parity and the presence or absence of a trial of labor. Study Design: This is a retrospective cohort study of all women delivering singleton births at ≥20 weeks' gestation in the state of California from 2007 through 2011. Data were extracted from maternal discharge data linked to infant birth certificate records. We compared non-AMA women (age 20-34 years, reference group) to AMA women who were classified as follows: age 35-39, 40-44, 45-49, and ≥50 years. The primary outcome was route of delivery (cesarean vs vaginal) stratified by parity and whether a trial of labor occurred (prelabor vs intrapartum cesarean delivery). The association between a trial of labor and perinatal morbidity was also studied. Results: There were 1,346,889 women who met inclusion criteria, which included 181 (0.01%) women who were age ≥50 years at the time of delivery. Overall, 34.7% underwent a cesarean delivery and this risk differed significantly by age group (30.5%, 20-34 years; 40.5%, 35-39 years; 47.3%, 40-44 years; 55.6%, 45-49 years; 62.4%, >50 years). Nulliparous women age ≥50 years were significantly less likely to undergo a trial of labor compared to the reference group (relative risk [RR], 0.44; 95% confidence interval [CI], 0.32-0.62).
Furthermore, nulliparous women age ≥50 years were significantly more likely to experience an intrapartum cesarean delivery (RR, 2.61; 95% CI, 1.31-5.20); however, the majority (74%) who underwent a trial of labor experienced a vaginal delivery. Compared to the reference group, women age ≥50 years were 5 times more likely to experience severe maternal morbidity (1.7% vs 0.3%; RR, 5.08; 95% CI, 1.65-15.61) and their infants 3 times more likely to require neonatal intensive care unit admission (14.9% vs 5.2%; RR, 3.1; 95% CI, 2.2-4.4); however, these outcomes were not significantly associated with having undergone a trial of labor, a cesarean delivery following labor, or a prelabor cesarean delivery. Similar trends were observed among multiparous women. Conclusion: Compared to non-AMA women, women age ≥50 years with a singleton pregnancy experience significantly higher rates of cesarean delivery. However, the majority of those who undergo a trial of labor will have a vaginal delivery. Neither a trial of labor nor a prelabor cesarean delivery is significantly associated with maternal or neonatal morbidity. These data support either approach in women of extremely advanced maternal age. abstract_id: PUBMED:11319470 Factors predicting outcome of trial of labor after prior cesarean section in a developing country. Objective: To determine factors predictive of the outcome of trial of labor after cesarean section in women living in developing countries. Materials and methods: A retrospective study of 313 cases of trial of labor was conducted in patients with a history of at least one prior cesarean section. Multiple logistic regression was used to identify predictive factors. The adjusted odds ratios with 95% confidence intervals (95%CI) were used to indicate risk of failure of the trial of labor when the factor was present. Results: The rate of vaginal birth was 71.6%.
Factors significantly predictive of failure of trial of labor were birth weight>3000 g (OR=2.68; 95%CI: 1.57-4.57), no previous vaginal delivery (OR=1.71; 95%CI: 1.26-3.76) and maternal age>28 years (OR=1.71; 95%CI: 1.01-2.90). Conclusion: Trial of labor should be proposed in most women with one previous cesarean section. A prior history of vaginal delivery is the best predictor of the positive outcome of trial of labor. abstract_id: PUBMED:15118651 Trial of labor in patients with a previous cesarean section: does maternal age influence the outcome? Objective: This study was undertaken to evaluate the effect of maternal age on the rate of vaginal delivery and the rate of uterine rupture in patients undergoing a trial of labor (TOL) after a prior cesarean delivery. Study Design: A cohort study of all women with a live singleton fetus undergoing a TOL after a previous low-transverse cesarean delivery was performed between 1988 and 2002 in a tertiary care center. Patients were divided into 3 groups according to maternal age: less than 30 years old, 30 to 34 years old, and 35 years or older. Women with no prior vaginal delivery and with at least 1 prior vaginal delivery were analyzed separately. The rate of vaginal delivery and the rate of symptomatic uterine rupture were calculated. Multivariate logistic regression analyses were performed to adjust for potential confounding variables. Results: Of the 2493 patients who met the study criteria, there were 1750 women without a prior vaginal delivery (659, 721, and 370, respectively) and 743 women with a prior vaginal delivery (199, 327, and 217, respectively). The rate of uterine rupture was comparable between the groups (2.0%, 1.1%, 1.4%, P=.404 and 0%, 0.3%, 0.9%, P=.312). Successful vaginal delivery was inversely related to maternal age (71.9%, 70.7%, 65.1%, P=.063, and 91.5%, 91.1%, 82.9%, P=.005). 
After adjusting for confounding variables, maternal age equal to or greater than 35 years was associated with a lower rate of successful vaginal delivery in patients without prior vaginal delivery (odds ratio [OR] 0.73, 95% CI: 0.56-0.94), and in patients with a prior vaginal delivery (OR: 0.47, 95% CI: 0.29-0.74). Conclusion: Patients who are 35 years or older are more prone to have a failed TOL after a prior cesarean delivery. abstract_id: PUBMED:31550773 Effect of induced labor on delivery outcome of pregnant women undergoing vaginal trial of labor after cesarean section. Objective: To explore the effect of induced labor on the delivery outcome of pregnant women undergoing vaginal trial of labor after cesarean section. Methods: In total, 173 pregnant women undergoing vaginal trial of labor after cesarean section from April 1st, 2016 to October 31st, 2017 were included. According to whether labor was induced, the women were divided into an induced labor group (n=47) and a natural labor group (n=126). General characteristics, labor course and delivery outcomes were compared between the two groups. Furthermore, based on pregnancy outcome, the induced labor group was divided into cesarean section patients (n=20) and vaginal delivery patients (n=27), and their induction and delivery characteristics were analyzed. Results: (1) The gestational age was 39.2±1.1 weeks in the induced labor group and 38.7±1.0 weeks in the natural labor group. The 1-minute Apgar score was 9.98±0.15 in the induced labor group and 9.87±0.60 in the natural labor group. The neonatal weight was (3 497±426) g in the induced labor group and (3 288±350) g in the natural labor group. The thickness of the lower uterine segment at 36-39 weeks of pregnancy was (2.4±0.6) mm in the induced labor group and (2.1±0.6) mm in the natural labor group. There were significant differences in gestational age, 1-minute Apgar score, neonatal weight and the thickness of the lower uterine segment at 36-39 weeks of pregnancy between the two groups (all P<0.05).
There were no significant differences in 5-minute Apgar score, the interval since the previous cesarean delivery, postpartum hemorrhage, fetal distress, intrauterine infection, secondary uterine inertia, placental abruption and uterine rupture between the two groups (all P>0.05). (2) In the induced labor group, the cervical score of cesarean section patients was 3.8±1.7, while that of vaginal delivery patients was 5.2±1.7. The duration of induction was (1.6±0.9) days in cesarean section patients and (1.2±0.4) days in vaginal delivery patients. There were significant differences in cervical score and duration of induction between the two subgroups (all P<0.05). There were no significant differences in Apgar score, neonatal weight, the thickness of the lower uterine segment at 36-39 gestational weeks or the interval since the previous cesarean delivery between the two subgroups (P>0.05). Conclusion: Induction of labor does not reduce the neonatal Apgar score or increase the incidence of postpartum complications; it avoids the rise in cesarean section rate caused by advancing gestational age and fetal growth, and increases the vaginal delivery rate of pregnant women undergoing vaginal trial of labor after cesarean section.
The ratio of transfer cesarean section after trial of labor and the maternal-fetal adverse outcomes, assessed by weighted adverse outcome score, were analyzed in the different Robson classifications. Results: (1) The highest ratio of transfer cesarean section after trial of labor was in classification 9 (all abnormal lies, including women with previous cesarean section; breech excluded), reaching 47.31% (431/911), followed by classification 2 (nulliparous women with a single cephalic pregnancy, ≥37 weeks gestation, who had labour induced), which accounted for 44.90% (409/911). (2) The highest weighted adverse outcome scores for transfer cesarean section after trial of labor were in classification 10 (single cephalic pregnancy at <37 weeks gestation, including women with previous cesarean delivery), 24.55, and classification 5 (multiparous women with a single cephalic pregnancy, at least one previous cesarean delivery, ≥37 weeks gestation), 3.64. Conclusion: Careful evaluation of the delivery mode and attentive intrapartum management in nulliparous women with a single cephalic pregnancy at ≥37 weeks gestation who had labour induced, and in women undergoing trial of labor after cesarean section, are essential to reduce the risk of adverse outcomes in transfer cesarean section after trial of labor.
Results: Out of 62 102 deliveries during the study period, inclusion criteria were met by 374 women. Of them, 83 (22.2%) women had a scheduled CD, and 291 (77.8%) had TOL. Of the TOL group, 118 women (40.5%) had fTOL and 173 (59.5%) had sTOL. Women in the fTOL group had higher rates of 1-min Apgar score <7, neonatal asphyxia and adverse neonatal composite outcome compared with women in the sTOL group (p < .05). After adjusting for potential confounders, compared with sTOL, fTOL was not associated with adverse neonatal composite outcome. Compared with scheduled CD, fTOL was associated with an increased risk for adverse neonatal composite outcome (aOR 2.65, 95% CI 1.13-6.19, p = .03). Conclusion: AMA primiparous women attempting TOL have comparable outcomes to those seeking scheduled CD; however, women with fTOL have higher rates of adverse neonatal outcome. abstract_id: PUBMED:12752164 Neonatal outcome after trial of labor compared with elective repeat cesarean section. Background: Trial of labor after cesarean section has been an important strategy for lowering the rate of cesarean delivery in the United States, but concerns regarding its safety remain. The purpose of this study was to evaluate the outcome of newborns delivered by elective repeat cesarean section compared to delivery following a trial of labor after cesarean. Methods: All low-risk mothers with 1 or 2 previous cesareans and no prior vaginal deliveries, who delivered at our institution from December 1994 through July 1995, were identified. Neonatal outcomes were compared between 136 women who delivered by elective repeat cesarean section and 313 women who delivered after a trial of labor. To investigate reasons for differences in outcome between these groups, neonatal outcomes within the trial of labor group were then compared between those mothers who had received epidural analgesia (n = 230) and those who did not (n = 83).
Results: Infants delivered after a trial of labor had increased rates of sepsis evaluation (23.3% vs 12.5%, p = 0.008); antibiotic treatment (11.5% vs 4.4%, p = 0.02); intubation to evaluate for the presence of meconium below the cords (11.5% vs 1.5%, p < 0.001); and mild bruising (8.0% vs 1.5%, p = 0.008). Within the trial of labor group, infants of mothers who received epidural analgesia were more likely to have received diagnostic tests and therapeutic interventions, including sepsis evaluation (29.6% vs 6.0%, p = 0.001) and antibiotic treatment (13.9% vs 4.8%, p = 0.03), than infants in the no-epidural analgesia group. Conclusions: Infants born to mothers after a trial of labor are twice as likely to undergo diagnostic tests and therapeutic interventions as infants born after an elective repeat cesarean section, but the increase occurred only in the subgroup of infants whose mothers received epidural analgesia for pain relief during labor. The higher rate of intervention could relate to the well-documented increase in intrapartum fever that occurs with epidural use. abstract_id: PUBMED:30901074 Delivery outcome after trial of labor in nulliparous women 40 years or older-A nationwide population-based study. Introduction: The number of women postponing childbirth until an advanced age is increasing. Our aim was to study the outcome of labor in nulliparous women ≥40 years, compared with women 25-29 years, after both spontaneous onset and induction of labor. Material And Methods: The nationwide population-based Swedish Medical Birth Register was used to study the perinatal outcome in nulliparous women with a singleton, term (gestational weeks 37-44), live fetus in cephalic presentation and a planned vaginal delivery from 1992 to 2011. We included 7796 nulliparous women ≥40 years and 264 262 nulliparous women 25-29 years.
Prevalence and risk of intrapartum cesarean section, operative vaginal delivery, obstetric anal sphincter injury and a 5-minute Apgar score <7 were calculated for women ≥40 years stratified for spontaneous onset and induction of labor, using women 25-29 years as the reference in both strata. Crude and adjusted odds ratios (aOR) were calculated by unconditional logistic regression and presented with 95% confidence intervals (CI). Results: Overall, 79% of women ≥40 years with a trial of labor reached a vaginal delivery. After spontaneous onset, intrapartum cesarean section was performed in 15.4% of women ≥40 years compared with 5.4% of women 25-29 years (aOR 3.07, 95% CI 2.81-3.35). Operative vaginal delivery was performed in 22.3% of women ≥40 years compared with 14.2% of women 25-29 years (aOR 1.71, 95% CI 1.59-1.85). After induction of labor, an intrapartum cesarean section was performed in 37.2% of women ≥40 years compared with 20.2% of women 25-29 years (aOR 2.51, 95% CI 2.24-2.81). Operative vaginal delivery was performed in 22.6% of women ≥40 years compared with 18.4% of women 25-29 years (aOR 1.45, 95% CI 1.28-1.65). The risk of obstetric anal sphincter injury or a 5-minute Apgar score <7 was not increased in women ≥40 years, regardless of onset of labor. Conclusions: Trial of labor ended in vaginal delivery in 79% of nulliparous women ≥40 years. The risks of intrapartum cesarean section and operative vaginal delivery were higher in women ≥40 years compared with women 25-29 years, after both spontaneous onset and induction of labor. The risk of obstetric anal sphincter injury or a 5-minute Apgar score <7 was not increased.
Methods: Patients with 1 previous cesarean delivery who then delivered between January 1, 1998, and December 31, 1998, were studied. An investigator blinded to outcome assigned scores using 3 different scoring systems. Student's t-test, chi-square, analysis of variance, and receiver operating characteristic curve analysis were performed. P <.05 was significant. Results: Seventy-six percent (117/153) of trial of labor patients had a vaginal birth after cesarean delivery. Successful vaginal birth after cesarean delivery patients had significantly different mean scores using all 3 scoring systems, but none of the systems accurately predicted failed trial of labor resulting in cesarean delivery. Unfavorable scores were associated with high rates of major complications. Conclusions: An unfavorable score predicting a high rate of complications and more failed trials of labor may help in counseling patients considering trial of labor. A better system to predict the success or failure of trial of labor is needed. Level Of Evidence: III abstract_id: PUBMED:27788742 The outcome of trial of labor after cesarean section Objective: To explore the outcome of trial of labor after cesarean section (TOLAC). Methods: A total of 614 TOLACs were conducted in the First Affiliated Hospital of Kunming Medical University from July 2013 to June 2016. Among them, 586 cases of singleton pregnancy with one prior cesarean section (gestational age ≥28 weeks) were studied retrospectively. The maternal and neonatal outcomes among the vaginal birth after cesarean (VBAC) group (481 cases), the failed TOLAC group (105 cases) and the elective repeat cesarean section (ERCS) group (1 145 cases) were compared. Multiple logistic regression was used to determine the risk factors for admission to the neonatal intensive care unit (NICU). Results: (1) The TOLAC rate was 29.62% (614/2 073) from July 2013 to June 2016, and the VBAC rate was 82.6% (507/614).
VBAC reduced the cesarean section rate by 3.147% (507/16 112). (2) Comparison of adverse maternal outcomes: in the VBAC group, the postpartum hemorrhage volume was (431±299) ml, the rate of postpartum fever was 6.4% (31/481), the birth weight of the neonates was (3 085±561) g, and the rate of large for gestational age was 2.9% (14/481); all were significantly lower than in the failed TOLAC group and the ERCS group (P<0.05). There were no significant differences among the 3 groups in other adverse maternal outcomes [uterine rupture rate (0.2%, 1/481), bladder injury rate (0), proportion of postpartum hemorrhage volume ≥1 500 ml (1.0%, 5/481), blood transfusion rate (3.7%, 18/481)] or in adverse perinatal outcomes [rate of neonatal 5-minute Apgar score <7 (0.4%, 21/481), rate of umbilical arterial pH <7.0 (0.6%, 3/481), rate of NICU admission and perinatal mortality (12.3%, 59/481)] (P>0.05). Multiple logistic regression showed no association between VBAC and admission to the NICU (OR=0.84, 95%CI: 0.58-1.21). The isolated risk factors for admission to the NICU were preterm birth (OR=16.71, 95%CI: 11.44-24.40), hypertensive disorders complicating pregnancy (OR=3.89, 95%CI: 2.39-6.35), meconium-stained amniotic fluid (OR=2.48, 95%CI: 1.62-3.80), small for gestational age (OR=2.00, 95%CI: 1.19-3.36), and diabetes mellitus (OR=1.69, 95%CI: 1.14-2.50). Conclusions: VBAC reduces the cesarean section rate, with good outcomes for both mother and neonate. It is a safe and feasible mode of delivery for women with only one prior cesarean section. Answer: Maternal age does influence the outcome of a trial of labor in patients with a previous cesarean section. Studies have shown that advanced maternal age (AMA) is associated with a higher likelihood of cesarean delivery and that the success of vaginal delivery after a trial of labor (TOL) decreases with increasing maternal age (PUBMED:15118651).
Specifically, women who are 35 years or older are more prone to have a failed TOL after a prior cesarean delivery (PUBMED:15118651). In a study that examined the effect of maternal age on the rate of vaginal delivery and the rate of uterine rupture in patients undergoing TOL after a prior cesarean delivery, it was found that successful vaginal delivery was inversely related to maternal age. After adjusting for confounding variables, maternal age equal to or greater than 35 years old was associated with a lower rate of successful vaginal delivery in patients without prior vaginal delivery, and in patients with a prior vaginal delivery (PUBMED:15118651). Another study found that factors predictive of failure of TOL included birth weight greater than 3000 g, no previous vaginal delivery, and maternal age greater than 28 years (PUBMED:11319470). Additionally, a study on labor outcomes at extremely advanced maternal age indicated that women aged 50 years or older with a singleton pregnancy experience significantly higher rates of cesarean delivery compared to non-AMA women. However, the majority of those who underwent a TOL had a vaginal delivery, and neither a TOL nor a prelabor cesarean delivery was significantly associated with maternal or neonatal morbidity (PUBMED:26454124). In summary, maternal age is a significant factor influencing the outcome of TOL in patients with a previous cesarean section, with older maternal age being associated with a higher risk of cesarean delivery and a lower success rate for vaginal delivery after TOL.
Instruction: Indications for inferior vena cava filter placement: do physicians comply with guidelines? Abstracts: abstract_id: PUBMED:27247472 Inferior Vena Cava Filters: Guidelines, Best Practice, and Expanding Indications. Vena caval interruption, currently accomplished by percutaneous image-guided insertion of an inferior vena cava (IVC) filter, is an important therapeutic option in the management of selected patients with venous thromboembolism. The availability of optional (or retrievable) filters, in particular, has altered the practice patterns for IVC filters, with a shift to these devices and expansion of indications for filter placement. As new devices have become available and clinicians have become more familiar and comfortable with IVC filters, the indications for filter placement have continued to evolve and expand. This article reviews current guidelines and expanding indications for IVC filter placement. abstract_id: PUBMED:27489697 Total laparoscopic retrieval of inferior vena cava filter. While there is some local variability in the use of inferior vena cava filters and there has been some evolution in the indications for filter placement over time, inferior vena cava filters remain a standard option for pulmonary embolism prophylaxis. Indications are clear in certain subpopulations of patients, particularly those with deep venous thrombosis and absolute contraindications to anticoagulation. There are, however, a variety of reported inferior vena cava filter complications in the short and long term, making retrieval of the filter desirable in most cases. Here, we present the case of a morbidly obese patient complaining of chronic abdominal pain after inferior vena cava filter placement and malposition of the filter with extensive protrusion outside the inferior vena cava. She underwent successful laparoscopic retrieval of her malpositioned inferior vena cava filters after failure of a conventional endovascular approach. 
abstract_id: PUBMED:17457563 Vena cava filter Fulminant pulmonary embolism is one of the major causes of death in the Western world. In most cases, deep leg and pelvic venous thrombosis is the cause. If anticoagulant/thrombolytic therapy is no longer possible or is ineffective, implantation of a vena cava filter may be indicated when embolism threatens. Implantation of the filter is a simple and safe intervention. Nevertheless, it must be borne in mind that the evidence base for determining the indications for this treatment is very limited. Currently, a reduction in the risk of thromboembolism with the use of filters of about 30%, a recurrence rate of almost 5%, and a fatal pulmonary embolism rate of 1% have been reported, with a risk of up to 20% of filter-induced vena cava thrombosis. abstract_id: PUBMED:28242032 Indications, complications and outcomes of inferior vena cava filters: A retrospective study. Introduction: Inferior vena cava filters are used to prevent embolization of a lower extremity deep vein thrombosis when the risk of pulmonary embolism is thought to be high. However, evidence is lacking for their benefit, and guidelines differ on the recommended indications for filter insertion. The study aim was to determine the reasons for inferior vena cava filter placement and the subsequent complication rate. Materials And Methods: A retrospective cohort of patients receiving inferior vena cava filters in Edmonton, Alberta, Canada from 2007 to 2011. Main outcome was the indication for inferior vena cava filter insertion. Other measures include baseline demographic and medical history of patients, clinical outcomes and filter retrieval rates. Results: 464 patients received inferior vena cava filters. An acute deep vein thrombosis with a contraindication to anticoagulation was the indication for 206 (44.4%) filter insertions. No contraindication to anticoagulation could be identified in 20.7% of filter placements.
30.6% were placed in patients with active cancer, in whom mortality was significantly higher. Only 38.9% of retrievable filters were successfully retrieved. Conclusions: Inferior vena cava filters were placed frequently in patients with weak or no guideline-supported indications for filter placement and in up to 20% of patients with no contraindication to anticoagulation. The high rates of cancer and the high mortality rate of the cohort raise the possibility that some filters are placed inappropriately in end-of-life settings. abstract_id: PUBMED:28228959 Heart block and cardiac embolization of fractured inferior vena cava filter. Objective: A 66-year-old man, who had undergone placement of an inferior vena cava filter before gastric surgery 9 years prior, presented to the emergency room with complete atrioventricular block. Chest x-ray and transthoracic echocardiogram showed filter struts that had migrated to the right ventricle, with tricuspid regurgitation. Cardiothoracic surgery was consulted and declined an open surgical intervention due to the location of the embolized fragments and the patient's overall condition. It was also felt that the fragments had migrated chronically and were adhered to the cardiac structures. Methods: The patient underwent a dual-chamber permanent pacemaker implantation. Post-implant fluoroscopy showed no displacement of the inferior vena cava filter struts by the pacemaker leads, indicating that the filter fracture had likely been a chronic process. Results: This case highlights a rare combination of complications related to inferior vena cava filter fractures and the importance of assessing for such fractures in chronic placements. Inferior vena cava filter placement for a duration greater than 1 month can be associated with filter fractures and strut migration, which may lead to rare but serious or fatal complications such as complete atrioventricular conduction system disruption and valvular damage including significant tricuspid regurgitation.
Conclusions: Assessing for inferior vena cava filter fractures in chronic filter placement is important to avoid such complications. When possible, retrieval of the filter should be considered in all patients outside the acute setting in order to avoid filter-related complications. Filter retrieval rates remain low even when a retrievable filter is in place and the patient no longer has a contraindication to anticoagulation. abstract_id: PUBMED:2501522 Extended indications for placement of an inferior vena cava filter. To study the morbidity and mortality rates after placement of an inferior vena cava filter and to define the appropriate indications for interruption of the inferior vena cava, the records of all patients who underwent insertion of a Greenfield filter during the decade January 1978 to December 1987 were reviewed. Patients were designated as having either a traditional or extended indication for placement of an inferior vena cava filter. Two hundred sixty inferior vena cava filters were placed in 264 attempts, with no deaths related to insertion of the filter. An extended indication was the primary reason for placement of the Greenfield filter in 66 (25%) of the patients. In patients with extended indications there were no cases of air embolism or filter misplacement and only three wound complications (4.5%). Pulmonary embolism after insertion of the inferior vena cava filter occurred in three patients (4.5%), with one fatality (1.5%). Inferior vena cava occlusion was documented in three cases (4.5%), and manifestations of the postphlebitic syndrome in early follow-up were present in two patients (3.0%). As the procedures to prevent fatal pulmonary embolism have become safer, more efficacious, and less morbid, the number of patients in whom the potential benefits of insertion of an inferior vena cava filter outweigh the risks has become larger. 
Our results support the liberalized use of Greenfield filters in those patients who do not necessarily have one of the traditional indications for placement of an inferior vena cava filter but are at a high risk of having a fatal pulmonary embolus. abstract_id: PUBMED:32755356 Inferior Vena Cava Filter Retrieval: Patient Selection, Procedural Planning, and Postprocedural Complications. OBJECTIVE. Utilization of retrievable inferior vena cava filters (rIVCFs) has come under increased scrutiny because of historically high rates of placement, generalized lack of retrieval when the inferior vena cava filter (IVCF) is no longer indicated, and reports of device-related complications. These events have led to an increased interest in IVCF retrieval, including the development of advanced endovascular retrieval techniques and the proliferation of specialized clinical practices for rIVCFs. We aim to describe the indications for IVCF retrieval, patient selection, procedural planning, and procedural complications and management. CONCLUSION. IVCFs continue to have a role in the prevention of pulmonary embolism in select patients. Rising awareness of device-related complications paired with historically low retrieval rates has prompted renewed emphasis and interest in filter retrieval. Diligent follow-up and procedural planning permit prompt and safe filter retrieval. abstract_id: PUBMED:36976151 Direct superior vena cava puncture for inferior vena cava filter retrieval. Most inferior vena cava (IVC) filters are designed for retrieval from a superior approach. Retrieval becomes technically challenging when the central veins in the chest are occluded. In a patient with thrombosis of the bilateral brachiocephalic veins, the authors describe direct puncture of the superior vena cava (SVC) under fluoroscopy, followed by the successful retrieval of a fractured IVC filter using forceps. 
A snare inserted into the SVC via the common femoral vein was used as a radiopaque target for direct SVC puncture from the lower neck. Cone beam computed tomography and pullback tractography were used to confirm a safe access trajectory. Thus, direct SVC access may be used for filter retrieval in similar clinical scenarios. abstract_id: PUBMED:31570169 Inferior Vena Cava Filter Evaluation and Management for the Diagnostic Radiologist: A Comprehensive Review Including Inferior Vena Cava Filter-Related Complications and PRESERVE Trial Filters. Inferior vena cava filters are commonly encountered devices on diagnostic imaging that were highlighted in a 2010 Food and Drug Administration safety advisory regarding their complications from long-term implantation. The Predicting the Safety and Effectiveness of Inferior Vena Cava Filters (PRESERVE) trial is an ongoing after-market study investigating the safety and utility of commonly utilized filters in practice today. While most of these filters are safe, prompt recognition and management of any filter-associated complication is imperative to prevent or reduce the morbidity and mortality associated with them. This review is aimed at discussing the appropriate utilization and placement of inferior vena cava filters in addition to the recognition of filter-associated complications on cross-sectional imaging. An overview of the PRESERVE trial filters is also provided to understand each filter's propensity for specific complications.
This article reviews the available VCFs, the indications for use, the techniques for placement, and possible outcomes and complications. Answer: The use of inferior vena cava (IVC) filters has been subject to evolving guidelines and indications, with some variability in compliance among physicians. IVC filters are primarily used to prevent pulmonary embolism in patients with deep vein thrombosis, especially when anticoagulation is contraindicated or ineffective (PUBMED:17457563). However, the indications for IVC filter placement have expanded over time, and there is evidence that filters are often placed in patients with weak or no guideline-supported indications (PUBMED:28242032). In fact, one study found that no contraindication to anticoagulation could be identified in 20.7% of filter placements, and filters were frequently placed in patients with active cancer, where mortality was significantly higher (PUBMED:28242032). The availability of optional (retrievable) filters has altered practice patterns, with a shift towards these devices and an expansion of indications for filter placement (PUBMED:27247472). Despite the clear indications in certain subpopulations, complications associated with IVC filters, both in the short and long term, have made retrieval desirable in most cases (PUBMED:27489697). However, retrieval rates remain low, even when a retrievable filter is in place and the patient no longer has a contraindication to anticoagulation (PUBMED:28228959). The literature suggests that while IVC filters remain a standard option for prophylaxis against pulmonary embolism, there is a discrepancy between guideline recommendations and actual clinical practice, with filters being placed in patients without strong indications or in end-of-life settings where the benefit may be questionable (PUBMED:28242032). This indicates that not all physicians comply with the guidelines when deciding to place an IVC filter. 
Additionally, the evidence base for determining the indications for IVC filter treatment is limited, which may contribute to the variability in compliance with guidelines (PUBMED:17457563). Overall, while IVC filters have a role in the prevention of pulmonary embolism, the need for careful patient selection, adherence to guidelines, and consideration of filter retrieval is emphasized to avoid unnecessary complications and to ensure appropriate use (PUBMED:32755356).
Instruction: Does the Gamma Knife dose rate affect outcomes in radiosurgery for trigeminal neuralgia? Abstracts: abstract_id: PUBMED:36275137 Gamma Knife radiosurgery for trigeminal neuralgia provides greater pain relief at higher dose rates. In Gamma Knife (GK) radiosurgery, dose rate decreases during the life cycle of its radiation source, extending treatment times. Prolonged treatments influence the amount of sublethal radiation injury that is repaired during exposure, and are associated with decreased biologically-equivalent dose (BED). We assessed the impact of treatment times on clinical outcomes following GK of the trigeminal nerve - a rare clinical model to isolate the effects of treatment times. This is a retrospective analysis of 192 patients with facial pain treated across three source exchanges. All patients were treated to 80 Gy with a single isocenter. Treatment time was analyzed in terms of patient anatomy-specific dose rate, as well as BED calculated from individual patient beam-on times. An outcome tool measuring pain in three distinct domains (pain intensity, interference with general and oro-facial activities of daily living) was administered before and after intervention. Multivariate linear regression was performed with dose rate/BED, brainstem dose, sex, age, diagnosis, and prior intervention as predictors. BED was an independent predictor of the degree of improvement in all three dimensions of pain severity. A decrease in dose rate by 1.5 Gy/min corresponded to 31.8% less improvement in the overall severity of pain. Post-radiosurgery incidence of facial numbness was increased for BEDs in the highest quartile. Treatment time is an independent predictor of pain outcomes, suggesting that prescription dose should be customized to ensure iso-effective treatments, while accounting for the possible increase in adverse effects at the highest BEDs.
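The dose-rate/BED relationship this abstract relies on can be sketched with the standard linear-quadratic model extended by a dose-protraction (Lea-Catcheside) factor for a single continuous exposure. This is a minimal illustration only: the α/β ratio, repair half-time, and beam-on times below are placeholder values, not parameters reported by the study.

```python
import math

def protraction_factor(beam_on_min: float, repair_halftime_min: float) -> float:
    """Lea-Catcheside factor g for continuous irradiation at constant dose
    rate, assuming mono-exponential repair of sublethal damage.
    g -> 1 for near-instantaneous delivery; g -> 0 as beam-on time grows
    and more sublethal damage is repaired during the exposure."""
    mu = math.log(2) / repair_halftime_min  # repair rate constant (1/min)
    x = mu * beam_on_min
    return 2.0 * (x - 1.0 + math.exp(-x)) / x**2

def bed(dose_gy: float, beam_on_min: float,
        alpha_beta_gy: float, repair_halftime_min: float) -> float:
    """Biologically effective dose for a single protracted exposure:
    BED = D * (1 + g * D / (alpha/beta))."""
    g = protraction_factor(beam_on_min, repair_halftime_min)
    return dose_gy * (1.0 + g * dose_gy / alpha_beta_gy)

# Same 80 Gy prescription, but a longer beam-on time (aged source)
# yields a lower BED -- illustrative parameters only.
fresh = bed(80.0, 40.0, alpha_beta_gy=2.0, repair_halftime_min=15.0)
aged = bed(80.0, 90.0, alpha_beta_gy=2.0, repair_halftime_min=15.0)
```

Under these hypothetical parameters, the slower delivery produces a substantially lower BED despite the identical physical dose, which is the mechanism the abstract invokes when it proposes customizing the prescription dose to keep treatments iso-effective.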
abstract_id: PUBMED:18095194 Gamma knife radiosurgery for treatment of pain Radiosurgery delivers high-dose radiation to small intracranial targets in a single session. Gamma knife radiosurgery has been in use for the treatment of pain for many years. The first cases treated with the radiosurgical technique were trigeminal neuralgia patients. After the first installation of the gamma knife in 1968, it was used for thalamotomy in cancer pain. Nowadays the main indication for gamma knife radiosurgery for pain is trigeminal neuralgia. abstract_id: PUBMED:26252452 Higher dose rate Gamma Knife radiosurgery may provide earlier and longer-lasting pain relief for patients with trigeminal neuralgia. Object: Gamma Knife radiosurgery (GKRS) utilizes cobalt-60 as its radiation source, and thus dose rate varies as the fixed source decays over its half-life of approximately 5.26 years. This natural decay results in increasing treatment times when delivering the same cumulative dose. It is also possible, however, that the biological effective dose may change based on this dose rate even if the total dose is kept constant. Because patients are generally treated in a uniform manner, radiosurgery for trigeminal neuralgia (TN) represents a clinical model whereby biological efficacy can be tested. The authors hypothesized that higher dose rates would result in earlier and more complete pain relief but only if measured with a sensitive pain assessment tool. Methods: One hundred thirty-three patients were treated with the Gamma Knife Model 4C unit at a single center by a single neurosurgeon during a single cobalt life cycle from January 2006 to May 2012. All patients were treated with 80 Gy with a single 4-mm isocenter without blocking. Using an output factor of 0.87, dose rates ranged from 1.28 to 2.95 Gy/min.
The Brief Pain Inventory (BPI)-Facial was administered before the procedure and at the first follow-up office visit 1 month from the procedure (mean 1.3 months). Phone calls were made to evaluate patients after their procedures as part of a retrospective study. Univariate and multivariate linear regression was performed on several independent variables, including sex, age in deciles, diagnosis, follow-up duration, prior surgery, and dose rate. Results: In the short-term analysis (mean 1.3 months), patients' self-reported pain intensity at its worst was significantly correlated with dose rate on multivariate analysis (p = 0.028). Similarly, patients' self-reported interference with activities of daily living was closely correlated with dose rate on multivariate analysis (p = 0.067). A 1 Gy/min decrease in dose rate resulted in a 17% decrease in pain intensity at its worst and a 22% decrease in pain interference with activities of daily living. In longer-term follow-up (mean 1.9 years), GKRS with higher dose rates (> 2.0 Gy/min; p = 0.007) and older age in deciles (p = 0.012) were associated with a lower likelihood of recurrence of pain. Discussion: Prior studies investigating the role of dose rate in Gamma Knife radiosurgical ablation for TN have not used validated outcome tools to measure pain preoperatively. Consequently, differences in pain outcomes have been difficult to measure. By administering pain scales both preoperatively as well as postoperatively, the authors have identified statistically significant differences in pain intensity and pain interference with activities of daily living when comparing higher versus lower dose rates. Radiosurgery with a higher dose rate results in more pain relief at the early follow-up evaluation, and it may result in a lower recurrence rate at later follow-up. 
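The cobalt-60 decay that drives the dose-rate spread in this study reduces to simple exponential-decay arithmetic. Only the 5.26-year half-life and the 2.95 Gy/min upper dose rate come from the abstract; treating 2.95 Gy/min as the installation rate is an assumption for illustration.

```python
CO60_HALFLIFE_YEARS = 5.26  # cobalt-60 half-life, as stated in the abstract

def dose_rate(initial_rate_gy_min: float, years_elapsed: float) -> float:
    """Dose rate of a cobalt-60 source after exponential decay."""
    return initial_rate_gy_min * 0.5 ** (years_elapsed / CO60_HALFLIFE_YEARS)

def beam_on_minutes(prescription_gy: float, rate_gy_min: float) -> float:
    """Beam-on time needed to deliver the prescription at a given dose rate."""
    return prescription_gy / rate_gy_min

# Hypothetical source delivering 2.95 Gy/min at installation:
rate_new = dose_rate(2.95, 0.0)
rate_old = dose_rate(2.95, 5.26)          # one half-life later: half the rate
t_new = beam_on_minutes(80.0, rate_new)   # roughly 27 min for 80 Gy
t_old = beam_on_minutes(80.0, rate_old)   # roughly 54 min for the same dose
```

This doubling of beam-on time over one source life cycle is exactly why a fixed 80 Gy prescription is delivered at markedly different dose rates early versus late in the cobalt cycle.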
abstract_id: PUBMED:23091943 Gamma knife radiosurgery Gamma Knife radiosurgery can be used as an alternative or complementary therapy to neurosurgery or radiotherapy for the treatment of some brain disorders or tumors of small volume. The most frequent indications are brain metastases, vestibular schwannomas, meningiomas, trigeminal neuralgia, arteriovenous malformations, some gliomas, and pituitary adenomas. Created in 1999, the Gamma Knife Center of the ULB currently remains the only center in Belgium where Gamma Knife radiosurgery can be performed. abstract_id: PUBMED:29184709 Gamma knife radiosurgery for cerebellopontine angle epidermoid tumors. Background: Intracranial epidermoid tumors are commonly found in the cerebellopontine angle, where they usually present with either trigeminal neuralgia or hemifacial spasm. Radiosurgery for these tumors has rarely been reported. The purpose of this study is to assess the safety and clinical outcome of the treatment of cerebellopontine epidermoid tumors with gamma knife radiosurgery. Methods: This is a retrospective study involving 12 patients harboring cerebellopontine angle epidermoid tumors who underwent 15 sessions of gamma knife radiosurgery. Trigeminal pain was present in 8 patients and hemifacial spasm in 3 patients. All cases with trigeminal pain were receiving medication yet remained uncontrolled. One patient with hemifacial spasm was medically controlled before gamma knife and the other two were not. Two patients had undergone surgical resection prior to gamma knife treatment. The median prescription dose was 11 Gy (10-11 Gy). The tumor volumes ranged from 3.7 to 23.9 cc (median 10.5 cc). Results: The median radiological follow-up was 2 years (1-5 years). All tumors were controlled and one tumor shrank. The median clinical follow-up was 5 years. The trigeminal pain improved or disappeared in 5 patients, and of these, 4 cases stopped their medication and one decreased it.
The hemifacial spasm resolved in 2 patients who were able to stop their medication. Facial palsy developed in 1 patient and improved with conservative treatment. Transient diplopia was also reported in 2 cases. Conclusion: Gamma knife radiosurgery provides good clinical control for cerebellopontine angle epidermoid tumors. abstract_id: PUBMED:37179673 Outcomes of Gamma Knife Radiosurgery for Drug-Resistant Primary Trigeminal Neuralgia. Objective: To evaluate the result of gamma knife radiosurgery (GKRS) for drug-resistant primary trigeminal neuralgia (TN). Methods: Patients who were diagnosed with drug-resistant primary TN were treated with GKRS at the Nuclear Medicine and Oncology Center, Bach Mai Hospital from January 2015 to June 2020. Using the pain rating scale of the Barrow Neurological Institute (BNI), follow-ups and evaluations were conducted at 1 month, 3 months, 6 months, and 9 month, and 1, 2 years, 3 years, and 5 years after radiosurgery. Comparisons of pain levels was performed before and after radiosurgery according to the BNI scale. Results: From January 2015 to June 2020, 33 patients were treated with GKS. There were 23 female patients and 10 male patients; the average age was 61.9. The average onset time of disease was 4.42 years. Among all patients, 84.8% of patients were relieved from pain and 78.8% were pain-free without medication. The mean time for pain relief was 3 months and was not related to the dose of GKS (<80 Gy and ≥80 Gy). The efficacy of pain relief is not related to blood vessel contact with the trigeminal nerve, the dose of GKS, and the onset of disease. The rate of recurrence after the first pain relief was low (14.3%). Conclusions: Gamma knife is an effective method for the treatment of primary drug-resistant TN, especially in elderly patients with underlying medical conditions. The analgesic effect does not depend on the presence of nerve-vascular conflict. 
abstract_id: PUBMED:26958427 Treatment of epidermoid tumors with gamma knife radiosurgery: Case series. Background: Epidermoid tumors (ETs) are benign lesions that are treated mainly by means of surgical resection, with overall good results. External beam radiotherapy is an alternative treatment for those recurrent tumors, in which a second surgery might not be the best choice for the patient. A little information exists about the effectiveness of gamma knife radiosurgery for the treatment of newly diagnosed and recurrent ETs. We present three cases of ETs treated with gamma knife radiosurgery. Case Description: Case 1 is a 21-year-old female with an ET located in the left cerebellopontine angle (CPA) with symptoms related to VIII cranial nerve dysfunction. Symptom control was achieved and maintained after single session radiosurgery with gamma knife. Case 2 is a 59-year-old female patient with the history of trigeminal neuralgia secondary to a recurrent ET located in the left CPA. Significant pain improvement was achieved after treatment with gamma knife radiosurgery. Case 3 is a 29-year-old male patient with a CPA ET causing long lasting trigeminal neuralgia, pain relief was achieved in this patient after gamma knife radiosurgery. Conclusion: Long-term symptom relief was achieved in all three cases proving that gamma knife radiosurgery is a good and safe alternative for patients with recurrent or nonsurgically treated ETs. abstract_id: PUBMED:29296330 Trigeminal nerve integrated dose and pain outcome after gamma knife radiosurgery for trigeminal neuralgia. Background: Gamma knife radiosurgery (GKRS) is an established treatment for trigeminal neuralgia. Identifying factors that influence outcome will help improve patients' results. Methods: We conducted a retrospective review of all patients treated with GKRS for trigeminal neuralgia at our institution from 2005 to 2010. Patients' clinical features and treatment details were reviewed. 
Analysis was performed to identify predictors of response and recurrence. Results: A hundred and forty five patients were treated. Mean follow up period was 24 months. At last follow up, 48 patients (33%) were pain free with no medications, and 48 patients (33%) were pain free maintained on medications. Twenty-eight patients (19%) had pain after the treatment but had significant reduction in their pain severity. Twenty-one patients (15%) did not have any significant pain reduction. Forty-four patients (30%) developed facial numbness. Recurrence occurred in 51 patients (35%). Post-treatment numbness was a predictor of good treatment response (OR 2.720, CI 1.193-6.200, p 0.0173). Higher integrated dose was a predictor of poor pain response to radiosurgery (OR 0.729, CI 0.566-0.940, p 0.0146). At an integrated dose value of 5.3 mJ or less, there was more than 50% chance of pain free outcome. Longer pain duration prior to treatment was the only independent predictor of increased recurrence risk (HR 1.038, 95%CI 1.001-1.075; p=0.0412). Conclusions: Radiosurgery is an effective treatment modality for trigeminal neuralgia. Post treatment numbness is associated with good treatment response and higher integrated dose predicts poor outcome after radiosurgery for trigeminal neuralgia. abstract_id: PUBMED:21121798 Does the Gamma Knife dose rate affect outcomes in radiosurgery for trigeminal neuralgia? Object: The object of this study was to determine whether the radiation dose rate affects clinical outcomes in patients who undergo stereotactic Gamma Knife surgery (GKS) to manage typical trigeminal neuralgia (TN). Methods: The authors retrospectively studied pain relief in 165 patients with medically intractable TN, who underwent 80-Gy GKS using a single 4-mm collimator between 1994 and 2005. No patient had received prior radiation treatment. 
The measured relative helmet output factor of the Gamma Knife was 0.8 throughout this interval, and the dose rate varied from 1.21 Gy/minute to 3.74 Gy/minute (median 2.06 Gy/minute). Irradiation time varied from 26.73 to 95.11 minutes. The authors divided patients into a low-dose-rate (LDR) group, in which the dose rate varied from 1.21 to 2.05 Gy/minute, and a high-dose-rate (HDR) group, in which the dose rate varied from 2.06 to 3.74 Gy/minute. Post-GKS, the patients' pain control was determined using the Barrow Neurological Institute (BNI) pain scale. There was no statistically significant difference between groups with respect to history of prior microvascular decompression (p = 0.410) or peripheral neuroablative procedures (p = 0.583). The length of symptoms in patients varied from 3 to 414 months with a median of 84 months (p = 0.698). Median follow-up was 26 months with a maximum of 139 months. Results: Initial pain relief was obtained in 71% of patients in the LDR group and 78% in the HDR group (p = 0.547). Patients who initially obtained improved pain relief (BNI Scores I-IIIa) after GKS maintained pain control for median durations of 52 months (LDR group) and 54 months (HDR group) (p = 0.403). New or increased facial sensory dysfunction was found in 14.5% of patients in the LDR group and in 19.3% of patients in the HDR group (p = 0.479). Conclusions: The authors found that the GKS dose rate did not affect pain control or morbidity within the range of 1.21-3.74 Gy/minute. Cobalt 60 source decay did not affect outcomes of GKS for TN pain management, even for dose rates approximating a 2-half-life decay of the isotope. abstract_id: PUBMED:31085863 The efficacy of gamma knife radiosurgery in patients with trigeminal neuralgia: The initial experience of the Bezmialem Vakif University. Background: Gamma knife stereotactic radiosurgery is, compared with surgical treatment, a less invasive treatment option for patients with trigeminal neuralgia (TN). 
Aim: In this report, we analyzed the effect and safety of gamma knife radiosurgery performed in patients with TN. Materials And Methods: We retrospectively reviewed patients who underwent gamma knife radiosurgery for TN between June 2014 and January 2017. All patients were treated with Leksell Gamma Knife Model C (Elekta, Stockholm, Sweden) with a prescription dose of 40 Gy with a 50% isodose line. The follow-up of the patients was performed 1 week after the procedure and after every 3 months. The pain score of the patients was recorded using the visual analog scale (VAS). Complications were also reviewed. Statistical Analysis: Statistical analysis was performed using the Statistical Package for the Social Sciences software for Windows, version 23.0. Results: Twenty-four patients (10 males, 14 females) were included in the study. The median age of the patients was 62.5 years (range, 34-91 years). The pre-gamma knife median VAS was 10 (range, 5-10), and the median VAS was 1 (range, 0-10) during the last follow-up. The pain decreased in 16 (76%) patients. Two patients (9%) had treatment-related complications. One patient developed hypoesthesia along the dermatome of the maxillary branch of the fifth cranial nerve and another patient developed facial paresis, which recovered after the usage of steroids for 3 months. Conclusion: Gamma knife radiosurgery is an effective and safe treatment for patients with TN with an acceptable pain control rate. Answer: Yes, the Gamma Knife dose rate does affect outcomes in radiosurgery for trigeminal neuralgia. Studies have shown that higher dose rates in Gamma Knife radiosurgery (GKRS) may result in earlier and more complete pain relief for patients with trigeminal neuralgia (TN) (PUBMED:26252452). A decrease in dose rate by 1 Gy/min was associated with a 17% decrease in pain intensity at its worst and a 22% decrease in pain interference with activities of daily living (PUBMED:26252452). 
Furthermore, a decrease in dose rate by 1.5 Gy/min corresponded to 31.8% less improvement in the overall severity of pain (PUBMED:36275137). Higher dose rates were also associated with a lower likelihood of recurrence of pain in longer-term follow-up (PUBMED:26252452). However, one study found that the dose rate did not affect pain control or morbidity within the range of 1.21-3.74 Gy/minute, suggesting that cobalt 60 source decay did not affect outcomes of GKRS for TN pain management within that specific dose rate range (PUBMED:21121798). Despite this, the overall evidence suggests that treatment time and dose rate are independent predictors of pain outcomes, and prescription doses should be customized to ensure iso-effective treatments while accounting for the possible increase in adverse effects at the highest biologically-equivalent doses (PUBMED:36275137).
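Two quantitative relations sit behind the dose-rate discussion above: the Gamma Knife dose rate falls as the cobalt-60 sources decay (the standard physical half-life of Co-60 is about 5.27 years), and beam-on time scales inversely with the effective dose rate. A back-of-the-envelope sketch follows; the function names are ours, and the output-factor relation is our reading of the PUBMED:21121798 numbers (which report a relative helmet output factor of 0.8), not a formula stated in the paper:

```python
CO60_HALF_LIFE_YEARS = 5.27  # standard physical half-life of cobalt-60

def decayed_dose_rate(initial_gy_per_min: float, years: float) -> float:
    """Dose rate after the Co-60 sources have decayed for `years`."""
    return initial_gy_per_min * 0.5 ** (years / CO60_HALF_LIFE_YEARS)

def beam_on_time_min(dose_gy: float, rate_gy_per_min: float,
                     output_factor: float = 0.8) -> float:
    """Approximate irradiation time: prescribed dose / effective dose rate."""
    return dose_gy / (output_factor * rate_gy_per_min)

# After two half-lives (~10.5 years) the rate is a quarter of its initial value:
print(f"{decayed_dose_rate(3.74, 2 * CO60_HALF_LIFE_YEARS):.3f} Gy/min")
# An 80-Gy prescription at the study's highest dose rate and 0.8 output factor:
print(f"{beam_on_time_min(80, 3.74):.2f} min")
```

The second print gives 26.74 min, essentially the 26.73-minute minimum irradiation time reported in the PUBMED:21121798 abstract; the longest reported times correspond to the lowest dose rates late in the source's life.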
Instruction: An investigation of the relationship between energy density and endovenous laser ablation success: does energy density matter? Abstracts: abstract_id: PUBMED:27178405 Factors affecting optimal linear endovenous energy density for endovenous laser ablation in incompetent lower limb truncal veins - A review of the clinical evidence. Objectives The objective is to identify the factors that affect the optimal linear endovenous energy density (LEED) to ablate incompetent truncal veins. Methods We performed a literature review of clinical studies, which reported truncal vein ablation rates and LEED. A PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analysis) flow diagram documents the search strategy. We analysed 13 clinical papers which fulfilled the criteria to be able to compare results of great saphenous vein occlusion as defined by venous duplex ultrasound, with the LEED used in the treatment. Results Evidence suggests that the optimal LEED for endovenous laser ablation of the great saphenous vein is >80 J/cm and <100 J/cm in terms of optimal closure rates with minimal side-effects and complications. Longer wavelengths targeting water might have a lower optimal LEED. A LEED <60 J/cm has reduced efficacy regardless of wavelength. The optimal LEED may vary with vein diameter and may be reduced by using specially shaped fibre tips. Laser delivery technique and type as well as the duration time of energy delivery appear to play a role in determining LEED. Conclusion The optimal LEED to ablate an incompetent great saphenous vein appears to be >80 J/cm and <95 J/cm based on current evidence for shorter wavelength lasers. There is evidence that longer wavelength lasers may be effective at LEEDs of <85 J/cm. abstract_id: PUBMED:37172934 A randomized clinical trial to assess the impact of laser power with constant linear endovenous energy density on outcomes of endovenous laser ablation (SLEDGE trial). 
Objective: To date, conflicting evidence has been reported regarding the energy settings to use during endovenous laser ablation (EVLA). In the present study, we evaluated the outcomes of EVLA of the great saphenous veins (GSVs) using different power settings with the same linear endovenous energy density (LEED) of ∼70 J/cm. Methods: We performed a single-center, randomized, controlled noninferiority trial with a blinded outcome assessment of patients with varicose veins of the GSV who underwent EVLA with a wavelength of 1470 nm and a radial fiber. The patients were randomly assigned to three groups according to the energy setting: group 1, 5 W power and an automatic fiber traction speed of 0.7 mm/s (LEED, 71.4 J/cm); group 2, 7 W and 1.0 mm/s (LEED, 70 J/cm); and group 3, 10 W and 1.5 mm/s (LEED, 66.7 J/cm). The primary outcome was the rate of GSV occlusion at 6 months. The secondary outcomes were pain intensity along the target vein the next day and at 1 week and 2 months after EVLA, the necessity for analgesics, and the occurrence of significant complications. Results: From February 2017 to June 2020, 245 lower extremities of 203 patients were enrolled. Groups 1, 2, and 3 included 83, 79, and 83 limbs, respectively. At 6 months of follow-up, 214 lower extremities were examined with duplex ultrasound. GSV occlusion was observed in 72 of 72 limbs (100%; 95% confidence interval [CI], 100%-100%) in group 1 and 70 of 71 limbs (98.6%; 95% CI, 97%-100%) in groups 2 and 3 (P < .05 for noninferiority). No difference was found in the pain level, necessity for analgesics, or rate of any other complications. Conclusions: The technical results, pain level, and complications of EVLA were not associated with the combination of energy power (5-10 W) and the speed of automatic fiber traction when a similar LEED of ∼70 J/cm was reached. 
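Since laser power is energy per unit time (1 W = 1 J/s), the LEED in each SLEDGE arm follows directly from the power setting and the automatic fiber traction speed. A minimal arithmetic sketch reproducing the per-arm values quoted above (the function name is ours, not from the trial):

```python
def leed_j_per_cm(power_watts: float, pullback_mm_per_s: float) -> float:
    """Linear endovenous energy density: joules delivered per cm of vein.

    1 W = 1 J/s, so LEED (J/cm) = power (W) / pullback speed (cm/s);
    the SLEDGE traction speeds are reported in mm/s, hence the /10.
    """
    return power_watts / (pullback_mm_per_s / 10.0)

# The three randomized arms: (power in W, automatic traction speed in mm/s)
for group, (power, speed) in {1: (5, 0.7), 2: (7, 1.0), 3: (10, 1.5)}.items():
    print(f"group {group}: LEED = {leed_j_per_cm(power, speed):.1f} J/cm")
```

Rounded to one decimal this reproduces the reported 71.4, 70, and 66.7 J/cm, which is how the trial could vary power twofold while holding LEED near 70 J/cm.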
abstract_id: PUBMED:36749579 Linear endovenous energy density (LEED) should always be quoted with the power used in endovenous thermal ablation - results from an in-vitro porcine liver model study. Background: Linear Endovenous Energy Density (LEED) is the energy used per cm of vein in endovenous thermal ablation (EVTA) but does not include time. This study examines the effect of time during EVTA. Method: EVTA was performed in a previously validated porcine liver model. LEEDs of 40, 60, 80, and 100 J/cm, using different powers, were repeated 5 times each. Thermal spread, tissue carbonisation and device-tissue sticking during treatment were recorded. Result: LEED positively correlated to thermal spread and carbonisation of the tissue. Power was correlated with carbonisation but not thermal spread. Pullback speed had no correlation with thermal spread or carbonisation. Catheter sticking occurred at powers ≥15 W or LEEDs ≥80 J/cm. Conclusion: LEED is a good measure of EVTA but does not include time. Power, which does include time, correlates with carbonisation and with device-tissue sticking. The power used must be quoted with the LEED. abstract_id: PUBMED:18760628 An investigation of the relationship between energy density and endovenous laser ablation success: does energy density matter? Purpose: To assess the relationship between energy density and the success of endovenous laser ablation (EVLA) treatment. Materials And Methods: A total of 586 EVLAs were performed in a period of 35 months. Retrospective chart review was performed, and data collected included the patients' age, sex, and history of venous stripping procedures, as well as the name, laterality, and length of the treated vein segment(s) and the total energy delivered. Energy density was calculated by dividing total energy delivered (in J) by the length of vein (in cm). Energy density selection was based on the treating interventionalist's preference.
Ablated segments were grouped into those treated with less than 60 J/cm, 60-80 J/cm, 81-100 J/cm, and more than 100 J/cm. Failure of EVLA was defined by recanalization of any portion of the treated vein during follow-up as assessed by duplex Doppler ultrasound examination. Failure rates were compared with the chi-square test and Wilcoxon rank-sum test. Results: A total of 471 segments were included in the analysis with an average follow-up period of 5 months (range, 0.2-28.7 months). Overall, 11 failures were encountered, including four in the group treated with less than 60 J/cm (n = 109; 4%), two in the 60-80-J/cm group (n = 77; 3%), four in the 81-100-J/cm group (n = 169; 3%), and one in the group treated with more than 100 J/cm (n = 116; 1%). There was no statistically significant difference in failure rates among energy density ranges. Conclusion: EVLA has a low failure rate that is not affected by energy density. abstract_id: PUBMED:24878667 Histological difference between pulsed wave laser and continuous wave laser in endovenous laser ablation. Background: Endovenous laser ablation of saphenous veins has become popular as a minimally invasive treatment for chronic venous insufficiency. However, adverse effects after endovenous laser ablation using a continuous wave laser still remain. A pulsed wave with a sufficiently short pulse duration and a sufficiently long thermal relaxation time may avoid the excess energy delivery that leads to perforation of the vein wall. Method: (1) Free radiation: Laser is radiated in blood for 10 s. (2) Endovenous laser ablation: Veins were filled with blood and placed in saline. Endovenous laser ablations were performed. Results: (1) There were clots on the fiber tips with the continuous wave laser but no clots with the pulsed wave laser. (2) In 980-nm continuous wave, four of 15 specimens had ulcers and 11 of 15 had perforation.
In 1470-nm continuous wave with 120 J/cm of linear endovenous energy density, two of three presented ulcers and one of three showed perforation. In 1470-nm continuous wave with 60 J/cm of linear endovenous energy density, two of four had ulcers and two of four had perforation. In 1320-nm pulsed wave, there were neither ulcers nor perforation in the specimens. Conclusions: While endovenous laser ablation using continuous wave results in perforation in many cases, pulsed wave does not lead to perforation. abstract_id: PUBMED:32631172 Endovenous laser ablation: A comprehensive review. Objective: To provide an evidence-based overview of endovenous laser ablation and describe its role as an effective and durable technique for the management of superficial venous insufficiency. Methods: The published literature on the treatment of varicose veins using endovenous laser ablation was reviewed. The literature search focused on the history of endovenous laser ablation, its safety and durability, known complications, and differences in outcomes based on the iterations of fiber type and laser wavelength. Results: Treatment safety and efficacy of endovenous laser ablation appear to be based on the amount of energy administered over a defined distance, or the linear endovenous energy density. The ideal linear endovenous energy density varies with the laser wavelength and fiber-type. Post-operative pain and bruising may be reduced by the use of higher wavelength fibers or the use of radial or jacket-tip fibers as compared to bare-tip fibers. The incidence of endothermal heat-induced thrombosis remains low and has declined with increasing experience. Reports have demonstrated a greater than 90% technical success rate with saphenous endovenous laser ablation, long-term durability of ablation, and commensurate improvement in quality of life. 
Conclusions: Endovenous laser ablation is a safe and durable treatment option for the management of incompetent superficial and perforator veins of the lower extremities. As an endothermal technology, it remains a key component of the standard of care for the treatment of chronic venous insufficiency. abstract_id: PUBMED:27295103 Endovenous ablation of saphenous vein varicosis In the past 15 years, minimally invasive endovenous treatments for varicose veins have become widely accepted. The efficacy of the different endovenous methods and their minimal postoperative side effects are now well documented in a large number of evidence-based publications. For insufficiency of the saphenous veins, the recent NICE Guidelines (2013) on varicose vein treatment recommend endovenous thermal ablation with radiofrequency or laser as the first-line treatment, ultrasound-guided sclerotherapy as the second line, and classic surgical treatment with stripping and high ligation as the third line. abstract_id: PUBMED:30448152 The relative roles of power, linear endovenous energy density, and pullback velocity in determining short-term success after endovenous laser ablation of the truncal saphenous veins. Objective: The objective of this study was to describe the relative contributions of power output, linear endovenous energy density (LEED), and pullback rate (PBR) in determining successful long-term occlusion of the truncal saphenous veins after endovenous laser ablation (EVLA). Methods: A consecutive series of 203 patients (336 ablated veins) with reflux of the great saphenous vein or small saphenous vein (Clinical, Etiology, Anatomy, and Pathophysiology class C2-C6) defined by duplex ultrasound and clinical criteria were treated with 1470-nm EVLA at a power of 6 to 12 W. Prospective outcomes were evaluated in serial clinical and duplex ultrasound follow-up.
Univariate logistic regression (ULR) and multivariable logistic regression modeling assessed LEED, power output, and PBR as success predictors and optimal settings for sustained closure. Results: Higher power outputs (8-12 W) were significantly better than lower outputs (6-7 W) for successful closure. ULR suggested a ≥90% probability of success for power output >10.34 W (P < .001) and LEED >26.56 J/cm (P = .001). Power output was foremost (P < .001) and LEED second (P < .001), and PBR was insignificant overall (P = .38), becoming significant only at LEED values >26 J/cm (P < .001). Multivariable logistic regression confirmed both power (P < .040) and LEED (P < .008) but not PBR (P = .69) as significant determinants. Clinical side effects were not associated by ULR with power output (P = .14), LEED (P = .71), or PBR (P = .39). Conclusions: Power and LEED are separate but important determinants of short-term EVLA success. Threshold-dependent effects are observed for PBR (LEED ≤26 J/cm or ≥26 J/cm), with significant PBR correlation seen only at higher LEED values. Whereas ideal values for power and LEED differ according to the clinical scenario, our findings suggest that use of higher power outputs and greater LEED values (≥90% success probability achieved with power >10.34 W or LEED >26.56 J/cm) may yield optimal results. abstract_id: PUBMED:38240500 Defining the Parameters for Endovenous Microwave Ablation to Achieve Equivalence With Endovenous Laser Ablation, Using the Porcine Liver Model. Aims: Endovenous microwave ablation (EMWA) is an endovenous thermoablation (EVTA) system to ablate incompetent truncal veins. Early results suggest that EMWA uses more power than endovenous laser ablation (EVLA) to get the same results. We aimed to define the parameters for EMWA, which give the same tissue ablation as EVLA, using the validated porcine liver model. 
Methods: EVLA (1470 nm, 600 micron radial fibre) treatments were performed at 6 W, 8 W and 10 W, at pullback speeds of 6, 7, 8 and 9 s/cm, giving Linear Endovenous Energy Densities (LEEDs) between 36 and 90 J/cm. Each combination of power and pullback was repeated 5 times. We then used EMWA in the same model. Powers of 35-75 W and pullback speeds of 4-9 s/cm were used (LEEDs 140-675 J/cm). Ablation tracts from both devices were analysed by 2 blinded observers, noting thermal spread and carbonisation. Results: For each commonly used parameter for EVLA, we identified a range of parameters for EMWA that produced similar tissue ablation in the porcine liver model. To keep the pullback speeds within the usual range, powers of 35-75 W were needed with EMWA, with mean EMWA LEEDs 3.9-5.8 times higher than EVLA LEEDs. We found that the quicker the pullback speed, the higher the multiple of EMWA LEED we needed to get the same effect. Conclusion: We have identified parameters for EMWA that gave equivalent tissue ablation in the porcine liver model to commonly used EVLA parameters. These need to be validated clinically, but as the model used has already been validated against clinical outcome in endovenous thermal ablation, there is little reason to suspect that these results would not be valid. As the power during EMWA is higher than during EVLA, EMWA LEEDs are approximately 4-6 times higher than EVLA LEEDs to achieve the same thermal effect on the tissues. abstract_id: PUBMED:29804500 Comparison of the effects of endovenous laser ablation at 1470 nm versus 1940 nm and different energy densities. Objectives: To evaluate histological and immunohistochemical changes in the great saphenous vein after endovenous laser ablation at two different wavelengths (1470 vs. 1940 nm) and linear endovenous energy density values (50 vs. 100 J/cm).
Method: Segments were obtained from the conventional eversion removal of great saphenous vein and divided into a control group and four groups for ex vivo irradiation (control group; A: 1470 nm, 50 J/cm; B: 1470 nm, 100 J/cm; C: 1940 nm, 50 J/cm; D: 1940 nm, 100 J/cm). Fifty venous segments ( n = 10/group) were analyzed. Changes were classified into low-temperature changes, moderate-temperature changes, high-temperature changes, and very high-temperature changes. Results: In the intima, low-temperature changes + moderate-temperature changes were significantly more prevalent in group A (65.4%) than in D ( p = 0.001). In the media, low-temperature changes + moderate-temperature changes were achieved mostly in groups A and C (77.4% and 75.0%, respectively). In adventitia fragments, 100% of changes in group A were low-temperature changes + moderate-temperature changes. Conclusions: The 1940-nm laser wavelength with linear endovenous energy density of 100 J/cm was excessively destructive to the intima and media causing a high rate of high-grade thermal damage. These findings corroborate the possibility of using lower linear endovenous energy densities with 1940-nm devices to achieve effective occlusion with less high grade thermal damage to the intima and media, as well as to prevent damages to the adventitia and perivenous tissues, including venous perforation and its attendant clinical consequences. Answer: The relationship between energy density and the success of endovenous laser ablation (EVLA) has been investigated in various studies, and the findings suggest that energy density does indeed matter, although the optimal energy density can vary depending on several factors. One study found that the optimal linear endovenous energy density (LEED) for EVLA of the great saphenous vein is greater than 80 J/cm and less than 100 J/cm, which provides optimal closure rates with minimal side effects and complications. 
It was also noted that longer wavelengths targeting water might have a lower optimal LEED, and that a LEED less than 60 J/cm has reduced efficacy regardless of wavelength. The optimal LEED may vary with vein diameter and may be reduced by using specially shaped fiber tips. Laser delivery technique and type, as well as the duration time of energy delivery, also play a role in determining LEED (PUBMED:27178405). Another study, the SLEDGE trial, evaluated the outcomes of EVLA using different power settings while maintaining a similar LEED of approximately 70 J/cm. The study concluded that the technical results, pain level, and complications of EVLA were not associated with the combination of energy power (5-10 W) and the speed of automatic fiber traction when a similar LEED was reached (PUBMED:37172934). An in-vitro study using a porcine liver model found that LEED positively correlated with thermal spread and carbonization of the tissue. Power, which includes time, correlated with carbonization and with device-tissue sticking. This study emphasized that the power used must be quoted with the LEED, as LEED alone does not include time (PUBMED:36749579). A retrospective chart review of 586 EVLAs found that EVLA has a low failure rate that is not affected by energy density, suggesting that while energy density is a factor, it may not be the sole determinant of EVLA success (PUBMED:18760628). In summary, energy density is an important factor in the success of EVLA, but it is not the only one. The optimal LEED for EVLA varies depending on the wavelength of the laser, the diameter of the vein, the type of fiber tip used, and the technique and duration of energy delivery. Additionally, power settings and the speed of fiber traction also play significant roles in the outcomes of EVLA.
Instruction: Is adjustment of National Eye Institute Visual Function Questionnaire scores for general health necessary in randomized trials? Abstracts: abstract_id: PUBMED:31409130 Noninferiority trials with nonadherence to the assigned randomized treatment. Background: Nonadherence to treatment assignment in a noninferiority randomized trial is especially problematic because it attenuates observed differences between the treatment arms, possibly leading one to conclude erroneously that a truly inferior experimental therapy is noninferior to a standard therapy (inflated type 1 error probability). The Lachin-Foulkes adjustment is an increase in the sample size to account for random nonadherence for the design of a superiority trial with a time-to-event outcome; it has not been explored in the noninferiority trial setting nor with nonrandom nonadherence. Noninferiority trials where patients have knowledge of a personal prognostic risk score may lead to nonrandom nonadherence, as patients with a relatively high risk may be more likely to not adhere to the random assignment to the (reduced) experimental therapy, and patients with a relatively low risk score may be more likely to not adhere to the random assignment to the (more aggressive) standard therapy. Methods: We investigated via simulations the properties of the Lachin-Foulkes adjustment in the noninferiority setting. We considered nonrandom in addition to random nonadherence to the treatment assignment. For nonrandom nonadherence, we used the scenario where a risk score, potentially associated with the between-arm treatment difference, influences patients' nonadherence. A sensitivity analysis is proposed for addressing the nonrandom nonadherence for this scenario. The noninferiority TAILORx adjuvant breast cancer trial, where eligibility was based on a genomic risk score, is used as an example throughout. 
Results: The Lachin-Foulkes adjustment to the sample size improves the operating characteristics of noninferiority trials with random nonadherence. However, to maintain the type 1 error probability, it is critical to adjust the noninferiority margin as well as the sample size. With nonrandom nonadherence that is associated with a prognostic risk score, the type 1 error probability of the Lachin-Foulkes adjustment can be inflated (e.g. doubled) when the nonadherence is larger in the experimental arm than in the standard arm. The proposed sensitivity analysis lessens the inflation in this situation. Conclusion: The Lachin-Foulkes adjustment to the sample size and noninferiority margin is a useful, simple technique for attenuating the effects of random nonadherence in the noninferiority setting. With nonrandom nonadherence associated with a risk score known to the patients, the type 1 error probability can be inflated in certain situations. A proposed sensitivity analysis for these situations can attenuate the inflation. abstract_id: PUBMED:27179253 Statistical lessons learned for designing cluster randomized pragmatic clinical trials from the NIH Health Care Systems Collaboratory Biostatistics and Design Core. Background/aims: Pragmatic clinical trials embedded within health care systems provide an important opportunity to evaluate new interventions and treatments. Networks have recently been developed to support practical and efficient studies. Pragmatic trials will lead to improvements in how we deliver health care and promise to more rapidly translate research findings into practice. Methods: The National Institutes of Health (NIH) Health Care Systems Collaboratory was formed to conduct pragmatic clinical trials and to cultivate collaboration across research areas and disciplines to develop best practices for future studies.
Through a two-stage grant process including a pilot phase (UH2) and a main trial phase (UH3), investigators across the Collaboratory had the opportunity to work together to improve all aspects of these trials before they were launched and to address new issues that arose during implementation. Seven Cores were created to address the various considerations, including Electronic Health Records; Phenotypes, Data Standards, and Data Quality; Biostatistics and Design Core; Patient-Reported Outcomes; Health Care Systems Interactions; Regulatory/Ethics; and Stakeholder Engagement. The goal of this article is to summarize the Biostatistics and Design Core's lessons learned during the initial pilot phase with seven pragmatic clinical trials conducted between 2012 and 2014. Results: Methodological issues arose from the five cluster-randomized trials, also called group-randomized trials, including consideration of crossover and stepped wedge designs. We outlined general themes and challenges and proposed solutions from the pilot phase including topics such as study design, unit of randomization, sample size, and statistical analysis. Our findings are applicable to other pragmatic clinical trials conducted within health care systems. Conclusion: Pragmatic clinical trials using the UH2/UH3 funding mechanism provide an opportunity to ensure that all relevant design issues have been fully considered in order to reliably and efficiently evaluate new interventions and treatments. The integrity and generalizability of trial results can only be ensured if rigorous designs and appropriate analysis choices are an essential part of their research protocols. abstract_id: PUBMED:30826377 The selection of comparators for randomized controlled trials of health-related behavioral interventions: recommendations of an NIH expert panel. Objectives: To provide recommendations for the selection of comparators for randomized controlled trials of health-related behavioral interventions. 
Study Design And Setting: The National Institutes of Health Office of Behavioral and Social Science Research convened an expert panel to critically review the literature on control or comparison groups for behavioral trials and to develop strategies for improving comparator choices and for resolving controversies and disagreements about comparators. Results: The panel developed a Pragmatic Model for Comparator Selection in Health-Related Behavioral Trials. The model indicates that the optimal comparator is the one that best serves the primary purpose of the trial but that the optimal comparator's limitations and barriers to its use must also be taken into account. Conclusion: We developed best practice recommendations for the selection of comparators for health-related behavioral trials. Use of the Pragmatic Model for Comparator Selection in Health-Related Behavioral Trials can improve the comparator selection process and help resolve disagreements about comparator choices. abstract_id: PUBMED:11448327 Development of the 25-item National Eye Institute Visual Function Questionnaire. Objective: To develop and test the psychometric properties of a 25-item version of the National Eye Institute Visual Function Questionnaire (NEI VFQ-25). Design: Prospective observational cohort study of persons with 1 of 5 chronic eye diseases or low vision who were scheduled for nonurgent visits in ophthalmology practices and a reference sample of persons without eye disease. Setting: Eleven university-based ophthalmology practices and the NEI Clinical Center. Patients: Eligible participants had to have 1 of the following eye conditions: age-related cataracts, age-related macular degeneration, diabetic retinopathy, primary open-angle glaucoma, cytomegalovirus retinitis, or low vision from any cause. Seven of the 12 sites also enrolled persons in a reference sample. 
Reference sample participants had no evidence of underlying eye disease but were scheduled for either screening eye examinations or correction of refractive error. All eligible persons had to be 21 years or older, English speaking, and cognitively able to give informed consent and participate in a health status interview. Measurements And Main Results: To provide the data needed to create the NEI VFQ-25, all subjects completed an interview that included the 51-item NEI VFQ. Estimates of internal consistency indicate that the subscales of the NEI VFQ-25 are reliable. The validity of the NEI VFQ-25 is supported by high correlations between the short- and long-form versions of the measure, observed between-group differences in scores for persons with different eye diseases of varying severity, and the moderate-to-high correlations between the NEI VFQ-25 subscales that have the most to do with central vision and measured visual acuity. Conclusions: The reliability and validity of the NEI VFQ-25 are comparable to those of the 51-item NEI VFQ field test version of the survey. This shorter version will be more feasible in settings such as clinical trials where interview length is a critical consideration. In addition, preliminary analyses indicate that the psychometric properties of the NEI VFQ-25 are robust for the eye conditions studied; this suggests that the measure will provide reproducible and valid data when used across multiple conditions of varying severity. abstract_id: PUBMED:18311749 Effects of postmenopausal hormone therapy on rheumatoid arthritis: the women's health initiative randomized controlled trials. Objective: To study the effects of postmenopausal hormone therapy (PHT) on the incidence and severity of rheumatoid arthritis (RA). 
Methods: The Women's Health Initiative randomized controlled trials evaluated the effects of unopposed estrogen (E-alone) and estrogen plus progestin (E+P) compared with placebo on a diverse set of health outcomes over 7.1 and 5.6 years, respectively. RA cases were identified using historical and medication data. The hazard of developing RA was estimated using Cox proportional hazards regression models. Disease symptom severity was estimated using the Short Form 36 (SF-36) and self-reported joint pain scores at baseline and after 1 year. Mean changes in severity were compared using linear regression models. Results: Of the 27,347 participants, 63 prevalent cases and 105 incident cases of RA were identified. A nonsignificant reduction in the risk of developing RA (hazard ratio 0.74; 95% confidence interval [95% CI] 0.51, 1.10) was noted with PHT use. PHT use led to improved SF-36 scores in unadjusted analyses (percent change 12.5%; 95% CI -24.45, -0.57) but not after adjustment for relevant covariates (P = 0.33). Nonsignificant improvements in joint pain scores were seen with PHT use (odds ratio [OR] 4.10; 95% CI 0.83, 20.20). PHT did not improve swelling (OR 1.27; 95% CI 0.08, 19.63) or prevent new joint pains (OR 0.72; 95% CI 0.11, 4.68) in RA participants. Conclusion: There were no statistically significant differences in the risk of developing RA or the severity of RA between the PHT and placebo groups. abstract_id: PUBMED:31869281 Essential Ingredients and Innovations in the Design and Analysis of Group-Randomized Trials. This article reviews the essential ingredients and innovations in the design and analysis of group-randomized trials. The methods literature for these trials has grown steadily since they were introduced to the biomedical research community in the late 1970s, and we summarize those developments. 
We review, in addition to the group-randomized trial, methods for two closely related designs, the individually randomized group treatment trial and the stepped-wedge group-randomized trial. After describing the essential ingredients for these designs, we review the most important developments in the evolution of their methods using a new bibliometric tool developed at the National Institutes of Health. We then discuss the questions to be considered when selecting from among these designs or selecting the traditional randomized controlled trial. We close with a review of current methods for the analysis of data from these designs, a case study to illustrate each design, and a brief summary. abstract_id: PUBMED:34119080 Does Knowledge of Treatment Assignment Affect Patient Report of Symptoms, Function, and Health Status? An Evaluation Using Multiple Myeloma Trials. Objectives: Unblinded trials are common in oncology, but patient knowledge of treatment assignment may bias response to questionnaires. We sought to ascertain the extent of possible bias arising from patient knowledge of treatment assignment. Methods: This is a retrospective analysis of data from 2 randomized trials in multiple myeloma, 1 double-blind and 1 open label. We compared changes in patient reports of symptoms, function, and health status from prerandomization (screening) to baseline (pretreatment but postrandomization) across control and investigational arms in the 2 trials. Changes from prerandomization scores at ~2 and 6 months on treatment were evaluated only across control arms to avoid comparisons between 2 different experimental drugs. All scores were on 0- to 100-point scales. Inverse probability weighting, entropy balancing, and multiple imputation using propensity score splines were used to compare score changes across similar groups of patients. Results: Minimal changes from screening were seen at baseline in all arms. 
In the control arm, mean changes of <7 points were seen for all domains at 2 and 6 months. The effect of unblinding at 6 months in social function was a decline of less than 6 points (weighting: -3.09; 95% confidence interval -8.41 to 2.23; balancing: -4.55; 95% confidence interval -9.86 to 0.76; imputation: -5.34; 95% confidence interval -10.64 to -0.04). Conclusion: In this analysis, we did not find evidence to suggest that there was a meaningful differential effect on how patients reported their symptoms, function or health status after knowing their treatment assignment. abstract_id: PUBMED:8493430 Practical issues in data monitoring of clinical trials: summary of responses to a questionnaire at NIH. A targeted poll was undertaken to compare and contrast models of data monitoring of randomized clinical trials sponsored by the National Institutes of Health. In an attempt to represent the institutes which conduct clinical trials, twelve individuals were selected and asked to respond to a questionnaire specifically prepared for this workshop. The response rate was 100 per cent. Most of the large trials sponsored by the institutes have independent, formally constituted data monitoring committees. There was one institute which does not have any data monitoring committees. The questionnaire is described in detail and a summary of the results is given. abstract_id: PUBMED:34596659 Generalizability of randomized controlled trials in heart failure with reduced ejection fraction. Background: Heart failure (HF) trials have stringent inclusion and exclusion criteria, but limited data exist regarding generalizability of trials. We compared patient characteristics and outcomes between patients with HF and reduced ejection fraction (HFrEF) in trials and observational registries. Methods And Results: Individual patient data for 16 922 patients from five randomized clinical trials and 46 914 patients from two HF registries were included. 
The registry patients were categorized into trial-eligible and non-eligible groups using the most commonly used inclusion and exclusion criteria. A total of 26 104 (56%) registry patients fulfilled the eligibility criteria. Unadjusted all-cause mortality rates at 1 year were lowest in the trial population (7%), followed by trial-eligible patients (12%) and trial-non-eligible registry patients (26%). After adjustment for age and sex, all-cause mortality rates were similar between trial participants and trial-eligible registry patients [standardized mortality ratio (SMR) 0.97; 95% confidence interval (CI) 0.92-1.03] but cardiovascular mortality was higher in trial participants (SMR 1.19; 1.12-1.27). After full case-mix adjustment, the SMR for cardiovascular mortality remained higher in the trials at 1.28 (1.20-1.37) compared to RCT-eligible registry patients. Conclusion: In contemporary HF registries, over half of HFrEF patients would have been eligible for trial enrolment. Crude clinical event rates were lower in the trials, but, after adjustment for case-mix, trial participants had similar rates of survival as registries. Despite this, they had about 30% higher cardiovascular mortality rates. Age and sex were the main drivers of differences in clinical outcomes between HF trials and observational HF registries. abstract_id: PUBMED:9408714 The advantages of community-randomized trials for evaluating lifestyle modification. Observational studies may provide suggestive evidence for the results of behavior change and lifestyle modification, but they do not replace randomized trials for comparing interventions. To obtain a valid comparison of competing intervention strategies, randomized trials of adequate size are the recommended approach. Randomization avoids bias, achieves balance (on average) of both known and unknown predictive factors between intervention and comparison groups, and provides the basis of statistical tests. 
The value of randomization is as relevant when investigating community interventions as it is for studies that are directed at individuals. Randomization by group is less efficient statistically than randomization by individual, but there are reasons why randomization by group (such as community) may be chosen, including feasibility of delivery of the intervention, political and administrative considerations, avoiding contamination between individuals allocated to competing interventions, and the very nature of the intervention. One example is the Community Intervention Trial for Smoking Cessation (COMMIT), which involved 11 matched pairs of communities and randomized within these pairs to active community-level intervention versus comparison. For analysis of results, community-level permutation tests (and corresponding test-based confidence intervals) can be designed based on the randomization distribution. The advantages of this approach are that it is robust, and the unit of randomization is the unit of analysis, yet it can incorporate individual-level covariates. Such covariates can play a role in imputation for missing values, adjustment for imbalances, and separate analyses in demographic subsets (with appropriate tests for interaction). A community-randomized trial can investigate a multichannel community-based approach to lifestyle modification, thus providing generalizability coupled with a rigorous evaluation of the intervention. Answer: The abstracts provided do not directly address the question of whether adjustment of National Eye Institute Visual Function Questionnaire (NEI VFQ-25) scores for general health is necessary in randomized trials. The NEI VFQ-25 is a tool developed to measure the influence of visual disability and visual symptoms on general health and the quality of life in patients with chronic eye diseases (PUBMED:11448327). 
However, the abstracts focus on various aspects of clinical trial design, conduct, and analysis, such as nonadherence in noninferiority trials (PUBMED:31409130), design considerations in pragmatic clinical trials (PUBMED:27179253), selection of comparators in health-related behavioral trials (PUBMED:30826377), effects of postmenopausal hormone therapy on rheumatoid arthritis (PUBMED:18311749), design and analysis of group-randomized trials (PUBMED:31869281), potential bias from patient knowledge of treatment assignment (PUBMED:34119080), data monitoring in clinical trials (PUBMED:8493430), generalizability of heart failure trials (PUBMED:34596659), and the advantages of community-randomized trials (PUBMED:9408714). To answer the question about the necessity of adjusting NEI VFQ-25 scores for general health in randomized trials, one would need to consider the specific objectives of the trial and the role of general health in the outcomes being measured. If general health is a potential confounder or if there is a need to isolate the impact of eye disease on quality of life from overall health status, then adjusting for general health may be appropriate. However, this decision should be based on the study's hypothesis, the relationship between eye disease and general health, and the statistical analysis plan. Since none of the abstracts provided discuss this specific issue, further literature review or expert consultation would be necessary to make an informed decision on this matter.
Instruction: Do nurse practitioners make a difference in provision of health counseling in hospital outpatient departments? Abstracts: abstract_id: PUBMED:15543924 Do nurse practitioners make a difference in provision of health counseling in hospital outpatient departments? Purpose: This study examined whether nurse practitioners (NPs) had any impact on the type and amount of health counseling provided during patient visits to hospital outpatient departments (OPDs). Data Sources: This is a secondary data analysis of the National Hospital Ambulatory Medical Care Survey from 1997 to 2000. Only patient visits to hospital OPDs were included. Rates of health counseling provided at patient visits involving an NP were compared with those without an NP. Adjusted odds ratio was reported separately for each type of health counseling provided at patient visits for nonillness care, for chronic problems, and for acute problems. Conclusions: Health counseling for diet, exercise, human immunodeficiency virus (HIV) and sexually transmitted disease (STD) prevention, tobacco use, and injury prevention are more likely to be provided at nonillness care visits involving an NP than at those not involving an NP. The presence of an NP is associated not only with higher rates of counseling for diet, exercise, and tobacco use provided at patient visits for chronic problems but also with higher rates of counseling for diet and HIV/STD prevention provided at patient visits for acute problems. Implications For Practice: This study indicates an important role NPs can play in providing preventive services in outpatient hospital departments. The findings reflect the emphasis of the NP education on health counseling and patient education in clinical practice. abstract_id: PUBMED:9885533 Physician assistants and nurse practitioners in hospital outpatient departments, 1993-1994. 
Objective: To describe the characteristics of visits to physician assistants (PAs) and nurse practitioners (NPs) in hospital outpatient departments in the United States. Methods: Data from the 1993 and 1994 National Hospital Ambulatory Medical Care Surveys were used to compare hospital outpatient department visits in which the patient was seen by a PA or NP, or both, with outpatient visits to all practitioners. Results: An average of 64 million annual outpatient visits were made in 1993-1994, and patients were seen by PAs, NPs, or both, at 8% of these visits. PA-NP visits were more likely than total visits to occur in the Midwest, in non-urban areas, and in obstetric-gynecology clinics, and a higher proportion involved patients younger than age 25. Smaller differences were found between PA-NP visits and total outpatient visits in "reason for visit," "principal diagnosis," and "medication prescribed." Conclusion: Beyond the care they provide in physicians' offices and other non-hospital settings, PAs and NPs make an important contribution to ambulatory health care delivery in hospital outpatient departments. abstract_id: PUBMED:12964142 Characteristics of patient visits to nurse practitioners in hospital outpatient departments. Nurse practitioners (NPs) are the largest and the fastest growing groups among nonphysician practitioners in the United States. However, there has been lack of studies on the supply, demand, and use of nurse practitioners in hospital outpatient departments (OPDs) across the nation. Using the National Hospital Ambulatory Medical Care Survey (1997-2000), this study describes patient visits to NPs in general medicine, pediatrics, and obstetrics/gynecology clinics in hospitals across the nation. The percentage of patient visits involving NPs increased from 5.9 percent in 1997 to 7.3 percent in 2000. NPs have greater roles in hospital OPDs in nonmetropolitan areas than in metropolitan areas. 
Regional difference in patient visits to NPs supports the relationship between the practice environment and the use of NPs. As expected, NPs continue to serve the health care needs of women and children in hospital OPDs. Of all OPD visits with a NP service, NPs saw patients with no presence of a physician in 82 percent of these visits. As the role of the NP evolves in the U.S. health care delivery system, further studies on the clinical practice of NPs in hospital OPDs can help evaluate the impact of NPs in providing quality of patient care at minimum cost. abstract_id: PUBMED:9838757 Characteristics of patient visits to nurse practitioners and physician assistants in hospital outpatient departments. Many authors have described differences between nurse practitioners and physician assistants. Most studies have compared physician with nonphysician providers' practice. Few studies have compared nurse practitioners and physician assistants, and none has used a national data base. This exploratory, atheoretical research examined which of the following characteristics predicted patients being seen by nurse practitioners and physician assistants: patient and hospital demographics, diagnosis, diagnostic/screening services, therapeutic services, and disposition of the visit. The data set used for analysis was the 1992 National Hospital Ambulatory Medical Care Survey. Based on a multistage probability design yielding national estimates for patient visits in hospital outpatient settings, the National Hospital Ambulatory Medical Care Survey showed that there were 2,847 weighted patient visits to either nurse practitioners or physician assistants (4.6 million patient visits using national estimates). Results of multivariate logistic regression suggest that nurse practitioners were the most likely nonphysician provider for outpatients receiving more health promotion and counseling (therapeutic) services and for those needing women's and children's services. 
Outpatients in rural areas predicted visits to physician assistants. As more nonphysician providers enter the work force, the results of this research may assist with understanding the utilization of nurse practitioners and physician assistants in primary care. abstract_id: PUBMED:24128950 Counseling and provision of long-acting reversible contraception in the US: national survey of nurse practitioners. Objective: Nurse practitioners (NPs) provide frontline care in women's health, including contraception, an essential preventive service. Their importance for contraceptive care will grow, with healthcare reforms focused on affordable primary care. This study assessed practice and training needs to prepare NPs to offer high-efficacy contraceptives - intrauterine devices (IUDs) and implants. Method: A US nationally representative sample of nurse practitioners in primary care and women's health was surveyed in 2009 (response rate 69%, n=586) to assess clinician knowledge and practices, guided by the CDC US Medical Eligibility Criteria for Contraceptive Use. Results: Two-thirds of women's health NPs (66%) were trained in IUD insertions, compared to 12% of primary care NPs. Contraceptive counseling that routinely included IUDs was low overall (43%). Nurse practitioners used overly restrictive patient eligibility criteria, inconsistent with CDC guidelines. Insertion training (aOR=2.4, 95% CI: 1.10-5.33) and knowledge of patient eligibility (aOR=2.9, 95% CI: 1.91-4.32) were associated with IUD provision. Contraceptive implant provision was low: 42% of NPs in women's health and 10% in primary care. Half of NPs desired training in these methods. Conclusion: Nurse practitioners have an increasingly important position in addressing high unintended pregnancy in the US, but require specific training in long-acting reversible contraceptives.
abstract_id: PUBMED:12462818 Documentation of health risks and health promotion counseling by emergency department nurse practitioners and physicians. Purpose: The purpose of this study was to examine medical record documentation of health risk factors and health promotion discharge counseling by nurse practitioners and physicians practicing in an emergency department in the U.S. midwest. Methods: In this two-group comparative study researchers examined random-stratified medical records of 305 nonacute ambulatory patients for selected health risk factors, including smoking, alcohol use, elevated blood pressure, obesity, and dental caries. Results: Fifty-nine percent of this sample of relatively young adults (mean age = 33) had one or more health-risk factors. According to medical record documentation, only 22% of these adults, with nonacute problems, received health promotion counseling. Multivariate analyses indicated that nurse practitioners were slightly more likely to provide smoking cessation counseling than were physicians. Conclusions: Many opportunities for identification of health risks and follow-up counseling, as recommended in Healthy People 2000 and by the U.S. Preventive Services Task Force, were not documented. To meet the new goals of Healthy People 2010, health care providers in all settings should identify health risk factors and document health promotion counseling during every patient encounter.
Results: Of the estimated 722 million adult ambulatory care visits during 2000, 90.8% were made to office-based physician practice settings and 9.2% to hospital-based outpatient departments. Consistent with previous reports, the demographic profile of patients who seek primary care in hospital outpatient departments differs from those seen in office-based practices. Provision of health counseling for exercise [OR = 1.4; 95% confidence intervals (CI): 1-1.8], diet (OR = 1.6; 95% CI: 1.2-2.3), breast self-exam (OR = 2; 95% CI: 1.1-3.6) and stress management (OR = 1.7; 95% CI: 1-2.7) during patient visits was more likely to be reported in the office-based practices than in hospital outpatient clinics. The visit-based rates of health counseling for HIV/STD prevention, tobacco use, mental health or injury prevention were low in both settings. Conclusions: There is opportunity to improve rates of preventive counseling in primary care settings and to reduce disparities that exist. Identifying the reasons for these disparities and effective interventions will be important steps in providing equitable care in the area of preventive health counseling. abstract_id: PUBMED:23002812 Collaboration among nurse practitioners and registered nurses in outpatient oncology settings in Canada. Aim: This article is a report on a case study that described and analysed the collaborative process among nurse practitioners and registered nurses in oncology outpatient settings to understand and improve collaborative practice among nurses. Background: Changes in the health system have created new models of care delivery, such as collaborative nursing teams. This has resulted in the increased opportunity for enhanced collaboration among nurse practitioners and registered nurses. The study was guided by Corser's Model of Collaborative Nurse-Physician Interactions (1998). Design: Embedded single case design with multiple units of analysis. 
Methods: Qualitative data were collected in 2010 using direct participant observations and individual and joint (nurse dyads) interviews in four outpatient oncology settings at one hospital in Ontario, Canada. Findings: Thematic analysis revealed four themes: (1) Together Time Fosters Collaboration; (2) Basic Skills: The Brickworks of Collaboration; (3) Road Blocks: Obstacles to Collaboration; and (4) Nurses' Attitudes towards their Collaborative Work. Conclusion: Collaboration is a complex process that does not occur spontaneously. Collaboration requires nurses to not only work together but also spend time socially interacting away from the clinical setting. While nurses possess the conceptual knowledge of the meaning of collaboration, findings from this study showed that nurses struggle to understand how to collaborate in the practice setting. Strategies for improving nurse-nurse practitioner collaboration should include: the support and promotion of collaborative practice among nurses by hospital leadership and the development of institutional and organizational education programmes that would focus on creating innovative opportunities for nurses to learn about intraprofessional collaboration in the practice setting. abstract_id: PUBMED:8581241 Nurse practitioners in major accident and emergency departments: a national survey. Use of nurse practitioners in major accident and emergency (A&E) departments is rapidly increasing: currently they are used in 30% of such departments and this is expected to rise to 63% by the end of 1995. Most are trained by a formal programme in the employing hospital but 12% claim to have no formal training. The nurse practitioner could prescribe a limited range of drugs in 82% of major departments with 'official' nurse practitioners, but radiograph requesting was permitted in only 57% of such departments: of those not able to request radiographs, 95% blamed radiographers for preventing this. 
abstract_id: PUBMED:36735568 Contraceptive counseling for adolescents in the emergency department: A novel curriculum for nurse practitioners and physician assistants. Abstract: Many adolescents use the emergency department (ED) as their primary source of health care. As a result, the ED serves as a unique opportunity to reach adolescents. Although many adolescent visits to the ED are related to reproductive health, ED providers report barriers to providing this care, including lack of training. Nurse practitioners (NPs) and physician assistants (PAs) serve a vital role in the provision of consistent care to adolescents in the ED. The purpose of this study was to create a curriculum to train NPs and PAs at two pediatric institutions to provide patient-centered contraceptive counseling to adolescents in the pediatric ED regardless of their chief complaint. To do this, we created a four-part webinar followed by an in-person training session. Participants completed training and then conducted counseling sessions with adolescents in the ED. Counseling sessions were recorded and reviewed for fidelity to delineated counseling principles, and data from post-counseling surveys were collected. 27 NPs and PAs completed the training and conducted 99 counseling sessions. Nearly all sessions incorporated essential content and communication principles such as shared decision making (90%) and teach-back methods (75%). All NPs and PAs who participated reported satisfaction and subjective improvement in knowledge and competence from the training. This curriculum offers a novel and feasible approach to train NPs and PAs to deliver patient-centered contraception counseling to adolescents in the ED setting, and it can serve as a model for how to educate different providers to incorporate reproductive health education into the busy ED visit. Answer: Yes, nurse practitioners (NPs) do make a difference in the provision of health counseling in hospital outpatient departments (OPDs). 
A study examining the impact of NPs on the type and amount of health counseling provided during patient visits to hospital OPDs found that health counseling for diet, exercise, HIV and STD prevention, tobacco use, and injury prevention was more likely to be provided at nonillness care visits involving an NP than at those not involving an NP. Additionally, the presence of an NP was associated with higher rates of counseling for diet, exercise, and tobacco use provided at patient visits for chronic problems, as well as higher rates of counseling for diet and HIV/STD prevention provided at patient visits for acute problems (PUBMED:15543924). This indicates an important role that NPs can play in providing preventive services in outpatient hospital departments, reflecting the emphasis of NP education on health counseling and patient education in clinical practice.
Instruction: Survival outcomes for patients with stage IVB vulvar cancer with grossly positive pelvic lymph nodes: time to reconsider the FIGO staging system? Abstracts: abstract_id: PUBMED:25524458 Survival outcomes for patients with stage IVB vulvar cancer with grossly positive pelvic lymph nodes: time to reconsider the FIGO staging system? Objective: To evaluate treatment outcomes for patients with vulvar cancer with grossly positive pelvic lymph nodes (PLNs). Methods: From a database of 516 patients with vulvar cancer, we identified patients with grossly positive PLNs without distant metastasis at initial diagnosis. We identified 20 patients with grossly positive PLNs; inclusion criteria included PLN 1.5 cm or larger in short axis dimension on CT/MRI (n=11), FDG-avid PLN on PET/CT (n=3), or biopsy-proven PLN disease (n=6). Ten patients were treated with chemoradiation therapy (CRT), 4 with RT alone, and 6 with various combinations of surgery, RT or CRT. Median follow-up time for patients who had not died of cancer was 47 months (range, 4-228 months). Results: Mean primary vulvar tumor size was 6.4 cm; 12 patients presented with 2009 AJCC T2 and 8 with T3 disease. All patients had grossly positive inguinal nodes, and the mean inguinal nodal diameter was 2.8 cm. The 5-year overall survival and disease-specific survival rates were 43% and 48%, respectively. Eleven patients had recurrences, some at multiple sites. There were 9 recurrences in the vulva, but no isolated nodal recurrences. Four patients developed distant metastasis within 6 months of starting radiation therapy. Conclusions: Aggressive locoregional treatment can lead to favorable outcomes for many patients with grossly involved PLNs, comparable to those of patients with grossly involved inguinal nodes only. We recommend modification of the FIGO stage IVB classification to more accurately reflect the relatively favorable prognosis of patients with PLN involvement.
abstract_id: PUBMED:26959073 The prognostic role of FIGO stage in patients with vulvar cancer: a systematic review and meta-analysis. Objective: To perform a meta-analysis examining the survival of patients with vulvar cancer based on the 2009 International Federation of Gynecology and Obstetrics (FIGO) staging system. Methods: Medline, PubMed, and Cochrane databases were searched until 20 March 2015 for prospective or retrospective studies using the terms vulvar cancer, prognostic/prognosis, survival, recurrence, lymph nodes (LNs), inguinal lymphadenectomy/excision, and staging. The primary outcome was 5 year overall survival (OS), and secondary outcomes were 5 year disease-free survival (DFS) and progression-free survival (PFS). Results: Fourteen retrospective studies were included. The 5 year OS rate decreased with increasing 2009 FIGO stage and number of LN metastasis. FIGO stage I, II, III, and IV patients had 5 year OS rates of 84.0%, 74.6%, 47.8%, and 9.4%, respectively. Pooled estimates showed that the 5 year OS was 84.5% for patients without LN metastasis, and for patients with ≥3 LN metastases the 5 year OS rate was 30.1%. Similarly, the overall 5 year DFS and PFS decreased with the increasing number of LN metastases. The 5 year DFS rate was 87.2% for patients with no LN metastasis and for patients with ≥3 LN metastases was 35.4%. The 5 year PFS rate was 86.6% for patients with no LN metastasis and for patients with ≥3 LN metastases was 27.6%. Limitations: All studies were retrospective studies. DFS and PFS rates in patients with different 2009 FIGO stages and with different mean tumor sizes were not examined due to a limited number of reports. Conclusions: More advanced 2009 FIGO stage and greater number of LN metastases are associated with worse outcomes in patients with vulvar cancer. abstract_id: PUBMED:25759305 Validation of the new FIGO staging system (2009) for vulvar cancer in the Chinese population. 
Objective: A new FIGO staging system for vulvar cancer was issued in 2009. The aim of this study was to identify its value in estimating the outcome of patients with vulvar squamous cell carcinoma (VSCC) in the Chinese population. Methods: A total of 184 patients who underwent radical surgery for VSCC were recruited. Their medical records and pathology slides were reviewed. Disease reclassification was conducted according to the FIGO staging system (2009). The primary outcomes were cause-specific survival (CSS), relapse-free survival (RFS) and overall survival (OS). Results: A total of 76 patients (41.3%) were downstaged and no patients were upstaged in the new FIGO staging system (2009). The stage distribution was as follows: stage I (99), stage II (13), stage III (65) and stage IV (7). According to CSS, the patients were classified into 4 groups: stage IA (group 1), stage IB/II/IIIA (group 2), stage IIIB (group 3), and stage IIIC/IV (group 4) (5-year CSS: 100%, 85%, 34.6% and 0%, respectively). The 5-year CSS was similar among the patients with stage IB, II and IIIA carcinomas (84.4%, 84.6% and 84.8%, respectively, p=0.986), whereas a significant decline in CSS was found with increasing substages of stages IIIA, IIIB and IIIC (84.8%, 34.6%, and 0%, respectively; p<0.001). Conclusions: The 2009 FIGO staging system for VSCC displayed good performance for the subdivisions of stage III VSCC, but it failed to stratify survival well between stages IB, II and IIIA. abstract_id: PUBMED:35565394 Vulvar Cancer: 2021 Revised FIGO Staging System and the Role of Imaging. Vulvar cancer is a rare gynecological malignancy. It constitutes 5-8% of all gynecologic neoplasms, and squamous cell carcinoma is the most common variant. This article aims to review the etiopathogenesis and the revised 2021 International Federation of Gynecology and Obstetrics (FIGO) classification, and to emphasize the role of imaging in the staging of vulvar cancer.
The staging has been regulated by FIGO since 1969 and has been subject to multiple revisions. The previous 2009 FIGO classification was limited in its prognostic capability, which prompted the 2021 revision and the issue of a new FIGO classification. Although vulvar cancer can be visualized clinically, imaging plays a crucial role in the staging of the tumor, assessing the tumor extent, and planning the management. In addition, sentinel lymph node biopsy facilitates the histopathological staging of the draining lymph node, thus enabling early detection of tumor metastases and better survival rates. abstract_id: PUBMED:25842047 Staging for vulvar cancer. Vulvar cancer has been staged by the International Federation of Gynaecology and Obstetrics (FIGO) since 1969, and the original staging system was based on clinical findings only. This system provided a very good spread of prognostic groupings. Because vulvar cancer is virtually always treated surgically, the status of the lymph nodes is the most important prognostic factor, and this can only be determined with certainty by histological examination of resected lymph nodes; FIGO therefore introduced a surgical staging system in 1988. This was modified in 1994 to include a category of microinvasive vulvar cancer (stage IA), because such patients have virtually no risk of lymph node metastases. This system did not give a reasonably even spread of prognostic groupings. In addition, patients with stage III disease were shown to be a heterogeneous group prognostically, and the number of positive nodes and the morphology of those nodes were not taken into account. A new surgical staging system for vulvar cancer was introduced by FIGO in 2009. Initial retrospective analyses have suggested that this new staging system has overcome the major deficiencies in the 1994 system. abstract_id: PUBMED:22367324 Validation of the FIGO 2009 staging system for carcinoma of the vulva.
Background: The previous (1988) International Federation of Gynecology and Obstetrics (FIGO) vulval cancer staging system failed in 3 important areas: (1) stage 1 and 2 disease showed similar survival; (2) stage 3 represented a most heterogeneous group of patients with a wide survival range; and (3) the number and morphology of positive nodes were not taken into account. Objective: To compare the 1988 FIGO vulval carcinoma staging system with that of 2009 with regard to stage migration and prognostication. Methods: Information on all patients treated for vulval cancer at the Queensland Centre for Gynecological Cancers, Australia, between 1988 to the present was obtained. Data included patients' characteristics as well as details on histopathology, treatments, and follow-up. We recorded the original 1988 FIGO stage, reviewed all patients' histopathology information, and restaged all patients to the 2009 FIGO staging system. Data were analyzed using the Kaplan-Meier method to compare relapse-free survival and overall survival. Results: Data from 394 patients with primary vulval carcinoma were eligible for analysis. Patients with stage IA disease remained unchanged. Tumors formerly classified as stage II are now classified as stage IB. Therefore, FIGO 2009 stage II has become rare, with only 6 of 394 patients allocated to stage II. Stage III has been broken down into 3 substages, thus creating distinct differences in relapse-free survival and overall survival. Prognosis of patients with stage IIIC disease is remarkably poor. Conclusion: The FIGO 2009 staging system for vulval carcinoma successfully addresses some concerns of the 1988 system. Especially, it identifies high-risk patients within the heterogeneous group of lymph node-positive patients. abstract_id: PUBMED:28602173 Clinical significance and prognostic value of femoral lymph node metastasis in FIGO stage III vulvar carcinoma. 
Objectives: To determine the clinical significance and prognostic value of femoral lymph node metastasis (FLNM) in patients with International Federation of Gynecology and Obstetrics (FIGO) stage III vulvar carcinoma. Methods: The medical records of patients with vulvar carcinoma who underwent inguinofemoral lymphadenectomy between 1990 and 2013 were retrospectively reviewed. Results: Of 66 patients with stage III vulvar carcinoma, 42 had superficial lymph node metastasis (SLNM) only and 24 had FLNM. Significantly higher rates of extracapsular invasion (P = 0.008), multiple nodal metastasis (P = 0.042), and advanced FIGO substage (P = 0.026) as well as a larger tumor diameter (≥4 cm, P = 0.023) and greater depth of invasion (≥5 mm, P = 0.020) were observed among patients with FLNM compared to those with SLNM only. After a median follow-up of 46 months (range, 6-172 months), 35 patients experienced relapse and 30 died from disease. The 5-year cancer-specific survival (CSS) rates were 70.1% and 30.8% for patients with SLNM only and FLNM, respectively (P = 0.001). In multivariate analysis, only FLNM was found to be an independent risk factor for reduced recurrence-free survival (RFS) and CSS among patients with stage III vulvar cancer (hazard ratio [HR] = 2.277, P = 0.037 for RFS; HR = 2.360, P = 0.042 for CSS). When the FLNM cases were considered together as stage IIIC, significant differences emerged in the RFS (P = 0.002) and CSS (P = 0.004) among the re-divided FIGO substages. Conclusions: FLNM represented an unfavorable status of node metastasis with a worse prognosis compared to that of SLNM alone, and this should be considered in a future FIGO staging system for vulvar cancer. abstract_id: PUBMED:22704951 Impact of the new FIGO 2009 staging classification for vulvar cancer on prognosis and stage distribution. Objective: In 2009, FIGO modified staging of vulvar cancer--the performance of the new classification relative to the prior system has not been assessed. 
We sought to investigate the impact of the 2009 FIGO vulvar cancer staging system on stage distribution and prognostic ability of the 2009 sub-stage classifications in a large cohort of uniformly staged cases with long-term follow-up. Methods: Patients undergoing surgery for vulvar cancer were identified from 2 institutions (Mayo Clinic and Medical University, Gdansk, Poland) using a similar surgical approach. Inclusion criteria required primary surgery for invasive vulvar cancer for cases with >1 mm invasion with complete inguinal/femoral lymphadenectomy. The technique of inguinofemoral node dissection used in both institutions was designed to remove both superficial and deep inguinofemoral nodes. A retrospective review was performed and all cases were assigned stage using the 1988 and 2009 FIGO systems after reviewing pathology slides. Cause-specific survival (CSS, death due to cancer) was estimated using the Kaplan-Meier method and compared using the Cox proportional hazards model for the first 10 years after surgery. Results: A total of 468 patients met inclusion criteria. Thirty-one percent (n=155) were down-staged, and 1 case up-staged using 2009 staging. The new system fails to effectively separate 10-yr CSS for stage I and II cases (p=0.52), while FIGO 1988 failed to separate stages II and III (p=0.41). We observed a difference in survival for stage I and II cases based on tumor diameter. For smaller stage II lesions (≤4 cm vs. >4 cm) we observed no difference in survival compared to all stage IB cases (p=0.25). Considering node-positive disease, patients with 2009 FIGO stages IIIA, IIIB, and IIIC were not significantly different in terms of CSS (p=0.17). However, CSS approached significance between patients with extracapsular vs. intracapsular disease (p=0.072). For stages IIIA and IIIB (excluding extracapsular spread, IIIC), we observed that the number of positive nodes and diameter of lymph node metastasis were not significantly associated with CSS.
When comparing bilateral nodal involvement vs. unilateral cases with at least 2 involved nodes, we found no statistical difference in CSS (p=0.30). Conclusion: This is the largest cohort study to evaluate the effect and prognostic performance of the new FIGO vulvar cancer staging system. The new staging does not stratify survival between stages I and II and reduces CSS in stage I cases. Our results suggest that lesion size in node negative cases is an important prognostic variable that could be addressed in future staging classifications. Among the node positive cases, the current classification results in slight differences in CSS, primarily between intra- and extra-capsular disease and not according to the number of positive nodes and lymph node metastasis diameter. Finally we observe that bilateral nodal disease does not appear to impact CSS, justifying it being omitted from the 2009 staging system and that separating node positive (2009 stage III) from node negative (2009 stage II) cases is justified. abstract_id: PUBMED:31390593 Role of Locoregional Treatment in Vulvar Cancer With Pelvic Lymph Node Metastases: Time to Reconsider FIGO Staging? Background: Vulvar cancer with pelvic nodal involvement is considered metastatic (M1) disease per AJCC staging. The role of definitive therapy and its resulting impact on survival have not been defined. Patients And Methods: Patients with pelvic lymph node-positive vulvar cancer diagnosed in 2009 through 2015 were evaluated from the National Cancer Database. Patients with known distant metastatic disease were excluded. Logistic regression was used to evaluate use of surgery and radiation therapy (RT). Overall survival (OS) was evaluated with log-rank test and Cox proportional hazards modeling (multivariate analysis [MVA]). A 2-month conditional landmark analysis was performed. Results: A total of 1,304 women met the inclusion criteria. Median follow-up was 38 months for survivors. 
Chemotherapy, RT, and surgery were used in 54%, 74%, and 62% of patients, respectively. Surgery was associated with prolonged OS (hazard ratio [HR], 0.58; P<.001) but had multiple significant differences in baseline characteristics compared with nonsurgical patients. In patients managed nonsurgically, RT was associated with prolonged OS (HR, 0.66; P=.019) in MVA. In patients undergoing surgery, RT was associated with better OS (3-year OS, 55% vs 48%; P=.033). Factors predicting use of RT were identified. MVA revealed that RT was associated with prolonged OS (HR, 0.75; P=.004). Conclusions: In this cohort of women with vulvar cancer and positive pelvic lymph nodes, use of RT was associated with prolonged survival in those who did not undergo surgery. Surgery followed by adjuvant RT was associated with prolonged survival compared with surgery alone. abstract_id: PUBMED:34996835 Validation of the 2021 FIGO staging schema for advanced vulvar cancer. Objective: The International Federation of Gynecology and Obstetrics (FIGO) revised the vulvar cancer staging schema in 2021. Previous stage IIIA-B diseases were reclassified based on nodal size (≤5 mm for stage IIIA compared with >5 mm for stage IIIB), and previous stage IVA1 disease based on non-osseous organ extension was reclassified to stage IIIA whereas osseous extension remained as stage IVA. This study sought to validate the 2021 FIGO vulvar cancer staging schema. Methods: This retrospective cohort study examined 889 women with stage III-IV vulvar cancer from 2010 to 2015 in the National Cancer Institute's Surveillance, Epidemiology, and End Results Program. Stage shift and overall survival were assessed by comparing the 2021 and 2009 FIGO staging schemas. Results: Stage shift occurred in 229 (25.8%) patients (upstaged 17.7% and downstaged 8.1%). 
When comparing the new and previous staging schemas, 5 year overall survival rates were 45.6% versus 48.9% for stage IIIA, 47.0% versus 44.2% for stage IIIB, and 13.9% versus 25.1% (interval change -11.2%) for stage IVA diseases. According to the revised staging schema, 5 year overall survival rates were similar for stage IVA and IVB diseases (13.9% vs 14.5%) and for stage IIIA and IIIB disease (45.6% vs 47.0%). For new stage IIIA disease, 5 year overall survival rates differed significantly based on the staging factors (nodal involvement vs non-nodal organ involvement, 48.9% vs 38.7%, difference 10.2%, p=0.038). Conclusion: The 2021 FIGO staging schema results in one in four cases of advanced vulvar cancer being reclassified. Survival rates of patients with new stage IVA disease worsened significantly whereas those of patients with new stage IIIA disease were heterogenous based on the staging factors. The discriminatory ability of the revised 2021 FIGO staging schema for 5 year overall survival rate between patients with stage IIIA and IIIB tumors and those with IVA and IVB tumors is limited in this study population. Answer: The survival outcomes for patients with stage IVB vulvar cancer with grossly positive pelvic lymph nodes suggest that there may be a need to reconsider the FIGO staging system. A study evaluating treatment outcomes for patients with vulvar cancer with grossly positive pelvic lymph nodes (PLNs) without distant metastasis at initial diagnosis found that aggressive locoregional treatment can lead to favorable outcomes for many patients with grossly involved PLNs. The 5-year overall survival and disease-specific survival rates were 43% and 48%, respectively, which is comparable to that of patients with grossly involved inguinal nodes only. This led to the recommendation for modification of the FIGO stage IVB classification to more accurately reflect the relatively favorable prognosis of patients with PLN involvement (PUBMED:25524458). 
Furthermore, a systematic review and meta-analysis examining the survival of patients with vulvar cancer based on the 2009 FIGO staging system showed that the 5-year overall survival rate decreased with increasing FIGO stage and number of lymph node metastases. Specifically, FIGO stage IV patients had a 5-year overall survival rate of only 9.4% (PUBMED:26959073). This indicates that the current FIGO staging system does stratify patients according to prognosis, but there may be room for refinement, especially for those with stage IVB disease. Additionally, the role of locoregional treatment in vulvar cancer with pelvic lymph node metastases has been questioned, with research suggesting that the use of radiation therapy, with or without surgery, is associated with prolonged survival in this patient population. This further supports the notion that the current FIGO staging system, which considers vulvar cancer with pelvic nodal involvement as metastatic (M1) disease, may need to be re-evaluated (PUBMED:31390593). In summary, the evidence suggests that the prognosis for patients with stage IVB vulvar cancer with grossly positive pelvic lymph nodes may be more favorable than previously thought, and that the FIGO staging system could be reconsidered to better reflect patient outcomes and guide treatment decisions.
Instruction: Do MDR1 and SLCO1B1 polymorphisms influence the therapeutic response to atorvastatin? Abstracts: abstract_id: PUBMED:23677857 Do MDR1 and SLCO1B1 polymorphisms influence the therapeutic response to atorvastatin? A study on a cohort of Egyptian patients with hypercholesterolemia. Background: Statins are among the most prescribed drugs worldwide to reduce the risk of cardiovascular events. Interindividual variability in drug response is a major clinical problem and is of concern during drug development. Statins, such as atorvastatin, are taken orally and access to their site of action in the liver is greatly facilitated by both intestinal and hepatic transporters. Objective: To examine the impact of polymorphisms of the multidrug resistance 1 (MDR1) and solute carrier organic anion transporter 1B1 (SLCO1B1) genes on the therapeutic response to atorvastatin as well as the presence of gender-gene interaction. Methods: Serum lipid levels were determined at baseline and 4 weeks following 40 mg/day atorvastatin treatment in 50 Egyptian hypercholesterolemic patients (27 males and 23 females). Identification of MDR1 C3435T and SLCO1B1 A388G gene polymorphisms was performed using a polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method. Results: Treatment with atorvastatin resulted in a mean reduction of total cholesterol (TC), low density lipoprotein cholesterol (LDL-C), and triglyceride (TG) of 8.7%, 9.2%, and 4.1%, respectively, and a mean increase of high density lipoprotein cholesterol (HDL-C) of 1%. Baseline and post-treatment HDL-C levels were statistically significantly higher in the MDR1 TT homozygotes when compared with the CC wild type. The percentage change in TC, LDL-C, TG, and HDL-C did not show any statistically significant difference when compared among the different MDR1 C3435T or SLCO1B1 A388G genotypes.
The SLCO1B1 GG homozygotes showed a decrease in TG, whereas there was an increase in TG following atorvastatin treatment in AA and AG carriers in females; however, males did not show any statistically significant difference. There was no statistically significant association between either the coronary artery disease (CAD) risk factors (family history of CAD, hypertension, diabetes mellitus, smoking) or concomitant medications with the percentage change in different lipid parameters. Conclusion: MDR1 C3435T was associated with baseline and post-treatment HDL-C variation. SLCO1B1 A388G showed gender-related effects on TG change following atorvastatin treatment. None of the comorbidities or the concomitant medications influenced the percentage change of lipid parameters following atorvastatin treatment. The results of this study may lead to an improved understanding of the genetic determinants of lipid response to atorvastatin treatment. abstract_id: PUBMED:26932749 Genetic determinants of lipid-lowering response to atorvastatin therapy in an Indian population. What Is Known And Objective: Statins form the backbone of lipid-lowering therapy for the prevention of cardiovascular disease. However, there is large interindividual variability in clinical response to statin treatment. Several gene variants that can be aligned to either the pharmacokinetics or pharmacodynamics of statin have been proposed as potentially important determinants of statin response. We aimed to study the association of known variations in SLCO1B1, CYP3A4, ABCB1, CYP3A5, ABCG5 and CYP7A1 genes with lipid levels in response to atorvastatin therapy. Methods: Genotypes were determined using multiplex allele-specific polymerase chain reaction in 177 Indian patients, treated with 10 mg of atorvastatin for 8 weeks. Low-density lipoprotein-cholesterol (LDL-C) levels were recorded at baseline and after 8 weeks of atorvastatin treatment. 
Results And Discussion: A total of 177 hypercholesterolaemic patients were genotyped to study genetic determinants of atorvastatin response. The genotype distribution for all polymorphisms investigated was in Hardy-Weinberg equilibrium. In our study, patients with wild-type genotypes of CYP7A1 (rs3808607), CYP3A4 (rs2740574), SLCO1B1 (rs2306283) and variant allele-carrying genotype of ABCB1 (rs2032582, rs1045642) showed significantly greater LDL-cholesterol reductions in response to atorvastatin therapy. What Is New And Conclusion: The variable response to atorvastatin therapy in terms of LDL-cholesterol lowering due to genetic variations in CYP7A1, CYP3A4, SLCO1B1 and ABCB1 is a promising finding. Further validation in large Indian cohorts is required before it can be assessed for clinical utility. abstract_id: PUBMED:32483134 The atorvastatin metabolic phenotype shift is influenced by interaction of drug-transporter polymorphisms in Mexican population: results of a randomized trial. Atorvastatin (ATV) is a blood cholesterol-lowering drug used to prevent cardiovascular events, the leading cause of death worldwide. As pharmacokinetics, metabolism and response vary among individuals, we wanted to determine the most reliable metabolic ATV phenotypes and identify novel and preponderant genetic markers that affect ATV plasma levels. A controlled, randomized, crossover, single-blind, three-treatment, three-period, and six-sequence clinical study of ATV (single 80-mg oral dose) was conducted among 60 healthy Mexican men. ATV plasma levels were measured using high-performance liquid chromatography mass spectrometry. Genotyping was performed by real-time PCR with TaqMan probes. Four ATV metabolizer phenotypes were found: slow, intermediate, normal and fast. Six gene polymorphisms, SLCO1B1-rs4149056, ABCB1-rs1045642, CYP2D6-rs1135840, CYP2B6-rs3745274, NAT2-rs1208, and COMT- rs4680, had a significant effect on ATV pharmacokinetics (P < 0.05). 
The polymorphisms in SLCO1B1 and ABCB1 seemed to have a greater effect and were especially important for the shift from an intermediate to a normal metabolizer. This is the first study that demonstrates how the interaction of genetic variants affect metabolic phenotyping and improves understanding of how SLCO1B1 and ABCB1 variants that affect statin metabolism may partially explain the variability in drug response. Notwithstanding, the influence of other genetic and non-genetic factors is not ruled out. abstract_id: PUBMED:34403130 Pharmacogenomics of statins: lipid response and other outcomes in Brazilian cohorts. Statins are inhibitors of 3-hydroxy-3-methylglutaryl-CoA reductase, a key enzyme in cholesterol biosynthesis, that are highly effective in reducing plasma low-density lipoprotein (LDL) cholesterol and decreasing the risk of cardiovascular events. In recent years, a multitude of variants in genes involved in pharmacokinetics (PK) and pharmacodynamics (PD) have been suggested to influence the cholesterol-lowering response. However, the vast majority of studies have analyzed the pharmacogenetic associations in populations in Europe and the USA, whereas data in other populations, including Brazil, are mostly lacking. This narrative review provides an update of clinical studies on statin pharmacogenomics in Brazilian cohorts exploring lipid-lowering response, adverse events and pleiotropic effects. We find that variants in drug transporter genes (SLCO1B1 and ABCB1) positively impacted atorvastatin and simvastatin response, whereas variants in genes of drug metabolizing enzymes (CYP3A5) decreased response. Furthermore, multiple associations of variants in PD genes (HMGCR, LDLR and APOB) with statin response were identified. Few studies have explored statin-related adverse events, and only ABCB1 but not SLCO1B1 variants were robustly associated with increased risk in Brazil. 
Statin-related pleiotropic effects were shown to be influenced by variants in PD (LDLR, NR1H2) and antioxidant enzyme (NOS3, SOD2, MTHFR, SELENOP) genes. The findings of these studies indicate that statin pharmacogenomic associations are distinctly different in Brazil compared to other populations. This review also discusses the clinical implications of pharmacogenetic studies and the rising importance of investigating rare variants to explore their association with statin response. abstract_id: PUBMED:26857559 A pharmacogenetic pilot study reveals MTHFR, DRD3, and MDR1 polymorphisms as biomarker candidates for slow atorvastatin metabolizers. Background: The genetic variation underlying atorvastatin (ATV) pharmacokinetics was evaluated in a Mexican population. Aims of this study were: 1) to reveal the frequency of 87 polymorphisms in 36 genes related to drug metabolism in healthy Mexican volunteers, 2) to evaluate the impact of these polymorphisms on ATV pharmacokinetics, 3) to classify the ATV metabolic phenotypes of healthy volunteers, and 4) to investigate a possible association between genotypes and metabolizer phenotypes. Methods: A pharmacokinetic study of ATV (single 80-mg dose) was conducted in 60 healthy male volunteers. ATV plasma concentrations were measured by high-performance liquid chromatography mass spectrometry. Pharmacokinetic parameters were calculated by the non-compartmental method. The polymorphisms were determined with the PHARMAchip® microarray and the TaqMan® probes genotyping assay. Results: Three metabolic phenotypes were found in our population: slow, normal, and rapid. Six gene polymorphisms were found to have a significant effect on ATV pharmacokinetics: MTHFR (rs1801133), DRD3 (rs6280), GSTM3 (rs1799735), TNFα (rs1800629), MDR1 (rs1045642), and SLCO1B1 (rs4149056). The combination of MTHFR, DRD3 and MDR1 polymorphisms associated with a slow ATV metabolizer phenotype. 
Conclusion: Further studies using a genetic preselection method and a larger population are needed to confirm these polymorphisms as predictive biomarkers for ATV slow metabolizers. Trial Registration: Australian New Zealand Clinical Trials Registry: ACTRN12614000851662, date registered: August 8, 2014. abstract_id: PUBMED:35968761 SLCO1B1 and ABCB1 variants synergistically influence the atorvastatin treatment response in South Indian coronary artery disease patients. Introduction: Atorvastatin exhibits wide interindividual variability in treatment response, limiting the drug efficacy in coronary artery disease patients. Aim: To study the effect of genetic variants involved in atorvastatin transport/metabolism and correlate their lipid-lowering efficacy. Materials & methods: Genotyping was performed using 5'-hydrolysis probe method (n = 412), and the study evaluated the treatment response in 86 patients. Results: Significant reduction in total cholesterol and low-density lipoprotein cholesterol (LDL-C) were observed in SLCO1B1-rs4149056, rs4363657 and ABCB1-rs1045642 genotypes. The combined genotypes of ABCB1 and SLCO1B1 showed a strong synergistic effect in reducing the total cholesterol and LDL-C. Diabetes and smoking were observed to influence the LDL-C reduction. Conclusion: The genetic variants of SLCO1B1 and ABCB1 predict the lipid-lowering efficacy of atorvastatin, and this may be useful in genotype-guided statin therapy for coronary artery disease patients. abstract_id: PUBMED:21820929 Specific considerations on the prescription and therapeutic interchange of statins Objective: The pharmaceutical industry currently offers six different statins in Spain and there is one more soon to be available. Choosing the most appropriate drug and dose is determined by the therapeutic target (reduction in LDL-C levels). Statin doses that decrease LDL-C at the same percentage are considered equivalent. 
Evaluating the pharmacokinetic characteristics of each statin can be useful when setting selection criteria, helping to determine which statin may be more appropriate for a patient based on their individual characteristics and on the other co-administered drugs. Methods: We reviewed the pharmacokinetic properties of each statin and its possible involvement in drug interactions. Results: CYP3A4 was responsible for the metabolism of lovastatin, simvastatin and atorvastatin; fluvastatin depends on CYP2C9; P-glycoprotein is responsible for decreased atorvastatin, pravastatin, simvastatin and lovastatin concentrations. The OATP1B1 transporter, involved in all statins' access to the hepatocyte except for fluvastatin, is essential for rosuvastatin and pravastatin. Consequently, drugs that inhibit or induce the activity of these isoenzymes or transporters do not affect all statins equally. Conclusion: Pharmacokinetics is important when choosing the most suitable statin and may limit the use of therapeutic interchange programmes when other drugs are present. abstract_id: PUBMED:38182100 Atorvastatin-associated myotoxicity: A toxicokinetic review of pharmacogenetic associations to evaluate the feasibility of precision pharmacotherapy. Atorvastatin (ATV) and other statins are highly effective in reducing cholesterol levels. However, in some patients, the development of drug-associated muscle side effects remains an issue as it compromises adherence to treatment. Since the toxicity is dose-dependent, exploring factors modulating pharmacokinetics (PK) appears fundamental. This review aims to report the current state of knowledge about the genetic susceptibilities influencing the risk of developing ATV-related muscle adverse events through PK modulation.
Multiple single nucleotide polymorphisms (SNPs) in efflux (ABCB1, ABCC1, ABCC2, ABCC4 and ABCG2) and influx (SLCO1B1, SLCO1B3 and SLCO2B1) transporters have been explored for their association with ATV PK modulation or with the development of statin-related myotoxicities (SRM). The most convincing pharmacogenetic association with ATV remains the influence of rs4149056 (c.521 T > C) in SLCO1B1 on ATV PK and pharmacodynamics. This SNP has been robustly associated with increased ATV systemic exposure and, consequently, an increased risk of SRM. Additionally, the SNP rs2231142 (c.421C > A) in ABCG2 has also been associated with increased drug exposure and higher risk of SRM occurrence. SLCO1B1 and ABCG2 pharmacogenetic associations highlight that modulation of ATV systemic exposure is important to explain the risk of developing SRM. However, some novel observations support the hypothesis that additional genes (e.g. SLCO2B1 or ABCC1) might be important for explaining local PK modulations within the muscle tissue, indicating that studying the local PK directly at the skeletal muscle level might pave the way for additional understanding.
The cytotoxicity test demonstrated that the IC50 value of atorvastatin to inhibit the growth and/or viability of Flp-In-293/ABCB1 (2677G) cells was 5.44 ± 0.10 mM, which was significantly lower than those of Flp-In-293/ABCB1 (2677 T) (6.02 ± 0.07 mM) and Flp-In-293/ABCB1 (2677A) cells (5.95 ± 0.08 mM). Conclusions: These results indicate that ABCB1 rs2032582 may predict the risk of AILI in the Japanese population. abstract_id: PUBMED:16765141 Exposure of atorvastatin is unchanged but lactone and acid metabolites are increased several-fold in patients with atorvastatin-induced myopathy. Background: The most serious side effect from statin treatment is myopathy, which may proceed to rhabdomyolysis. This is the first study to investigate whether the pharmacokinetics of either atorvastatin or its metabolites, or both, is altered in patients with atorvastatin-related myopathy compared with healthy controls. Methods: A 24-hour pharmacokinetic investigation was performed in 14 patients with atorvastatin-related myopathy. Relevant polymorphisms in SLCO1B1 (encoding organic anion transporting polypeptide 1B1), MDR1/ABCB1 (encoding P-glycoprotein), and CYP3A5 (encoding cytochrome P450 3A5) were determined. Data from 15 healthy volunteers were used as controls. Results: No statistically significant difference in systemic exposure of atorvastatin was observed between the 2 groups. However, patients with atorvastatin-related myopathy had 2.4-fold and 3.1-fold higher systemic exposures of the metabolites atorvastatin lactone (P<.01) and p-hydroxyatorvastatin (P<.01), respectively, compared with controls. There were no differences in frequencies of SLCO1B1, MDR1, and CYP3A5 polymorphisms between the 2 groups. Conclusions: This study disclosed a distinct difference in the pharmacokinetics of atorvastatin metabolites between patients with atorvastatin-related myopathy and healthy control subjects.
These results are of importance in the further search for the mechanism of statin-induced myopathy. Answer: Yes, MDR1 and SLCO1B1 polymorphisms do influence the therapeutic response to atorvastatin. Studies have shown that these genetic variations can affect how patients metabolize and respond to the drug. For instance, a study on a cohort of Egyptian patients with hypercholesterolemia found that the MDR1 C3435T polymorphism was associated with variations in high-density lipoprotein cholesterol (HDL-C) levels both before and after atorvastatin treatment. Additionally, the SLCO1B1 A388G polymorphism showed gender-related effects on triglyceride (TG) changes following atorvastatin treatment, although no significant differences were observed in other lipid parameters among the different genotypes of MDR1 C3435T or SLCO1B1 A388G (PUBMED:23677857). Another study in an Indian population revealed that patients with certain genotypes of SLCO1B1 (rs2306283) showed significantly greater reductions in low-density lipoprotein cholesterol (LDL-C) in response to atorvastatin therapy (PUBMED:26932749). In a Mexican population, polymorphisms in SLCO1B1 and ABCB1 (another name for MDR1) were found to have a significant effect on atorvastatin pharmacokinetics, influencing the shift from an intermediate to a normal metabolizer phenotype (PUBMED:32483134). A study in Brazilian cohorts found that variants in the SLCO1B1 and ABCB1 drug transporter genes positively impacted the response to atorvastatin and simvastatin (PUBMED:34403130). In a South Indian study, significant reductions in total cholesterol and LDL-C were observed in genotypes of SLCO1B1 and ABCB1, with combined genotypes showing a strong synergistic effect in reducing these lipid parameters (PUBMED:35968761). 
Lastly, a study on atorvastatin-induced liver injury in the Japanese population found an association between an SNP in ABCB1 (rs2032582) and the risk of liver injury, suggesting that the G allele of this SNP might be a risk factor for atorvastatin-induced liver injury (PUBMED:27296832).
Instruction: Is circumferential pulmonary vein isolation preferable to stepwise segmental pulmonary vein isolation for patients with paroxysmal atrial fibrillation? Abstracts: abstract_id: PUBMED:17062959 Is circumferential pulmonary vein isolation preferable to stepwise segmental pulmonary vein isolation for patients with paroxysmal atrial fibrillation? Background: Stepwise segmental pulmonary vein isolation (SPVI) and circumferential pulmonary vein isolation (CPVI) have been developed to treat patients with atrial fibrillation (AF), but the preferable approach for paroxysmal AF (PAF) has not been established. Methods And Results: One hundred and ten patients with symptomatic PAF were randomized into a stepwise SPVI group (n=55) or CPVI group (n=55). Systematic SPVI combined with left atrial linear ablation tailored by inducibility of AF was performed in the stepwise SPVI group. Circumferential linear ablation around the left and right-sided pulmonary veins (PVs) guided by 3-dimensional electroanatomic mapping was performed in the CPVI group. The endpoints of ablation were non-inducibility of AF in the stepwise SPVI group and continuity of circular lesions combined with PV isolation in the CPVI group. After the initial procedures, atrial tachyarrhythmias (ATa) recurred within the first 3 months in 23 of the 55 patients (41.8%) who underwent stepwise SPVI and in 20 of the 55 patients (36.4%) who had CPVI (p=0.69). Repeat procedures were performed in 7 patients from the stepwise SPVI group and 5 from the CPVI group (p=0.76). During the 3-9 months after the last procedure, 46 patients (83.6%) from the CPVI group and 43 (78.2%) from the stepwise SPVI group did not have symptomatic ATa while not taking anti-arrhythmic drugs (p=0.63). Severe subcutaneous hematoma or PV stenosis occurred in 3 patients. Conclusions: The efficacy of stepwise SPVI is comparable to that of CPVI for patients with PAF.
abstract_id: PUBMED:32514560 Identification of circumferential pulmonary vein isolation responders among patients with persistent atrial fibrillation: clinical value of the sequential low-dose ibutilide test. Aims: Circumferential pulmonary vein isolation can be effective as sole treatment for persistent atrial fibrillation. However, identifying those patients who will respond to this therapy remains a challenge. We investigated the clinical value of the sequential low-dose ibutilide test for identifying patients with persistent atrial fibrillation in whom pulmonary vein isolation is effective as sole therapy. Methods And Results: In a prospective cohort of 180 consecutive patients with persistent atrial fibrillation, intravenous low-dose (0.004 mg/kg) ibutilide was administered 3 days before ablation and after the completion of circumferential pulmonary vein isolation. In patients in whom ibutilide did not terminate atrial fibrillation pre-procedurally, but successfully terminated it intraprocedurally, no further atrial substrate modification was performed. Pre-procedural low-dose ibutilide failed to terminate the arrhythmia in all patients with persistent atrial fibrillation, while pulmonary vein isolation ± low-dose ibutilide terminated persistent atrial fibrillation in 55 (30.6%) of them (PsAF Group 1). The remaining 125 (69.4%) patients underwent electrogram-based ablation (PsAF Group 2). The control group comprised 379 consecutive patients with paroxysmal atrial fibrillation who underwent pulmonary vein isolation over the same period. At 24 months follow-up, 39 (70.9%) patients in PsAF Group 1 and 276 (72.8%) patients in the control group were free from atrial tachyarrhythmias (P = NS); the arrhythmia-free rates in both groups were higher than that in PsAF Group 2 (58.4%, P = 0.005).
Conclusion: The sequential low-dose ibutilide test is a simple method for identifying patients with persistent atrial fibrillation in whom pulmonary vein isolation alone is an appropriate treatment strategy. abstract_id: PUBMED:30705634 Clinical Implications of Unmasking Dormant Conduction After Circumferential Pulmonary Vein Isolation in Atrial Fibrillation Using Adenosine: A Systematic Review and Meta-Analysis. Purpose: Circumferential pulmonary vein isolation (CPVI) is a routine ablation strategy for atrial fibrillation (AF). The adenosine test can be used to unmask dormant conduction (DC) of pulmonary veins after CPVI, thereby demonstrating possible pulmonary vein re-connection and the need for further ablation. However, whether the adenosine test can help improve the long-term success rate of CPVI is still controversial. This systematic review and meta-analysis aimed to determine the clinical utility of the adenosine test. Methods: PubMed, EMBASE, Web of Science and the Cochrane Library database were searched through July 2016 to identify relevant studies using the keywords "dormant pulmonary vein conduction," "adenosine test," "circumferential pulmonary vein isolation," and "atrial fibrillation." A random-effects model was used to compare pooled outcomes and tested for heterogeneity. Results: A total of 17 studies including 5,169 participants were included in the final meta-analysis. Two groups of comparisons were classified: (1) long-term success rate in AF patients who underwent CPVI with and without the adenosine test [Group A (+) and Group A (-)]; (2) long-term success rate in patients who had the adenosine test, with and without dormant conduction [Group DC (+) and Group DC (-)]. The overall meta-analysis showed that no significant difference could be observed between Group A (+) and Group A (-) (RR 1.08; 95% CI 0.97-1.19; P = 0.16; I2 = 66%) and between Group DC (+) and Group DC (-) (RR 1.01; 95% CI 0.91-1.12; P = 0.88; I2 = 60%).
Conclusion: The pooled meta-analysis suggested that the adenosine test may not improve the long-term success rate in AF patients who underwent CPVI. Furthermore, AF recurrence may not be decreased by eliminating DC provoked by adenosine, even when the adenosine test was applied after CPVI. abstract_id: PUBMED:29250270 High Voltage Guided Pulmonary Vein Isolation in Paroxysmal Atrial Fibrillation. Background: Ablation of the pulmonary vein (PV) antrum using an electroanatomic mapping system is standard of care for point-by-point pulmonary vein isolation (PVI). Focused ablation at critical areas is more likely to achieve intra-procedural PV isolation and decrease the likelihood of reconnection and recurrence of atrial fibrillation (AF). Therefore, this prospective pilot study was designed to investigate the short-term outcome of a voltage-guided circumferential PV ablation (CPVA) strategy. Methods: We recruited patients with a history of paroxysmal atrial fibrillation (AF). The EnSite NavX system (St. Jude Medical, St Paul, Minnesota, USA) was employed to construct a three-dimensional geometry of the left atrium (LA) and a voltage map. CPVA was performed with radiofrequency (RF) ablation targeting the sites of highest voltage first in a sequential clockwise fashion, followed by completion of the gaps in the circumferential line. Acute and short-term outcomes were compared to a control group undergoing conventional standard CPVA using the same 3D system. Follow-up was scheduled at 3, 6 and 12 months. Results: Thirty-four paroxysmal AF patients with a mean age of 40 years were included. Fourteen patients (8 male) underwent voltage mapping and 20 patients underwent empirical, non-voltage guided standard CPVA. A mean of 54 ± 12 points per PV antrum were recorded. Mean voltages for the right and left PV antra were 1.7±0.1 mV and 1.9±0.2 mV, respectively. There was a trend towards reduced radiofrequency time (40.9±17.4 vs. 48.1±15.5 mins; p=0.22).
Conclusion: Voltage-guided CPVA is a promising strategy for targeting critical points for PV isolation, with a trend toward lower AF recurrence compared with standard CPVA over the short term. Extended studies to confirm these findings are warranted. abstract_id: PUBMED:35852306 Characterization of circumferential antral pulmonary vein isolation areas resulting from pulsed-field catheter ablation. Aims: The cornerstone of pulmonary vein (PV) isolation (PVI) is a wide-area circumferential ablation (WACA) resulting in an antral PVI area. Pulsed-field ablation (PFA) is a new nonthermal 'single-shot' PVI technique resulting in well-characterized posterior isolation areas. However, information on the circumferential PVI area is lacking. Thus, we sought to characterize the circumferential antral PVI areas after PFA-PVI. Methods And Results: Atrial fibrillation (AF) patients underwent fluoroscopy-guided PVI with a pentaspline PFA catheter. Ultra-high-density voltage maps using a 20-polar circular mapping catheter were created before and immediately after PVI to identify and quantify (i) insufficient isolation areas per antral PV segment (10-segment model) and (ii) enlarged left atrial (LA) isolation areas (beyond the antral PV segments) per LA region (8-region model). The PFA-PVI with pre- (5469 ± 1822 points) and post-mapping (6809 ± 2769 points) was performed in 40 consecutive patients [age 62 ± 6 years, 25/40 (62.5%) paroxysmal AF]. Insufficient isolation areas were located most frequently in the anterior antral PV segments of the left PVs (62.5-77.5% of patients) with the largest extent (median ≥0.4 cm2) located in the same segments (segments 2/5/8). Enlarged LA isolation areas were located most frequently and most extensively on the posterior wall and roof region (89.5-100% of patients; median 1.1-2.7 cm2 per region).
Conclusion: Fluoroscopy-guided PFA-PVI frequently results in insufficient isolation areas in the left anterior antral PV segments and enlarged LA isolation areas on the posterior wall/roof, both of which may be extensive. To optimize the procedure, full integration of PFA catheter visualization into three-dimensional mapping systems is needed. abstract_id: PUBMED:37866912 Comparison of Long-Term Clinical Outcomes Between Segmental and Circumferential Pulmonary Vein Isolation in Patients Undergoing Repeat Atrial Fibrillation Ablation. Background: Circumferential pulmonary vein isolation (CPVI) has supplanted segmental PVI (SPVI) as the standard procedure for atrial fibrillation (AF). However, there is limited evidence examining the efficacy of these strategies in redo ablations. In this study, we investigated the difference in recurrence rates between SPVI and CPVI in redo ablations for PV reconnection. Methods and Results: This study retrospectively enrolled 543 patients who had undergone AF ablation between 2015 and 2017. Among them, 167 patients (30.8%, including 128 male patients and 100 patients with paroxysmal AF) underwent redo ablation for recurrent AF. Excluding 26 patients without PV reconnection, 141 patients [90 SPVI patients (Group 1) and 51 CPVI patients (Group 2)] were included. The AF-free survival rates were 53.3% and 56.9% in Group 1 and Group 2, respectively (P=0.700). The atrial flutter (AFL)-free survival rates were 90% and 100% in Group 1 and Group 2, respectively (P=0.036). The ablation time was similar between groups, and no major complications were observed. Conclusions: For redo AF ablation procedures, SPVI and CPVI showed similar outcomes, except for a higher AFL recurrence rate for SPVI after long-term follow-up (>2 years). This may be due to a higher probability of residual PV gaps causing reentrant AFL. abstract_id: PUBMED:28496887 Functional Pulmonary Vein Isolation During Atrial Fibrillation Ablation.
We report a case of apparent isolation of the right inferior pulmonary vein during atrial fibrillation ablation coincident with the onset of an atrial tachycardia. This report highlights the importance of assessing pulmonary vein conduction during sinus or paced rhythm at relatively long cycle length, rather than in atrial fibrillation or atrial tachycardia. abstract_id: PUBMED:36709438 Prognostic significance of variant left common pulmonary vein after transcatheter pulmonary vein isolation Introduction: A left common pulmonary vein is the most common anatomical variant of the pulmonary veins and may affect the outcome of pulmonary vein isolation. Objective: Our aim was to compare procedural data and outcomes in patients with common trunk versus normal left atrial anatomy for radiofrequency and cryoballoon catheter ablation. Method: Data from patients who underwent pulmonary vein isolation for atrial fibrillation and had a preprocedural cardiac CT scan at our institution between 01. 10. 2019 and 10. 03. 2022 were retrospectively evaluated. We defined a common trunk where the left superior and inferior pulmonary veins merged at least 5 mm before the left atrial ostium. Results: From the study population (n = 210), data from 42 patients with a left common trunk (LCPV group) and 60 patients with normal left atrial anatomy (control group) were examined. No significant differences were found between the common trunk and the control group in terms of demographic data. There was no significant difference between the two groups in procedural data for radiofrequency and cryoballoon ablation (procedure time, fluoroscopy time, left atrial dwelling time, radiation dose). After radiofrequency ablation, the success rate at 1-year follow-up was 72.0% in the common trunk group and 76.2% in the control group (p = 0.659). For cryoballoon ablation, the success rate was 64.7% and 69.2% for common trunk and normal anatomy, respectively (p = 0.641).
Conclusion: There was no significant difference in the procedural parameters and clinical outcome between patients with left common pulmonary vein and those with normal left atrial anatomy. Both radiofrequency and cryoballoon ablation techniques are well suited for this population. Orv Hetil. 2023; 164(4): 140-147. abstract_id: PUBMED:25871772 A Novel Microwave Catheter Can Perform Noncontact Circumferential Endocardial Ablation in a Model of Pulmonary Vein Isolation. Introduction: Pulmonary vein isolation is an effective treatment for atrial fibrillation. Current endocardial ablation techniques require catheter contact for lesion formation. Inadequate or inconsistent catheter contact results in difficulty with achieving acute and long-term isolation and consequent atrial arrhythmia recurrence. Microwave energy produces radiant heating and therefore can be used for noncontact catheter ablation. We hypothesized that it is possible to design a microwave catheter to produce a circumferential transmural thermal lesion in an in vitro model of a pulmonary vein antrum. Methods And Results: A monopole microwave catheter with a sideways firing axially symmetrical heating pattern was designed. Noncontact ablations were performed in a perfused pulmonary vein model constructed from microwave myocardial phantom embedded with a sheet of thermochromic liquid crystal to permit visualization and measurement of thermal lesions from color changes. 1200 J ablations were performed at 150 W for 80 seconds and 120 W for 100 seconds at high (0.8 L/min) and low (0.06 L/min) flow through the modeled pulmonary vein. Myocardial tissue was substituted for the phantom material and ablations repeated at 150 W for 180 seconds and stained with nitro-blue tetrazolium. The catheter was able to induce deep circumferential antral lesions in myocardial phantom and myocardial tissue. 
Higher power and shorter ablations delivering the same amount of microwave energy resulted in larger lesions with less surface sparing. Conclusions: A microwave catheter can be designed to produce a circumferential thermal lesion on noncontact ablation and may have possible applications for pulmonary vein isolation. abstract_id: PUBMED:28400919 Paroxysmal atrial fibrillation ablation: Achieving permanent pulmonary vein isolation by point-by-point radiofrequency lesions. Pulmonary vein isolation by point-by-point radiofrequency catheter ablation constitutes the cornerstone of catheter ablation strategies for the treatment of atrial fibrillation. However, despite advances in pulmonary vein isolation ablation strategies, long-term success rates after ablation remain suboptimal, which highlights the need to develop techniques to achieve more durable lesions. Strategies proposed to improve the durability of pulmonary vein isolation can be divided into two groups: Those addressed to improving the quality of the lesion and those that optimize the detection of acute PV reconnection during the ablation procedure. This manuscript reviews the role and potential benefits of these techniques according to current clinical evidence. Answer: The comparison between circumferential pulmonary vein isolation (CPVI) and stepwise segmental pulmonary vein isolation (SPVI) for patients with paroxysmal atrial fibrillation (PAF) has been a subject of investigation. According to a study where 110 patients with symptomatic PAF were randomized into SPVI and CPVI groups, the efficacy of SPVI was found to be comparable to that of CPVI for these patients. After initial procedures, atrial tachyarrhythmias (ATa) recurred within the first 3 months in a similar proportion of patients in both groups, and the percentage of patients without symptomatic ATa during the 3-9 months after the last procedure was also similar between the two groups. 
This suggests that there is no significant preference for CPVI over SPVI in terms of efficacy for patients with PAF (PUBMED:17062959). Furthermore, a study comparing long-term clinical outcomes between SPVI and CPVI in patients undergoing repeat atrial fibrillation ablation found that both strategies showed similar outcomes in terms of AF-free survival rates after long-term follow-up. However, there was a higher atrial flutter (AFL) recurrence rate for SPVI, which may be due to a higher probability of residual PV gaps causing reentrant AFL (PUBMED:37866912). In conclusion, based on the available evidence, CPVI does not appear to be significantly preferable to SPVI for patients with paroxysmal atrial fibrillation, as both strategies have shown comparable efficacy. However, the choice of strategy may be influenced by specific patient characteristics and the potential for AFL recurrence.
Instruction: Do infants with mild prenatal hydronephrosis benefit from screening for vesicoureteral reflux? Abstracts: abstract_id: PUBMED:22704090 Do infants with mild prenatal hydronephrosis benefit from screening for vesicoureteral reflux? Purpose: Screening for vesicoureteral reflux in asymptomatic infants with low grade hydronephrosis is unjustified if this condition is benign. We present a cohort with prenatally detected hydronephrosis, some observed without voiding cystourethrography, and compare outcomes to a classic, screened group. Materials And Methods: A total of 206 consecutive children presenting with postnatally confirmed prenatally detected hydronephrosis (47% bilateral) were included in the study. Cases with associated renal or bladder anomalies were excluded. Children with low grade hydronephrosis either underwent screening voiding cystourethrography or did not, largely based on whether care was managed by a urologist or a nephrologist. Patients with high grade hydronephrosis routinely underwent voiding cystourethrography. Primary outcome was a symptomatic or febrile urinary tract infection. We used Mantel-Haenszel analysis to determine urinary tract infection risk factors during the first 2 years of life. Results: No urinary tract infection was observed in patients with grade I hydronephrosis. Urinary tract infections in low grade hydronephrosis were only seen in the voiding cystourethrogram group (7 patients), including 1 infection following voiding cystourethrogram. Urinary tract infection rate was 3.52 infections per 100 patient-years in children with low grade hydronephrosis and 11.1 infections per 100 patient-years in those with high grade hydronephrosis (p = 0.02). This increased risk of urinary tract infection in high grade hydronephrosis persisted after correcting for gender and circumcision status (IRR 3.17, p = 0.01). 
The association remained strong (IRR 2.48, 95% CI 0.96-6.44) but was not statistically significant (p = 0.053) after additionally correcting for vesicoureteral reflux status. Conclusions: Our data suggest that children with low grade hydronephrosis and otherwise normal kidneys and bladder do not benefit from voiding cystourethrographic screening. Interestingly, high grade hydronephrosis appears to carry a threefold greater risk of urinary tract infection compared to low grade hydronephrosis. abstract_id: PUBMED:32638052 Voiding urosonography and voiding cystourethrography in primary vesicoureteral reflux associated with mild prenatal hydronephrosis: a comparative study. Background: Contrast-enhanced harmonic voiding urosonography has been introduced as a sensitive, radiation-free imaging method for the diagnosis of vesicoureteric reflux. Objective: To evaluate the occurrence/severity of vesicoureteric reflux in infants with mild prenatal hydronephrosis comparing voiding cystourethrography and voiding urosonography. Materials And Methods: Sixty infants with prenatal hydronephrosis were studied (anteroposterior pelvic diameter 5-9 mm on ultrasound [US] at gestational weeks 21-30). Postnatal US was performed within the first month of life, as well as voiding cystourethrography and contrast-enhanced voiding urosonography at 1.5-2.5 months at the same session. Results: Vesicoureteric reflux was diagnosed on at least one modality in 19/60 (32%) infants, and more often on contrast-enhanced voiding urosonography (18/60, 30%) than on voiding cystourethrography (8/60, 13%), P=0.046. Among girls, reflux was more often seen on contrast-enhanced voiding urosonography (6/16, 38%) than on voiding cystourethrography (1/16, 6%), P=0.03.
Vesicoureteric reflux missed by voiding cystourethrography was more severe (Grades I, II and III in one, nine and four kidney-ureter-units, respectively), compared with a single case missed by contrast-enhanced voiding urosonography (Grade I in one kidney-ureter-unit). Conclusion: In the absence of a reference standard, our results imply that voiding cystourethrography might underdiagnose reflux, and/or contrast-enhanced voiding urosonography may overdiagnose reflux. abstract_id: PUBMED:9263417 Mild pyelectasis ascertained with prenatal ultrasonography is pediatrically significant. Isolated 'mild renal collecting system dilatation' (mild pyelectasis) is a common prenatal sonographic finding. An association between mild pyelectasis and fetal aneuploidy has been established, but in the absence of a concomitant anomaly, mild pyelectasis is usually regarded as benign and of no clinical consequence, and follow-up is often not obtained after the initial ascertainment. To test this, we investigated the relationship between mild pyelectasis and (1) progression to hydronephrosis; (2) postnatal vesicoureteral reflux (VUR); and (3) postnatal surgery. Between 1 January 1992 and 1 January 1995, we performed 453 prenatal sonographic examinations of 306 patients with mild fetal pyelectasis and no other anomalies. During the course of a routine sonographic examination, we performed a detailed evaluation of the fetal genitourinary tract. Mild pyelectasis was defined as a pelvocalyceal fluid-filled space with the smallest of two transverse perpendicular sonographic measurements of ≥ 4 mm and < 10 mm. Hydronephrosis was defined similarly, but with a measurement of ≥ 10 mm. Postnatal urological assessment was obtained with routine renal ultrasonography, voiding cystourethrography (VCUG), 99mTc-labelled diethylenetriamine pentaacetic acid (DTPA) or mercaptoacetyltriglycine (MAG3) renography, and intravenous pyelography, as necessary.
After exclusion criteria were applied, 294 (96%) patients with isolated fetal pyelectasis were considered. A total of 251 (82%) of these were followed with ultrasonography prenatally and during the postnatal period. Two or more prenatal examinations were carried out in 129 patients (51%) and in 35 (27%) of these cases the biometry progressed to frank hydronephrosis (> 10 mm). In only six (5%) of the cases followed prenatally did the measurement diminish to < 4 mm. We found a greater variability in prenatal renal biometry throughout gestation in infants found to have VUR, when compared with those who had no VUR. On postnatal follow-up, 84 patients had VCUG. Sixteen of these patients (6% of all patients [16/251] and 19% [16/84] of those who had the test) were found to have an abnormal finding. Whereas the prenatal appearance of mild pyelectasis improved in only a small number of cases, we found that it progressed to hydronephrosis in 27% of cases. Postnatal evaluation found VUR to be common in apparent uncomplicated mild prenatal pyelectasis. Although postnatal surgery was necessary in only a small number of cases, surgical intervention was necessary in 33% (four of 12) of those with VUR. abstract_id: PUBMED:24863985 Urinary tract infection and vesicoureteral reflux in children with mild antenatal hydronephrosis. Objective: The postnatal management of mild antenatal hydronephrosis (ANH) remains controversial. The purpose of this study was to evaluate the incidence of UTI and VUR in children with mild ANH in order to determine the necessity of antibiotic prophylaxis (ABP) and VCUG. Method: The data of 1511 patients with various grades of ANH who were referred to Department of Urology, Boston Children's Hospital between January 1998 and January 2010 were reviewed and 760 patients who had mild ANH were identified. The inclusion criteria were: 1) A confirmed report of ANH or actual prenatal ultrasound (US) images. 
2) Postnatal evaluation and management conducted at the hospital. 3) Persistent mild hydronephrosis on the first US done between two weeks and three months of age. 4) No other US findings such as ureteral dilatation, duplication anomalies or bladder abnormalities. 5) At least one three-month follow-up. Univariate statistical analysis was performed using a Student's t test. Results: Of the 760 patients who were identified, 608 (80%) were males, and 225 (30%) had bilateral mild hydronephrosis. Of these, 475 patients (63%) underwent an initial screening VCUG. VUR was identified in 13 patients (1.7%) with grades varying from 1 to 5. At follow-up, hydronephrosis resolved in 67% of the renal units and worsened in 3.3%. Among the 692 patients with available follow-up data, 23 (3.3%) had a documented UTI. Twelve of these children had an initial screening VCUG that was negative for VUR. Of these 12 patients, seven underwent a subsequent RNC with none having VUR; five of the 12 patients did not undergo a repeat evaluation for VUR (four had a UTI after the screening VCUG and one had an afebrile UTI). Eleven of the 23 children with mild ANH did not have an initial screening VCUG, and all underwent a subsequent VCUG/RNC. Only two children were then found to have VUR Grade 4-5. Conclusion: The incidence of UTI and VUR in children with mild ANH is low. Consequently, routine VCUG screening for VUR and the use of long-term ABP are not necessary for all patients with asymptomatic mild ANH. Evaluation for VUR in children with mild ANH should be reserved for those who subsequently present with a UTI. abstract_id: PUBMED:24427498 Outcome of vesicoureteral reflux in infants: impact of prenatal diagnosis. Objective: Vesicoureteral reflux (VUR) is an important disorder that could be diagnosed in the antenatal or the postnatal period. The natural history of VUR seems to be different between the prenatal and postnatal forms of the disease.
We compared the natural history and outcome of vesicoureteral reflux in infants less than one year old diagnosed prenatally or postnatally. Methods: All infants less than 12 months old with VUR were enrolled in two groups. Group 1 comprised patients with antenatal hydronephrosis, and group 2 comprised infants diagnosed with VUR because of UTI or other postnatal problems. We followed patients for an average of 33 months. Outcome was assessed by several factors: somatic growth, need for surgery, resolution, occurrence of UTI and scar formation. Findings: We studied 236 renal units in 152 patients (70 boys, 82 girls), 67 patients in group 1 and 85 patients in group 2. Occurrence of recurrent UTI was 10.6%, with no significant difference between the two groups. Reflux resolved significantly more often in the postnatal group than in the other group (73% vs 49%). Scarring developed at similar rates in both groups (15% vs 25% of renal units). Surgery was performed in 7 (8.2%) of the postnatal group and 4 (6%) of the prenatal group, with no significant difference between the two groups. Conclusion: VUR diagnosed prenatally has similar importance and outcome to postnatally diagnosed VUR. We suggest performing the same imaging and treatment procedures for prenatally and postnatally diagnosed VUR.
Conclusions: The absence of vesicoureteral reflux in black infants with prenatal hydronephrosis and the 17.6% incidence in nonblack infants suggest that voiding cystourethrography should not be routinely performed in the black population, although it should continue to be done on a routine basis in the nonblack population. abstract_id: PUBMED:22649872 Postnatal evaluation and outcome of infants with antenatal hydronephrosis. This study was aimed at evaluating the clinical outcome of infants with antenatally diagnosed hydronephrosis. Our objective was also to determine whether there is a significant correlation between anterior-posterior pelvic diameter (APPD) and the urinary tract abnormalities detected. We retrospectively analyzed data from 145 infants collected between January 2000 and May 2010. Inclusion criteria were the presence of APPD > or = 5 mm on prenatal US scan after 20 weeks of gestation, at least 6-month follow-up and at least two postnatal US scans. Most patients underwent renal scintigraphy (n = 140, 96.6%) and micturating cystourethrography (n = 141, 97.2%). Of 145 infants, 77 (53.1%) had idiopathic or transient hydronephrosis. The second most common diagnosis was vesicoureteral reflux, found in 21 (14.4%) infants, followed by ureteropelvic junction obstruction without significant kidney damage, found in 18 (12.4%) infants. The relative risk of significant urologic abnormality according to the degree of antenatal hydronephrosis (ANH) was 21.25 (95% CI: 2.95-156.49) for severe ANH, 1.57 (95% CI: 0.94-2.62) for moderate ANH and 0.47 (95% CI: 0.33-0.66) for mild ANH. There was a significant increase in the risk per increasing degree of hydronephrosis. In 19 out of 145 (13.2%) infants, immediate surgery was required. These data support the need for antenatal detection and long-term postnatal follow-up of infants with ANH.
abstract_id: PUBMED:1405428 Prenatal diagnosis of primary megaureter. Twenty-two of 114 infants with prenatal diagnosis of urinary tract malformations (15 boys) had obstruction of the ureterovesical junction caused by primary megaureter (n = 14), ureterocele (n = 7), or ectopic ureter (n = 1). Infants with infravesical obstruction or refluxing systems were excluded. All infants had normal renal function, and probably only 5 of them (23%) would have been diagnosed during infancy without prenatal diagnosis. A total of 26 renal units were evaluated. Six infants had ipsilateral duplication with additional contralateral duplication (n = 2) or vesicoureteric reflux (n = 2). A nonfunctioning kidney or upper pole of duplication was noted in 7 units (5 with ureteroceles); in two instances a moderately dysplastic kidney was present. One boy with urinary infections developed renal scarring during follow-up. Conservative treatment was performed in 14 renal units, while 12 units in 11 children had 22 operations (9 temporary diversions) at a mean age of 4.6 months. A severe operative complication was noted in one infant. Conservative treatment is justified in many infants with primary obstructive megaureters, but severe renal dysplasia may be present even neonatally. Yet we feel that prenatal diagnosis benefitted most infants, especially those with the most severe malformations. abstract_id: PUBMED:24679866 Risk factors for febrile urinary tract infection in infants with prenatal hydronephrosis: comprehensive single center analysis. Purpose: We assessed risk factors for urinary tract infection in children with prenatal hydronephrosis. Materials And Methods: We identified 376 infants with prenatal hydronephrosis in an institutional database. The occurrence of febrile urinary tract infection in the first 2 years of life was ascertained by chart review.
Febrile urinary tract infection was defined as a positive culture from a catheterized urine specimen in a patient with a fever of 38.0°C or greater. Multivariate logistic regression was used to assess gender, circumcision status, hydronephrosis grade, vesicoureteral reflux grade and antibiotic prophylaxis as predictors of the risk of urinary tract infection. Results: Included in the analysis were 277 males and 99 females. Hydronephrosis was high grade in 128 infants (34.0%) and vesicoureteral reflux was present in 79 (21.0%). Antibiotic prophylaxis was prescribed in 60.4% of patients, preferentially to females vs males (70.7% vs 56.7%), those with high vs low grade hydronephrosis (70.3% vs 55.2%) and those with vs without vesicoureteral reflux (96.2% vs 50.8%). On multivariate analysis, there was an association between high grade hydronephrosis and an increased risk of urinary tract infection (adjusted OR 2.40, 95% CI 1.26-4.56). Females (adjusted OR 3.16, 95% CI 0.98-10.19) and uncircumcised males (adjusted OR 3.63, 95% CI 1.18-11.22) were also at higher risk than circumcised males. Antibiotic prophylaxis was not associated with a decreased risk of urinary tract infection (adjusted OR 0.93, 95% CI 0.45-1.94). Conclusions: High grade hydronephrosis, female gender and uncircumcised status in males are independent risk factors for febrile urinary tract infection in infants with prenatal hydronephrosis. Antibiotic prophylaxis did not reduce the risk of urinary tract infection in the study group. abstract_id: PUBMED:10458470 Does every patient with prenatal hydronephrosis need voiding cystourethrography? Purpose: Prenatal ultrasound has allowed early identification of urinary tract abnormalities that may require urological followup or early intervention.
While all children with prenatal hydronephrosis should undergo ultrasound within the first few weeks of life, we believe that there is a subset of postnatal hydronephrosis for which voiding cystourethrography can be avoided if careful observation is continued. Materials And Methods: Over a 5-year period, 175 infants with a history of prenatal hydronephrosis were evaluated by ultrasound. Of 60 infants with less than Society for Fetal Urology grade II unilateral or bilateral hydronephrosis, 44 underwent voiding cystourethrography as part of the early evaluation and 16 were observed without voiding cystourethrography. Four infants for whom we would routinely obtain voiding cystourethrography were excluded from the study due to severe prenatal hydronephrosis, renal duplication, hydroureter, ipsilateral small or echogenic kidney and grade II or higher hydronephrosis. Results: Voiding cystourethrography was positive in 6 of the 40 infants (15%) with less than grade II hydronephrosis. Of these cases, 3 had grade III or higher vesicoureteral reflux and 1 with high grade reflux required reimplantation. None of the 16 patients followed without voiding cystourethrography has required further evaluation or intervention. In all patients with negative or no voiding cystourethrography, parenchyma was preserved and hydronephrosis stabilized or resolved. Conclusions: Prenatal and postnatal ultrasound in infants should be used to guide further urological evaluation. Among infants with less than grade II hydronephrosis postnatally, 15% had reflux on voiding cystourethrography, which is significantly higher than that reported among normal children (approximately 1%). However, none of the 16 infants observed without voiding cystourethrography on short-term antibiotic prophylaxis had deleterious renal events with 6 months to 4.5 years of followup. Therefore, we question the actual significance of the reflux detected in the first cohort of infants.
Voiding cystourethrography can provide a definitive answer. However, we also believe that it is not absolutely mandatory based on the outcome in the observed group. With careful counseling and followup most patients with less than grade II hydronephrosis can be observed without urological sequela. Answer: Infants with mild prenatal hydronephrosis may not necessarily benefit from routine screening for vesicoureteral reflux (VUR). A study found that no urinary tract infections (UTIs) were observed in patients with grade I hydronephrosis, and UTIs in low-grade hydronephrosis were only seen in the voiding cystourethrogram (VCUG) group, including one infection following the procedure. The data suggested that children with low-grade hydronephrosis and otherwise normal kidneys and bladder do not benefit from VCUG screening (PUBMED:22704090). Another study reported that the incidence of UTI and VUR in children with mild antenatal hydronephrosis (ANH) is low, suggesting that routine VCUG screening for VUR and the use of long-term antibiotic prophylaxis is not necessary for all patients with asymptomatic mild ANH. Evaluation for VUR in children with mild ANH should be reserved for those who subsequently present with a UTI (PUBMED:24863985). However, it is important to note that the occurrence of VUR was common in apparent uncomplicated mild prenatal pyelectasis, and although postnatal surgery was necessary in only a small number of cases, surgical intervention was necessary in a significant proportion of those with VUR (PUBMED:9263417). Additionally, another study found that high-grade hydronephrosis, female gender, and uncircumcised status in males are independent risk factors for febrile UTI in infants with prenatal hydronephrosis, but antibiotic prophylaxis did not reduce the risk of UTI in their study group (PUBMED:24679866). 
In conclusion, while some studies suggest that routine screening for VUR in infants with mild prenatal hydronephrosis may not be necessary, especially if they are asymptomatic, other studies highlight the potential risks associated with VUR. Therefore, the decision to screen should be individualized based on the presence of additional risk factors, the severity of hydronephrosis, and the clinical presentation of the infant.
Instruction: Are HIV care providers talking with patients about safer sex and disclosure? Abstracts: abstract_id: PUBMED:12351956 Are HIV care providers talking with patients about safer sex and disclosure?: A multi-clinic assessment. Objectives: To examine HIV-positive patients' reports of whether HIV care providers ever talked with them about practicing safer sex and disclosing seropositive status to sex partners. Design: Cross-sectional survey (1998-1999) of HIV-positive men and women sampled randomly at six public HIV clinics in California. Methods: Participants were interviewed and asked whether applicable clinic providers (physician, physician assistant, nurse practitioner, nurse, social worker, health educator, psychologist, psychiatrist) ever talked with them about safer sex or disclosure. Responses were analyzed by clinic site, HIV medical status (viral load), demographic, and behavioral variables (unprotected intercourse, non-disclosure). Results: The sample (n = 839) included heterosexual men (n = 127), men who have sex with men (MSM; n = 607), and women (n = 105). Thirty-nine percent were white, 36% Hispanic, 17% black, and 8% other/mixed ethnicity. Overall, 71% reported that an applicable provider had talked with them at least once about safer sex (range across clinics, 52-94%); 50% reported discussion of disclosure (range across clinics, 31-78%). Discussion of safer sex was more prevalent with physicians than with other clinic staff. In multivariate analyses, in addition to significant clinic differences, MSM (versus heterosexual men) and whites (versus blacks or Hispanics) were less likely to receive prevention messages on these topics. Patients' behaviors (unsafe sex, non-disclosure) and HIV medical status were not independently associated with provider communication. Conclusions: HIV clinics differed substantially in the percentage of patients who reported that they received prevention messages from clinic staff. 
Care providers should assess and overcome barriers to providing prevention messages to patients. abstract_id: PUBMED:24629843 Challenges with couples, serodiscordance and HIV disclosure: healthcare provider perspectives on delivering safer conception services for HIV-affected couples, South Africa. Introduction: Safer conception interventions should ideally involve both members of an HIV-affected couple. With serodiscordant couples, healthcare providers will need to manage periconception risk behaviour as well as tailor safer conception strategies according to available resources and the HIV status of each partner. Prior to widespread implementation of safer conception services, it is crucial to better understand provider perspectives regarding provision of care, since providers will be pivotal to the successful delivery of safer conception. This paper reports on findings from a qualitative study exploring the viewpoints and experiences of doctors, nurses, and lay counsellors on safer conception care in a rural and in an urban setting in Durban, South Africa. Methods: We conducted six semistructured individual interviews per site (a total of 12 interviews) as well as a focus group discussion at each clinic site (a total of 13 additional participants). All interviews were coded in Atlas.ti using a grounded theory approach to develop codes and to identify core themes and subthemes in the data. Results: Managing the clinical and relationship complexities related to serodiscordant couples wishing to conceive was flagged as a concern by all categories of health providers. Providers added that, in the HIV clinical setting, they often found it difficult to balance their professional priorities, to maintain the health of their clients, and to ensure that partners were not exposed to unnecessary risk, while still supporting their clients' desires to have a child.
Many providers expressed concern over issues related to disclosure of HIV status between partners, particularly when managing couples where one partner was not aware of the other's status and expressed the desire for a child. Provider experiences were that female clients most often sought out care, and it was difficult to reach the male partner to include him in the consultation. Conclusions: Providers require support in dealing with HIV disclosure issues and in becoming more confident in dealing with couples and serodiscordance. Prior to implementing safer conception programmes, focused training is needed for healthcare professionals to address some of the ethical and relationship issues that are critical in the context of safer conception care. abstract_id: PUBMED:29947563 Disclosure of Same-Sex Behaviors to Health-care Providers and Uptake of HIV Testing for Men Who Have Sex With Men: A Systematic Review. To promote HIV-testing and offer optimal care for men who have sex with men (MSM), health-care providers (HCPs) must first be aware of their patients' sexual behaviors. Otherwise, HCPs may overlook MSM's risks for HIV infection and their special health-care needs. For MSM, reporting their same-sex behaviors to HCPs (disclosure to HCPs) may promote their linkage to HIV prevention and treatment cascade and improve their health outcomes. No literature review has been conducted to examine the relationship between disclosure to HCPs and uptake of HIV-testing among MSM. The current study reviewed and synthesized findings from 29 empirical studies published in English by 2016. We summarized the rates of MSM's disclosure to HCPs, investigated the association between disclosure and HIV-testing among MSM, identified potential facilitators and barriers for disclosure, and discussed the implications of our findings in research and clinical practices. The disclosure rates varied across subgroups and study settings, ranging from 16% to 90% with a median of 61%. 
Disclosure to HCPs was positively associated with uptake of HIV-testing. African American MSM were less likely to disclose to HCPs. MSM who lived in urban settings with higher education attainment and higher income were more likely to disclose. MSM tended to perceive younger or gay-friendly doctors as safer targets of disclosure. Clinics with LGBT-friendly signs were viewed as safer contexts for disclosure. Having previous communications about substance use, sex, and HIV with HCPs could also facilitate disclosure. The main reasons for nondisclosure included lack of probing from HCPs, concerns about confidentiality breaches and stigma, and perceived irrelevance to services. Providing appropriate training for HCPs and creating gay-friendly clinical settings can be effective strategies to facilitate disclosures of same-sex behaviors among MSM and meet their specific medical needs. Interventions to promote disclosure should give priority to MSM from the most marginalized subgroups (e.g., MSM in rural areas, MSM of ethnic minorities). abstract_id: PUBMED:30449137 Correlates of same-sex behavior disclosure to health care providers among Black MSM in the United States: implications for HIV prevention. Disclosure of same-sex behavior to health care providers (HCPs) by men who have sex with men (MSM) has been argued to be an important aspect of HIV prevention. However, Black MSM are less likely to disclose compared to white MSM. This analysis of data collected in the United States from 2006-2009 identified individual and social network characteristics of Black MSM (n = 226) that are associated with disclosure and that may be leveraged to increase disclosure. Over two-thirds (68.1%) of the sample had ever disclosed to HCPs. Part-time employment (AOR = 0.32, 95% CI = 0.11-0.95), bisexual identity (AOR = 0.29, 95% CI = 0.12-0.70), and meeting criteria for alcohol use disorders (AOR = 0.32, 95% CI = 0.14-0.75) were negatively associated with disclosure.
Disclosers were more likely to self-report being HIV-positive (AOR = 4.47, 95% CI = 1.54-12.98), having more frequent network socialization (AOR = 2.15, 95% CI = 1.24-3.73), and having a social network where all members knew the participant had sex with men (AOR = 4.94, 95% CI = 2.06-11.86). These associations were not moderated by self-reported HIV status. Future interventions that help MSM identify social network members to whom they can safely disclose their same-sex behavior may also facilitate disclosure to HCPs among Black MSM. abstract_id: PUBMED:15516708 Secrets and safety in the age of AIDS: does HIV disclosure lead to safer sex? To fuel the HIV/AIDS epidemic, HIV-seropositive individuals must interact unsafely with HIV-seronegative individuals. Research indicates that up to one third of individuals diagnosed with HIV continue to have unprotected sex, at times without informing partners, who may be of negative or unknown serostatus. Some research and public health interventions have focused on encouraging HIV-positive individuals to reveal their serostatus to their sexual partners, predicated upon the assumption that disclosure will increase the safety of subsequent sexual activity with informed partners. This review examines the empirical literature on disclosure of HIV status and subsequent sexual risk behaviors of HIV-positive individuals. Only 15 of the 23 studies reviewed provided data that allowed us to examine the association between disclosure and safer sex. Fewer still provided a methodologically sound analysis, and those that did provided conflicting results, often with significant effects limited to only 1 subgroup of participants. However, this failure to demonstrate a consistent association does not necessarily mean that disclosure is irrelevant to the practice of safer sex. We discuss limitations of the research to date and implications for policy and practice.
abstract_id: PUBMED:27214322 Communication about safer sex and serostatus disclosure in HIV-positive adolescents with haemophilia. Objectives: To assess the attitudes, beliefs and feelings of adolescents and young men with severe haemophilia with respect to discussing safer sex and disclosing their human immunodeficiency virus (HIV) seropositivity to potential sex partners. Methods: Fifty-nine males with haemophilia from throughout the US answered open-ended questions. Results: Talking about avoidance of transmitting AIDS and disclosing one's seropositivity was seen as beneficial, moral and wise. Nevertheless, this was exceedingly difficult, unpleasant, and fraught with fear of rejection and alienation. Communication was approved by family, friends, and health-care providers. Facilitators of communication were: knowledge and an accepting attitude about persons with HIV, a supportive person to assist with discussion, and environmental cues. Conclusion: This first report of HIV-infected adolescents and young adults reveals that although they endorse discussing safer sex and disclosing their HIV seropositivity, they are painfully aware of the social and interpersonal risks of such extremely difficult communications.
Results indicated that men who disclosed their seropositive status and explicitly discussed the topic of safer sex with their at-risk partners had a significantly higher prevalence of protected anal or vaginal intercourse than did men who disclosed only. The findings suggest that post-test counselling regarding the importance of disclosing one's seropositive status to sex partners should be augmented by behavioural interventions that enhance seropositive persons' skills in communicating explicitly with partners about safer sex to help reduce transmission of HIV. abstract_id: PUBMED:27570762 Perspectives and Practice of HIV Disclosure to Children and Adolescents by Health-Care Providers and Caregivers in sub-Saharan Africa: A Systematic Review. Background: Sub-Saharan Africa (SSA) has the highest prevalence of HIV globally, and this is due to persistent new HIV infections and decline in HIV/AIDS-related mortality from improved access to antiretroviral (ART) therapy. There is a limited body of work on perspectives of health-care providers (HCPs) concerning disclosing outcomes of HIV investigations to children and adolescents in SSA. Most studies are country-specific, indicating a need for a regional scope. Objective: To review the current literature on the perspectives of HCPs and caregivers of children and adolescents on age group-specific and culture-sensitive HIV disclosure practice. Methods: Electronic database search in PubMed, Google scholar, and the University of South Florida Library Discovery Tool (January 2006 up to February 2016). Further internet search was conducted using the journal author name estimator search engine and extracting bibliographies of relevant articles. 
Search terms included "disclosure*," "HIV guidelines," "sub-Saharan Africa," "clinical staff," "ART," "antiretroviral adherence," "people living with HIV," "pediatric HIV," "HIV," "AIDS," "health care provider," (HCP), "caregiver," "adolescent," "primary care physicians," "nurses," and "patients." Only studies related to HIV/AIDS disclosure, HCPs, and caregivers that clearly described perspectives and interactions during disclosure of HIV/AIDS sero-status to affected children and adolescents were included. Independent extraction of articles was conducted by reviewers using predefined criteria. Nineteen articles met inclusion criteria. Most studies were convenience samples consisting of combinations of children, adolescents, HCPs, and caregivers. Key findings were categorized into disclosure types, prevalence, facilitators, timing, process, persons best to disclose, disclosure setting, barriers, and outcomes of disclosure. Conclusion: Partial disclosure is appropriate for children in SSA up to early adolescence. Caregivers should be directly involved in disclosing to children but they require adequate disclosure support from HCPs. Full disclosure is suitable for adolescents. Adolescents prefer disclosure by HCPs and they favor peer-group support from committed peers and trained facilitators, to reduce stigma. HCPs need continuous training and adequate resources to disclose in a patient-centered manner. abstract_id: PUBMED:32008363 Addressing fertility desires and preconception care needs of men living with HIV: perspectives from HIV providers about addressing the reproductive needs of male patients. Preconception care is an essential component of health, particularly among women and men living with HIV and can optimize medical and psychosocial outcomes. However, there is a paucity of data on this topic, especially when evaluating provider communication with male patients. 
We conducted a multi-site qualitative study in 7 cities in the United States (US) with 92 providers to assess their attitudes and practices regarding preconceptual counseling, safer conception, and preconception care with their patients living with HIV. Providers were contacted to schedule a phone interview. Recorded interviews were transcribed and coded for a priori and emergent themes. Providers reported infrequent communication with male patients with HIV about their reproductive plans and the use of safer conception, acknowledging they were more likely to initiate such communication with female patients. A small percentage of providers reported talking to all of their patients about reproductive options, including men having sex with men (MSM). Currently, there is no consensus or evidence-based guideline for the delivery of preconception care specific to men. Based on our results, we recommend that providers offer preconception care to all men as part of optimizing family planning and pregnancy outcomes; enhancing reproductive health; preparing men for fatherhood; and in the setting of HIV infection, preventing transmission to an uninfected partner. abstract_id: PUBMED:27005783 Disclosure of HIV Status to Health Care Providers in the Netherlands: A Qualitative Study. We qualitatively investigated perspectives on HIV disclosure to health care providers (HCP) by people living with HIV (PLWH). Perspectives varied across PLWH and between PLWH and HCP. Some PLWH felt they should always disclose so that HCP could take necessary precautions or because disclosure optimized care. Others felt that disclosure was not an obligation but a courtesy. Still others felt that disclosure was unnecessary as all HCP should apply universal precautions or because HIV status was not relevant to care. Most HCP claimed they should be informed about patients' HIV status as this would reduce occupational risk of infection and improve care. 
HCPs also felt that disclosure concerns by PLWH were unnecessary given the HCPs' duty of professional confidentiality. Some acknowledged that disclosure was not always necessary but still indicated wanting to be informed. Perspectives on HIV disclosure in health care settings differed substantially between PLWH and HCPs. Answer: Yes, HIV care providers are talking with patients about safer sex and disclosure, but the extent and nature of these discussions vary. A cross-sectional survey of HIV-positive patients at six public HIV clinics in California found that 71% of patients reported that a provider had talked with them at least once about safer sex, and 50% reported discussion of disclosure (PUBMED:12351956). However, the frequency and depth of these conversations can differ significantly by clinic, patient demographics, and provider type. Healthcare providers in South Africa expressed concerns about managing the complexities related to serodiscordant couples wishing to conceive, including issues related to disclosure of HIV status between partners (PUBMED:24629843). Providers require support in dealing with HIV disclosure issues and need focused training to address ethical and relationship issues critical in the context of safer conception care. A systematic review found that disclosure rates of same-sex behaviors to healthcare providers among men who have sex with men (MSM) varied widely, and disclosure was positively associated with uptake of HIV testing (PUBMED:29947563). However, African American MSM were less likely to disclose, and nondisclosure was often due to lack of probing from healthcare providers, concerns about confidentiality breaches and stigma, and perceived irrelevance to services.
Another study found that over two-thirds of Black MSM had disclosed same-sex behavior to healthcare providers, and disclosure was associated with self-reporting as HIV-positive, more frequent network socialization, and having a social network where all members knew the participant had sex with men (PUBMED:30449137). In the context of HIV-positive adolescents with hemophilia, communication about safer sex and disclosing HIV seropositivity was seen as beneficial, moral, and wise, but also difficult and unpleasant due to fear of rejection and alienation (PUBMED:27214322). Overall, while HIV care providers are engaging in discussions about safer sex and disclosure with patients, there are challenges and variations in practice that need to be addressed to ensure effective communication and support for all patients (PUBMED:12351956, PUBMED:24629843, PUBMED:29947563, PUBMED:30449137, PUBMED:27214322).
Instruction: Do the limits of serum prolactin in disconnection hyperprolactinaemia need re-definition? Abstracts: abstract_id: PUBMED:27540328 Factors Influencing Disconnection Hyperprolactinemia and Reversal of Serum Prolactin after Pituitary Surgery in a Non-Functioning Pituitary Macroadenoma. Background: To investigate factors influencing disconnection hyperprolactinemia, including tumour volume, degree of pituitary stalk displacement and extent of tumour growth based on a modified Wilson-Hardy classification in a non-functioning pituitary macroadenoma, and to confirm reductions in serum prolactin levels after endoscopic transsphenoidal surgery. Methods: This prospective, descriptive study was conducted in the Department of Neurosurgery, General Hospital Kuala Lumpur from Jan 1, 2011 to Jan 1, 2013. Forty patients fulfilling the inclusion criteria were enrolled. All patients underwent endoscopic transsphenoidal resection of a non-functioning pituitary macroadenoma. Pituitary stalk angle, tumour volume and extent of tumour growth were measured from Magnetic Resonance Imaging (MRI) pre- and post-operatively. These variables were compared to serum prolactin levels measured pre- and post-operatively. SPSS 21 was used to perform statistical analyses. Results: In 40 patients, the mean tumour volumes were 10.58 cm(3) (SD 7.81) pre-operatively and 3.1 cm(3) (SD 3.45) post-operatively. There was a 70% reduction in tumour volume post-operatively (P < 0.01). The mean serum prolactin was 457 mIU/L (SD 66.93) pre-operatively and 297 mIU/L (SD 6.73) post-operatively. There was a 65% reduction in serum prolactin levels after surgery (P < 0.01). The mean pituitary stalk angles were 93.45 ± 3.89 degrees pre-operatively and 51.45 ± 1.46 degrees post-operatively (P = 0.01). The mean pituitary stalk angle in the control group was 50.4 ± 8.80 degrees. Hence, there was a 98% reduction in pituitary stalk angle after surgery (P < 0.01).
This study showed a linear correlation between the pre-operative and post-operative tumour volumes and serum prolactin levels (P = 0.01 pre- and post-operative) and between serum prolactin levels and pituitary stalk angle (P = 0.20 pre-operative; P = 0.01 post-operative). Conclusion: Tumour volume and pituitary stalk angle displacement have positive predictive values for disconnection hyperprolactinemia in non-functioning pituitary macroadenoma. However, a larger sample size and further objective studies are needed to confirm these findings. abstract_id: PUBMED:16984247 Do the limits of serum prolactin in disconnection hyperprolactinaemia need re-definition? A study of 226 patients with histologically verified non-functioning pituitary macroadenoma. Background: The differentiation of a pituitary non-functioning macroadenoma from a macroprolactinoma is important for planning appropriate therapy. Serum PRL levels have been suggested as a useful diagnostic indicator. However, values between 2500 and 8000 mU/l are a grey area and are currently associated with diagnostic uncertainty. Objective: We wished, therefore, to investigate the serum PRL values in a large series of patients presenting with apparently non-functioning pituitary macroadenomas. Patients And Methods: All patients presenting to the Department of Endocrinology in Oxford with clinically non-functioning pituitary macroadenomas (later histologically verified) between 1990 and 2005 were studied. Information documented in the notes on the medications and on the presence of conditions capable of affecting the serum PRL levels at the time of blood sampling was also collected. Results: Two hundred and twenty-six patients were identified (median age at diagnosis 55 years, range 18-88 years; 146 males/80 females; 143 gonadotroph, 46 null cell, 25 plurihormonal and 12 silent ACTH adenomas). All tumours had suprasellar extension.
At the time of blood sampling 41 subjects were taking medications capable of increasing serum PRL. Hyperprolactinaemia was found in 38.5% (87/226) of the patients. The median serum PRL values in the total group were 386 mU/l (range 16-3257) (males: median 299 mU/l, range 16-1560; females: median 572 mU/l, range 20-3257) and in those not taking drugs capable of increasing serum PRL 363 mU/l (range 16-2565) (males: median 299 mU/l, range 16-1560; females: median 572 mU/l, range 20-2565). Serum PRL < 2000 mU/l was found in 98.7% (223/226) of the total group and in 99.5% (184/185) of those not taking drugs. Among the three subjects with serum PRL > 2000 mU/l, two were taking oestrogen preparations. Conclusions: Based on a large series of histologically confirmed cases, serum PRL > 2000 mU/l is almost never encountered in nonfunctioning pituitary macroadenomas. Values above this limit in the presence of a macroadenoma should not be surrounded by diagnostic uncertainty (after acromegaly or Cushing's disease have been excluded); a prolactinoma is the most likely diagnosis and a dopamine agonist should be considered as the treatment of choice. abstract_id: PUBMED:23385474 Serum prolactin concentration at presentation of non-functioning pituitary macroadenomas. Objective: Serum PRL levels at presentation may be useful in distinguishing between disconnection hyperprolactinemia in non-secretory pituitary adenomas and prolactinomas in order to guide appropriate therapy; however, there is a debate regarding the discriminatory PRL thresholds. We aimed to examine PRL concentrations at presentation in a cohort of histologically proven non-functioning pituitary adenomas (NFPA). Design And Methods: Retrospective case note analysis was performed. Clinical, biochemical, histopathological and radiological data were recorded and analyzed. Complete data were available for 250 subjects with NFPA. 
Results: Of the study population, 44.8% were hyperprolactinemic at presentation, 55.3% of whom were female. Of those with hyperprolactinemia, 73.2% had PRL<1000 mIU/l on presentation, and 24.1% had PRL between 1000 and 1999 mIU/l. Only 2.7% (n = 3, all female; 1.2% of the whole cohort) had PRL>2000 mIU/l (94.3 ng/ml), 2 of whom were pregnant. No male subject and no subjects with an intrasellar macroadenoma had serum PRL>1000 mIU/l (47.2 ng/ml). Overall, serum PRL was not higher among 43 subjects taking medications known to raise PRL. Conclusions: Our data support recent evidence that the serum PRL concentration is rarely >1000 mIU/l in males, or >2000 mIU/l in females, with non-functioning macroadenomas and that, once other contributing factors to the hyperprolactinemia have been excluded, a trial of dopamine agonist therapy for such lesions is indicated. abstract_id: PUBMED:15481638 Serum prolactin in human immunodeficiency virus infection. Patients infected with human immunodeficiency virus (HIV) have higher serum prolactin compared to healthy controls, but this is controversial. As part of a laboratory audit investigating the interference of macroprolactin in our prolactin assay, we investigated whether low biological activity macroprolactin could account for the increased serum prolactin concentrations observed in HIV infection. We, therefore, compared serum total prolactin and free prolactin in 32 subjects infected with HIV (HIV+ve) with 52 subjects not infected with HIV (HIV-ve). Serum total prolactin concentrations were similar in HIV+ve and HIV-ve patients [median (95% confidence limits); 167.0 (122.4 - 313.8) vs 206.5 (187.8 - 248.4) mU/L respectively]. Serum free prolactin concentrations were lower in HIV+ve subjects than in HIV-ve subjects [112.0 (91.1 - 141.8) vs 171.0 (154.5 - 200.9) mU/L respectively; p<0.0005].
These results are consistent with the notion that low biological activity macroprolactin contributes to circulating prolactin concentrations in HIV+ve subjects. abstract_id: PUBMED:9286727 Prolactin serum levels in postmenopausal women receiving long-term hormone replacement therapy. In the literature there are different opinions about serum prolactin levels after the menopause and in women using hormone replacement therapy. We report serum prolactin levels of 165 women after surgical or natural menopause who received different kinds of hormone replacement therapies over 2 years and 6 months. Surgically or naturally menopausal women showed no significant differences in serum prolactin levels. After starting hormone replacement therapy, the serum prolactin levels decreased. During subsequent treatment cycles, serum prolactin showed varying levels within the reference limits. Women using hormone replacement therapy developed no new manifestation of hyperprolactinemia. abstract_id: PUBMED:7271910 Effect of hyperprolactinaemia due to pituitary tumour on serum albumin, protein and oncotic pressure. 1. Serum albumin, total protein, oncotic pressure and osmolality were measured in a group of patients with hyperprolactinaemia due to a prolactin secreting tumour, normal subjects and normoprolactinaemic acromegalic patients. 2. A significant increase in albumin, total protein and oncotic pressure was found in the prolactinoma patients when compared with those of the other two groups. 3. There was no difference in serum osmolality between the three groups. 4. These studies provide further evidence that prolactin may have anabolic effects. abstract_id: PUBMED:18482912 False-positive polyethylene glycol precipitation tests for macroprolactin due to increased serum globulins. Background: The polyethylene glycol (PEG) precipitation test is widely used to detect hyperprolactinaemia caused by macroprolactin. 
We report two cases of hyperprolactinaemia in which a low recovery of serum prolactin (PRL) after PEG precipitation indicated the presence of macroprolactin, but no macroprolactin was detected by gel filtration chromatography (GFC). Both cases had elevated concentrations of serum globulin (IgG myeloma and polyclonal hypergammaglobulinaemia due to human immunodeficiency virus [HIV] infection), which prompted us to investigate further the effect of serum globulin on the specificity of the PEG precipitation procedure. Methods: The effect of increasing concentrations of gamma globulin on the precipitation of PRL by PEG was studied by adding purified human gamma globulin to serum. Ten samples from HIV-infected patients, which showed a low recovery of PRL after PEG precipitation (<60%) were studied with GFC. Results: Addition of gamma globulin decreased the recovery of PRL following precipitation with PEG and gamma globulin concentrations correlated inversely with PRL concentrations (r = 0.9429, P < 0.0167) and percentage recovery of PRL (r = -1.000, P < 0.005). Only one out of 10 samples from HIV-infected patients with PRL recoveries of <60% following PEG precipitation showed a substantial macroprolactin component on GFC. Conclusions: Monomeric PRL is co-precipitated with serum globulins by PEG. Increased serum globulin concentrations can increase the amount of monomeric PRL precipitated by PEG giving a false estimate of the monomeric PRL and the erroneous impression that macroprolactin is present. The results of the PEG precipitation test should be interpreted with caution in patients with elevated serum globulin concentrations. abstract_id: PUBMED:631637 Serum prolactin in liver disease and its relationship to gynaecomastia. Serum immunoreactive prolactin was measured in 150 patients with liver disease of varying aetiology and severity and in 45 control subjects. The upper limit of the reference range for serum prolactin was 331 mU/l. 
Eighteen patients with liver disease (12%) had unexplained hyperprolactinaemia. No relationship existed between the prolactin value and the sex of the patient, the aetiology of the liver disease, the severity of the liver disease, or the presence of gynaecomastia. The cause of the hyperprolactinaemia in patients with liver disease and its clinical implications need further investigation. abstract_id: PUBMED:19082309 Severe hyperprolactinemia associated with internal carotid artery aneurysm: differential diagnosis between prolactinoma and hypothalamic-pituitary disconnection. Sellar and parasellar masses blocking inhibitory hypothalamic dopaminergic tonus can produce hyperprolactinemia. One of these conditions, seldom reported, is internal carotid artery aneurysm causing pituitary stalk compression and hyperprolactinemia, the majority of which is related to small increases in serum prolactin levels. The aim of this study is to report the case of a patient with an internal carotid aneurysm and severe hyperprolactinemia. A 72-year-old female patient, on oncology follow-up for clinically controlled cervical carcinoma, was evaluated due to worsening chronic headaches. During the investigation, computed tomography and magnetic resonance imaging (MRI) showed a sellar mass associated with a high prolactin level (1403 microg/L) that was initially considered a macroprolactinoma and treated with bromocriptine. However, subsequent pituitary MRI suggested an internal carotid aneurysm, which was confirmed by angioresonance imaging of the cerebral vessels. On a low bromocriptine dose (1.25 mg/day), there was prompt normalization of prolactin levels, with a marked increase (> 600 microg/L) after withdrawal, which was confirmed several times, suggesting HPD.
We report a patient with internal carotid artery aneurysm with severe hyperprolactinemia never reported before in patients with HPD, and the need for a differential diagnosis with macroprolactinomas even considering high prolactin levels. abstract_id: PUBMED:17253440 Hyperprolactinemia: causes, diagnosis, and treatment The basic data on hyperprolactinemia (i.e. an excess of PRL above a reference laboratory's upper limits), the most common endocrine disorder of the hypothalamic-pituitary axis are given in this review. The following issues are discussed: regulation of prolactin (Prl) secretion, definition of hyperprolactinemia, its etiology and pathogenesis as well as its symptoms, diagnosis, and treatment (including medical and surgical therapy). It should be stressed that finding of elevated PRL serum concentrations constitute the beginning of diagnostic procedure and, after exclusion of physiologic, pharmacologic, and other organic causes of increased PRL levels, should be followed by detailed diagnosis including MRI. In patients in whom hyperprolactinemia has been confirmed the treatment with dopamine agonists (with prevalence of cabergoline, followed by quinagoline) is currently considered first-choice therapy. Surgery should be performed only in the patients resistant or intolerant to these agents, or in patients who refuse long-term therapy. Answer: The limits of serum prolactin in disconnection hyperprolactinemia have been a subject of debate, particularly in distinguishing non-functioning pituitary macroadenomas from macroprolactinomas, which is crucial for planning appropriate therapy. According to a study of 226 patients with histologically verified non-functioning pituitary macroadenomas, serum prolactin levels greater than 2000 mU/l are almost never encountered in such cases. 
The study suggests that values above this limit should not be surrounded by diagnostic uncertainty, and a prolactinoma is the most likely diagnosis, indicating that a dopamine agonist should be considered as the treatment of choice (PUBMED:16984247). Another study supports this evidence, indicating that serum prolactin concentration is rarely greater than 1000 mU/l in males or greater than 2000 mU/l in females with non-functioning macroadenomas. The study concludes that once other contributing factors to hyperprolactinemia have been excluded, a trial of dopamine agonist therapy for such lesions is indicated (PUBMED:23385474). These findings suggest that the current limits for serum prolactin in disconnection hyperprolactinemia may not need redefinition, as they appear to be adequate for distinguishing between non-functioning pituitary macroadenomas and prolactinomas. However, it is important to consider individual patient factors and to exclude other potential causes of hyperprolactinemia before making a diagnosis and deciding on the appropriate therapy.
Instruction: Is pain from mammography reduced by the use of a radiolucent MammoPad? Abstracts: abstract_id: PUBMED:27101790 Is pain from mammography reduced by the use of a radiolucent MammoPad? Local experience in Hong Kong. Introduction: Screening mammogram can decrease the mortality of breast cancer. Studies show that women avoid mammogram because of fear of pain, diagnosis, and radiation. This study aimed to evaluate the effectiveness of a radiolucent pad (MammoPad; Hologic Inc, Bedford [MA], US) during screening mammogram to reduce pain in Chinese patients and the possibility of glandular dose reduction. Methods: This case series was conducted in a private hospital in Hong Kong. Between November 2011 and January 2012, a total of 100 Chinese patients were recruited to our study. Left mammogram was performed without MammoPad and served as a control. Right mammogram was performed with the radiolucent MammoPad. All patients were then requested to complete a simple questionnaire. The degree of pain and discomfort was rated on a 0-10 numeric analogue scale. Significant reduction in discomfort was defined as a decrease of 10% or more. Results: Of the 100 patients enrolled in this study, 66.3% of women reported at least a 10% reduction in the level of discomfort with the use of MammoPad. No statistical differences between age, breast size, and the level of discomfort were found. Conclusion: The use of MammoPad significantly reduced the level of discomfort experienced during mammography. Radiation dose was also reduced. abstract_id: PUBMED:15239794 Reduction of discomfort during mammography utilizing a radiolucent cushioning pad. The purpose of this study was to evaluate the use of a radiolucent cushioning pad during routine screening mammography to determine the degree of reduction in discomfort, any correlation of that reduction with various clinical factors, and the pad's impact on image quality, compression force, and radiation dose. 
A total of 512 patients presenting for routine screening mammography were evaluated. The radiolucent cushioning pads (MammoPad, Biolucent Inc., Aliso Viejo, CA) were placed on the bucky and compression paddle of the mammographic equipment (M-IV, Lorad Corp., Danbury, CT). For each patient, the radiolucent pads were used for imaging one breast while the opposite breast, which served as a control, was imaged without the pads. In all patients, the breast was positioned and compressed in standard fashion, using routine craniocaudal (CC) and mediolateral oblique (MLO) views. Patients completed a questionnaire before and after the mammogram. A visual analog scale (VAS) was used to record anticipated discomfort as well as the discomfort experienced during compression with and without the pads. Age, hormone replacement status, prior experience with mammography, compression force, and breast composition were recorded, as were dose parameters (mAs and kVp), from which midglandular dose values were calculated. On the side where the pads were used to cushion the breast, 73.5% (371/505) of women experienced a significant decrease in discomfort. Of those who benefited, there was a 47% decrease in discomfort. On the pad side, compression force was increased an average of 14%, and there was a 4% decrease in dose for the CC view and no increase in dose for the MLO view. When comparing benefit versus nonbenefit groups, a significant reduction in discomfort did not correlate with any of the clinical factors evaluated, with the exception of experienced discomfort reported after the mammogram. Use of the MammoPad radiolucent cushion on the bucky and compression paddle was an effective means of reducing discomfort during compression for a majority (73.5%) of our patients undergoing screening mammography. Image quality and the ability to detect all pertinent anatomy were maintained. abstract_id: PUBMED:15191098 Clinical assessment of a radiolucent cushion for mammography. 
Purpose: To determine whether use of a radiolucent cushion could significantly decrease pain during screening mammography without compromising image quality or other technical factors. Material And Methods: 838 patients presenting for routine screening mammography were evaluated. The radiolucent cushions were placed on the compression surfaces of the mammographic equipment and were used while imaging the right breast. No pads were used while imaging the left breast. Patient age, hormonal status, compression force, and radiation dose values were collected on all patients. Each subject completed a visual analog pain scale (VAS) rating the degree of pain experienced with and without the cushions. All mammographic images obtained (CC and MLO views) were compared, side by side (cushioned versus non-cushioned) by the readers. Results: Use of radiolucent cushions reduced pain by 10% or more in 66% (555/838) of women. Patients in this "benefited group" experienced an average pain reduction of 53%. No compromise of image quality was observed. Compression force and radiation dose values were highly correlated between the cushioned and non-cushioned sides. Conclusion: Two-thirds of women experienced a significant reduction of pain when the radiolucent cushions were used during mammography. Pain reduction was accomplished without any clinically significant change in compression force, radiation dose values, or image quality. abstract_id: PUBMED:19135922 Reduced compression mammography to reduce breast pain. This study evaluated whether reduced compression mammography to relieve breast tenderness is feasible. Women can better tolerate a compression force of approximately 90 N in mammography. The breast thickness increased approximately 3 mm when the compression force was reduced to 90 N, and although the radiation dose increased approximately 20%, the image quality was identical to that with standard compression. 
Many patients experience breast pain with a compression force of 120 N. Reduced compression force mammography is acceptable in women whose breasts are particularly sensitive. abstract_id: PUBMED:37306146 Proximal radiolucent lines around fully hydroxyapatite-coated tapered femoral stems: should we be concerned? Introduction: Radiolucent lines occasionally develop around the proximal aspect of fully hydroxyapatite (HA)-coated tapered femoral stems after total hip arthroplasty (THA). It was hypothesised that distal wedging of stems may predispose to proximal radiolucent line formation, which may negatively impact clinical outcome. Methods: All primary THA performed with a collarless fully HA-coated stem that had a minimum of 1 year of radiographic follow-up were identified in a surgical database (n = 244). Radiographic measures of proximal femoral morphology and femoral canal fill at the middle and distal thirds of the stem were analysed for association with the presence of proximal radiolucent lines. Linear regression was used to explore any association between radiolucent lines and patient reported outcome measures (PROMs), available in 61% of patients. Results: Proximal radiolucent lines developed in 31 cases (12.7%) at final follow-up. Dorr A femoral morphology and increased canal-fill at the distal ⅓ of the stem correlated with the development of radiolucent lines (p < 0.001). No correlation was observed between pain or PROMs and the presence of proximal radiolucent lines. Discussion: We observed an unexpectedly high incidence of proximal femoral radiolucent lines about collarless fully HA coated stems. Distal-only implant wedging in Dorr A bone may compromise proximal fixation. Although this finding did not correlate with short-term outcomes, the long-term clinical impact requires further study. abstract_id: PUBMED:15767195 Mammography with breast cushions. 
Objective: We conducted a randomized clinical trial to determine the impact on pain and image quality when breast cushions were used to pad the surfaces of the mammography equipment during film-screen mammography. Methods: We recruited a consecutive volunteer sample of 394 participants. Breast cushions were used for only one breast, with laterality and sequence of use assigned randomly. Data collected from participants included demographic data, rating of pain from previous mammography, and rating of pain from present mammography using both a numeric rating scale and a visual analogue scale. Research assistants also collected breast compression and radiation exposure data. Radiologists were blinded to the laterality of cushion assignment while reading the mammograms and assessing image quality. Results: Participants were primarily white women (75.3%), mean age 55.4 years. Most (94.4%) reported having previous mammography. Eight percent (n = 32) of those surveyed had thought about skipping or delaying mammography because of the pain involved. The pain associated with mammography was significantly (p < .001) less during oblique and craniocaudal views when breast cushions were used during the procedure. Retakes were required for 2% of the 1576 views with the most common reason being positioning (53%). Conclusion: The use of breast cushions significantly reduced the pain during film-screen mammography. Image quality with the cushions was reduced in a very small subset of women probably due to the difficulty in positioning the breast without visual clues. More research needs to be done prior to the routine use of these cushions in clinical practice. abstract_id: PUBMED:1861047 The influence of ethnicity, socioeconomic status, and psychological barriers on use of mammography. This study assessed the relative influence of psychological barriers, SES, and ethnic differences in mammography use for a community sample of 586 White, 227 Black, and 150 Hispanic women. 
Confirmatory factor analyses with latent variables indicated plausible factor structures for all groups on items related to barriers to mammography. Summed indicators of SES, fear of radiation, embarrassment, pain, anxiety, and cost concerns were correlated significantly with mammography use for the pooled group. Separate analyses by ethnicity indicated a substantial relationship between mammography use and cost concerns by White and Black women, and fear of pain by Black and Hispanic women. Use of mammography was associated more highly with SES among Hispanic women. Pooled logistic regression analyses controlling for SES and ethnicity showed that the psychological barriers, especially concern about cost, remained important independent predictors of mammography use. We explore sociocultural explanations for less mammography use by Hispanic women, especially those less acculturated. abstract_id: PUBMED:20857346 Predictors of mammography use in older women with disability: the patients' perspectives. To determine the factors associated with mammography use among Medicare beneficiaries and reasons for nonuse. Cohort of 4610 community-dwelling Medicare beneficiaries ≥ 65 years included in the 2004-2005 Medicare Current Beneficiary Survey. Regression models evaluated the association of disability with mammography use. Reasons for underuse are described. Women with disability were more likely than women with no disability to report lower mammography use (unadjusted, moderate disability OR = 0.76; 95% CI = 0.64, 0.91; severe disability OR = 0.46; 95% CI = 0.40, 0.54). Lower use was significant for women with severe disability (adjusted, OR = 0.67; 95% CI = 0.54, 0.83) and women with fair-poor self-rated health, no HMO enrollment and ≥ 3 comorbidities. No physician recommendation, no need, dislike/pain during the test and forget it were reasons for underutilization. Mammography use decreases with increasing level of disability. 
Common reasons for underutilization were lack of a physician recommendation, perceived lack of need, dislike of or pain during the test, and forgetting. Screening guidelines should be used to target women with disabilities who can benefit from mammography. abstract_id: PUBMED:26272030 Mammography with and without radiolucent positioning sheets: Comparison of projected breast area, pain experience, radiation dose and technical image quality. Purpose: To compare projected breast area, image quality, pain experience and radiation dose between mammography performed with and without radiolucent positioning sheets. Methods: 184 women screened in the Dutch breast screening programme (May-June 2012) provided written informed consent to have one additional image taken with positioning sheets. 5 cases were excluded (missing data). Pain was scored using the Numeric Rating Scale. Radiation dose was estimated using the Dance model, and projected breast area using computer software. Two radiologists and two radiographers assessed image quality. Results: With positioning sheets, significantly more pectoral muscle and lateral and medial breast tissue was projected (CC views), and the pectoral muscle was depicted more extensively and to a greater depth (MLO views). In contrast, visibility of white and darker areas was better on images without positioning sheets, so radiologists were better able to detect abnormalities (MLO views). Women experienced more pain with positioning sheets (MLO views only; mean difference NRS 0.98; SD 1.71; p = 0.00). Conclusion: Mammograms with positioning sheets showed more breast tissue. Increased breast thickness after compression with sheets resulted in less visibility of white and darker areas and thus reduced detection of abnormalities. Also, women experienced more pain (MLO views) due to the sheet material. A practical consideration is the fact that more subcutaneous fat tissue and skin are pulled forward, leading to folds in the nipple area.
On balance, improvement to the current design is required before implementation in screening practice can be considered. abstract_id: PUBMED:33858459 Radiographic assessment of radiolucent lines around a highly porous titanium cup (Tritanium) using digital tomosynthesis, after total hip arthroplasty. Background: The objectives of this study were to assess radiolucent lines around a highly porous titanium cup (Tritanium) using digital tomosynthesis and to investigate the clinical and radiographic factors associated with radiolucent lines on tomosynthesis. Methods: Fifty-five patients underwent total hip arthroplasty using a Tritanium cup, and digital tomosynthesis and plain radiography were performed at 1 week, 6 months, 1 year, and 2 years after surgery. The radiolucent lines around the cup were measured on both DTS and plain radiography at each postoperative period. Clinical evaluations were performed by the Japanese Orthopaedic Association hip disease evaluation questionnaire (JHEQ), and revision surgeries were examined. Based on the presence of radiolucent lines on digital tomosynthesis at 2 years postoperatively, patients were divided into RL (+) and RL (-) groups and investigated for related factors. Results: There were 20 cases in the RL (+) group and 35 cases in the RL (-) group, and no revision surgeries were required. Statistically, there were more cases with radiolucent lines on digital tomosynthesis (45.4% at 1 week and 36.3% at 2 years) than on plain radiography (9.1% at 1 week and 9.1% at 2 years) at each postoperative point. Logistic analysis showed no significant associations between the presence of radiolucent lines at 2 years on digital tomosynthesis, and the JHEQ parameters of pain (p = 0.937), movement (p = 0.266), or mental status (p = 0.404). Conclusion: In a short-term evaluation up to 2 years, digital tomosynthesis detected more radiolucent lines around the titanium cups than plain radiography. 
The occurrence of radiolucent lines was not related to the postoperative clinical evaluation. Answer: Yes, pain from mammography is reduced by the use of a radiolucent MammoPad. Studies have shown that the use of MammoPad significantly reduced the level of discomfort experienced during mammography. In Hong Kong, 66.3% of women reported at least a 10% reduction in the level of discomfort with the use of MammoPad, and radiation dose was also reduced (PUBMED:27101790). Another study found that 73.5% of women experienced a significant decrease in discomfort with the use of MammoPad, with an average of 47% decrease in discomfort (PUBMED:15239794). Similarly, 66% of women experienced a significant reduction of pain when the radiolucent cushions were used, with an average pain reduction of 53%, without any clinically significant change in compression force, radiation dose values, or image quality (PUBMED:15191098). However, it is important to note that while the use of MammoPad can reduce discomfort, the impact on image quality and other technical factors should also be considered. For instance, one study found that while mammograms with positioning sheets showed more breast tissue, the increased breast thickness after compression with sheets resulted in less visibility of white and darker areas and thus reduced detection of abnormalities. Additionally, women experienced more pain with positioning sheets during MLO-views (PUBMED:26272030). Therefore, while MammoPads can reduce pain, their use should be balanced with considerations for maintaining image quality.
Instruction: Is there a relationship between preferred posture and positioning in early life and the direction of subsequent asymmetrical postural deformity in non ambulant people with cerebral palsy? Abstracts: abstract_id: PUBMED:18796054 Is there a relationship between preferred posture and positioning in early life and the direction of subsequent asymmetrical postural deformity in non ambulant people with cerebral palsy? Background: It has been suggested that asymmetrical positioning of an infant with reduced mobility may lead to postural deformity becoming established over time. However, evidence to support or question this line of thinking is lacking. Therefore, the aim of this retrospective cohort study was to test the association between asymmetrical positioning in the first 12 months of life and the subsequent direction of postural deformity in non-ambulant people with cerebral palsy. Methods: The direction of scoliosis, pelvic obliquity and windswept hip pattern and also the side of unilateral hip subluxation/dislocation were determined for 246 young people ranging in age from 1 year and 2 months to 19 years (median age 10 years and 3 months). Parents/carers of the participants were interviewed to establish holding and feeding positions and preferred lying posture adopted in early life. Univariate analyses and multivariate logistic regression analyses were carried out. Results: The study provided evidence of an association between asymmetrical lying posture adopted in the first year of life and the direction of the subsequent pattern of postural deformity. If the child's head had been rotated to the right during supine lying, it was more likely that the scoliosis would be convex to the left, pelvic obliquity would be lower on the left, windswept hip pattern would be to the right and hip subluxation/dislocation would occur on the left. 
The likelihood of the deformities occurring in the same direction was also increased if consistent side lying on the right had been preferred. Conclusions: Clinicians should be aware of positioning for children with severe disabilities, particularly those who prefer supine lying with their head rotated to the side and those who prefer consistent side lying. abstract_id: PUBMED:20030660 Is there a relationship between foetal position and both preferred lying posture after birth and pattern of subsequent postural deformity in non-ambulant people with cerebral palsy? Background: A pattern of postural deformity was observed in a previous study that included an association between direction of spinal curvature and direction of windsweeping, with more windswept deformities occurring to the right and lateral spinal curvatures occurring convex to the left. The direction of this pattern was found to be associated with preferred lying posture in early life. The aim of this study was to test the association between foetal position and both the preferred lying posture after birth and the direction of subsequent postural deformity in non-ambulant children with cerebral palsy (CP). Methods: A retrospective cohort study was carried out involving 60 participants at level five on the gross motor function classification for CP. Foetal position during the last month of pregnancy was taken from antenatal records and parents were interviewed to identify preferred lying posture in the first year of life. At the time of the physical assessment ages ranged from 1 year and 1 month to 19 years with a median age of 13 years and 1 month. Results: Foetal presentation was found to be associated with the preferred lying posture, with participants carried in a left occipito-anterior/lateral position more likely to adopt a supine head right lying posture, and vice versa.
An association was also observed between the foetal position and asymmetrical postural deformity occurring later in life, with participants carried in a left occipito-anterior/lateral presentation more likely to have a convex left spinal curve, a lower left pelvic obliquity, and a windswept hip pattern to the right. Conclusions: Clinicians should be aware of the association between foetal presentation, asymmetrical lying posture, and the direction of subsequent postural deformity for severely disabled children. A hypothesis is described that might help to explain these findings. abstract_id: PUBMED:18042604 Patterns of postural deformity in non-ambulant people with cerebral palsy: what is the relationship between the direction of scoliosis, direction of pelvic obliquity, direction of windswept hip deformity and side of hip dislocation? Objective: To investigate: (a) associations between the direction of scoliosis, direction of pelvic obliquity, direction of windswept deformity and side of hip subluxation/dislocation in non-ambulant people with cerebral palsy; and (b) the lateral distribution of these postural asymmetries. Design: Cross-sectional observational study. Setting: Posture management services in three centres in the UK. Subjects: Non-ambulant people at level five on the gross motor function classification system for cerebral palsy. Main Measures: Direction of pelvic obliquity and lateral spinal curvature determined from physical examination, direction of windswept hip deformity derived from range of hip abduction/adduction, and presence/side of unilateral hip subluxation defined by hip migration percentage. Results: A total of 747 participants were included in the study, aged 6-80 years (median 18 years 10 months). Associations between the direction of scoliosis and direction of pelvic obliquity, and between the direction of windswept hip deformity and side of hip subluxation/dislocation were confirmed.
A significant association was also seen between the direction of scoliosis and the direction of the windswept hip deformity (P<0.001), such that the convexity of the lateral spinal curve was more likely to be opposite to the direction of windsweeping. Furthermore, significantly more windswept deformities to the right (P=0.007), hips subluxed on the left (P=0.002) and lateral lumbar/lower thoracic spinal curves convex to the left (P=0.03) were observed. Conclusions: The individual asymmetrical postural deformities are not unrelated in terms of direction and not equally distributed to the left/right. A pattern of postural deformity was observed. abstract_id: PUBMED:29295638 Postural asymmetry in non-ambulant adults with cerebral palsy: a scoping review. Purpose: Non-ambulant adults with cerebral palsy are vulnerable to development of postural asymmetry and associated complications. The primary aim of this scoping review was to identify postural deformities in non-ambulant adults with cerebral palsy. Materials And Methods: Comprehensive searches were undertaken in EMBASE, CINAHL, AMED, Cochrane, Psych INFO, and Joanna Briggs (1986-Jan 2017), supplemented by hand searching. Two reviewers independently extracted data using a customised tool focusing on study design, participant characteristics, postural descriptors, measurement tools, and interventions. Results: From 2546 potential records, 17 studies were included. Variability in populations, reporting methodology, and measurement systems was evident. Data suggest more than 30% of this population have a hip migration percentage in excess of 30%, more than 75% experience "scoliosis", and more than 40% demonstrate pelvic obliquity. Estimates of contracture incidence ranged from 14% to 100% for the hip and from 32% to 87% for the knee. Conservative interventions were infrequently and poorly described.
Conclusion: Many non-ambulant adults with cerebral palsy experience postural asymmetry associated with windswept hips, scoliosis, pelvic obliquity, and limb contracture. Options for non-radiographic monitoring of postural asymmetry should be identified, and conservative interventions formally evaluated in this population. Implications for rehabilitation: The common postural asymmetries of windswept hips, scoliosis, pelvic obliquity, and limb contracture require standardised clinical measurement. Radiography is most commonly used to monitor postural asymmetry in this population, but standardised positioning is not applied and may not be feasible, indicating a need for alternate methods and rigorous documentation. The Posture and Postural Ability Scale may be considered for use in the management of body shape in adults with CP. abstract_id: PUBMED:35275575 Evidence-informed clinical perspectives on postural management for hip health in children and adults with non-ambulant cerebral palsy. Postural management is a multi-disciplinary approach incorporating a comprehensive schedule of daily and night-time positions, equipment and physical activity to help maintain or improve body structures and function and increase activity and participation. Postural management may play a role in preventing contracture, deformity, pain, and asymmetry. This article provides an overview of the evidence supporting use of postural management to positively influence hip health in individuals with cerebral palsy functioning at Gross Motor Function Classification System (GMFCS) levels IV or V. Sitting or lying without changing position for more than 8 hours, unsupported supine lying and asymmetrical or windswept postures are associated with pain and hip subluxation/dislocation.
Although high-quality experimental research is still limited by many factors, there is limited evidence of harm, and most individuals at GMFCS IV or V require positioning supports to enable participation and function and ease caregiving. Clinical recommendations combining research and clinical opinion support the early use of comfortable positioning routines and/or equipment to reduce time spent in sustained asymmetrical or potentially harmful sitting and lying positions. Supported standing, active weightbearing and stepping are recommended to promote active movement and position change when possible, depending on individual, family and caregiver routines and preferences. abstract_id: PUBMED:26300497 Childhood cerebral palsy and the use of positioning systems to control body posture: Current practices. Introduction: One of the consequences of poor postural control in children with cerebral palsy is hip dislocation. This is due to the lack of weight-bearing in the sitting and standing positions. Orthotic aids can be used to prevent onset and/or progression. Objective: The aim of this study is to analyse the effectiveness of positioning systems in achieving postural control in patients with cerebral palsy, and discuss these findings with an emphasis on what may be of interest in the field of neurology. Discussion: We selected a total of 18 articles on interventions in cerebral palsy addressing posture and maintenance of ideal postures to prevent deformities and related problems. The main therapeutic approaches employed combinations of botulinum toxin and orthoses, which reduced the incidence of hip dislocation, although these results were not significant. On the other hand, using positioning systems in 3 different positions decreases use of botulinum toxin and surgery in children under 5 years old. The drawback is that these systems are very uncomfortable. Conclusion: Postural control systems help control hip deformities in children with cerebral palsy.
However, these systems must be used for prolonged periods of time before their effects can be observed. abstract_id: PUBMED:23138456 Asymmetric skull deformity in children with cerebral palsy: frequency and correlation with postural abnormalities and deformities. Objective: Asymmetrical skull deformity is frequently seen in children with cerebral palsy, and may contribute to postural abnormalities and deformities. The aim of this cross-sectional survey was to determine the frequency of asymmetrical skull deformity and its correlation with clinical parameters. Methods: A 10-item checklist for asymmetrical skull deformity, postural abnormalities, and deformities was developed, and its inter-rater reliability was tested. A total of 110 participants aged 1-18 years (mean age 9.3 years (standard deviation 4.7)) were assessed using the checklist. The frequency of asymmetrical skull deformity was analysed and related to the Gross Motor Function Classification System (GMFCS), postural abnormalities, and deformities. Results: The reliability of the checklist was satisfactory (κ > 0.8). Asymmetrical skull deformity was observed in 44 children, 24 showing right and 20 showing left flat occipital deformity. Its frequency was significantly related to GMFCS level and to the patterns of asymmetrical posture and deformities (p < 0.05). Children with right flat occipital asymmetrical skull deformity showed predominantly rightward facial direction and right-side-dominant asymmetrical tonic neck reflex, left convex scoliosis, right-side-elevated pelvic obliquity, and left-sided hip dislocation. Those with left flat occipital asymmetrical skull deformity demonstrated the reverse tendency. Conclusion: Asymmetrical skull deformity is frequent in cerebral palsy and closely related to asymmetrical posture and deformities. This information will be useful to manage these problems.
abstract_id: PUBMED:28830044 Postural orientation and standing postural alignment in ambulant children with bilateral cerebral palsy. Background: Standing postural alignment in children with cerebral palsy is usually altered by central postural control disorders. The primary aim of this study is to describe body alignment in a quiet standing position in ambulatory children with bilateral cerebral palsy compared with children with typical development. Methods: Fifty-eight children with bilateral cerebral palsy (aged 7-13years) and 45 age-matched children with typical development underwent a surface topography examination based on Moiré topography and were classified according to their sagittal postural profiles. Findings: The following eight grouping variables were extracted using a data reduction technique: angle of trunk inclination, pelvic tilt, and lordosis, the difference between kyphosis and lordosis, angle of vertebral lateral curvature, shoulder inclination, and shoulder and pelvic rotation. According to the cluster analysis results, 25% of the participants were classified into Cluster 1, 9% into Cluster 2, 49% in Cluster 3, and 17% in Cluster 4. Interpretation: Three different postural patterns emerged in accordance with the sagittal postural profiles in children with bilateral cerebral palsy and were defined as follows: 1) a lordotic postural pattern corresponding to forward-leaning posture; 2) a swayback postural pattern corresponding to backward-leaning posture; and 3) a balanced postural pattern corresponding to balanced posture. abstract_id: PUBMED:37747975 Effect of Adaptive Seating Systems on Postural Control and Activity Performance: A Systematic Review. Purpose: To systematically review the effectiveness of adaptive seating systems on sitting posture, postural control, and seated activity performance in children with cerebral palsy (CP). 
Summary Of Key Points: From 5 databases, 3 of 21 (14%) articles were of good quality based on the Downs and Black checklist. Commercial modular contoured seating and low-cost, paper-based contoured foam seating were effective at improving sitting posture, postural control, and seated activity performance. Parents and service providers reported that seating systems reduced stress and burden and improved psychosocial well-being and quality of life in children with CP. Conclusion: Limited evidence demonstrated that adaptive seating systems were effective at improving sitting ability and postural control. Randomized controlled trials with objective outcome measures of seating performance in children with CP are needed to evaluate effectiveness. Recommendations For Clinical Practice: Adaptive seating devices are preferred by parents and therapists for children with CP; however, objective measures of seating outcomes are needed.
Internal consistency was analysed with Cronbach's alpha and corrected Item-Total correlation. Inter-rater reliability was calculated using weighted kappa scores. Results: The Posture and Postural Ability Scale showed construct validity and median values differed between GMFCS levels (p < 0.01). There was a good internal consistency (alpha = 0.95-0.96; item-total correlation = 0.55-0.91), and an excellent inter-rater reliability (kappa score = 0.77-0.99). Conclusion: The Posture and Postural Ability Scale shows high psychometric properties for children with cerebral palsy, as previously seen when evaluated for adults. It enables detection of postural deficits and asymmetries indicating potential need for support and where it needs to be applied. Answer: Yes, there is a relationship between preferred posture and positioning in early life and the direction of subsequent asymmetrical postural deformity in non-ambulant people with cerebral palsy. A retrospective cohort study provided evidence of an association between asymmetrical lying posture adopted in the first year of life and the direction of the subsequent pattern of postural deformity. If a child's head had been rotated to the right during supine lying, it was more likely that the scoliosis would be convex to the left, pelvic obliquity would be lower on the left, windswept hip pattern would be to the right, and hip subluxation/dislocation would occur on the left. The likelihood of deformities occurring in the same direction was also increased if consistent side lying on the right had been preferred (PUBMED:18796054). Additionally, an association was observed between foetal position and asymmetrical postural deformity occurring later in life. 
Participants carried in a left occipito-anterior/lateral position during the last month of pregnancy were more likely to adopt a supine head right lying posture after birth and more likely to have a convex left spinal curve, a lower left pelvic obliquity, and a windswept hip pattern to the right (PUBMED:20030660). Furthermore, a cross-sectional observational study confirmed associations between the direction of scoliosis, direction of pelvic obliquity, direction of windswept hip deformity, and side of hip subluxation/dislocation. A pattern of postural deformity was observed, with more windswept deformities occurring to the right and lateral spinal curvatures occurring convex to the left (PUBMED:18042604). Clinicians are advised to be aware of the association between early life positioning and the direction of subsequent postural deformity for severely disabled children (PUBMED:20030660). The evidence suggests that early use of comfortable positioning routines and/or equipment can reduce time spent in sustained asymmetrical or potentially harmful sitting and lying positions, which may positively influence hip health in individuals with cerebral palsy (PUBMED:35275575).
Instruction: Was the death of Alexander the Great due to poisoning? Abstracts: abstract_id: PUBMED:24369045 Was the death of Alexander the Great due to poisoning? Was it Veratrum album? Objective: To investigate the death of Alexander the Great to determine if he died from natural causes or was poisoned and, if the latter, what was the most likely poison. Methods: OVID MEDLINE (January 1950-May 2013) and ISI Web of Science (1900-May 2013) databases were searched and bibliographies of identified articles were screened for additional relevant studies. These searches identified 53 relevant citations. Classical literature associated with Alexander's death. There are two divergent accounts of Alexander's death. The first has its origins in the Royal Diary, allegedly kept in Alexander's court. The second account survives in various versions of the Alexander Romance. Nature of the terminal illness. The Royal Diary describes a gradual onset of fever, with a progressive inability to walk, leading to Alexander's death, without offering a cause of his demise. In contrast, the Romance implies that members of Alexander's inner circle conspired to poison him. The various medical hypotheses include cumulative debilitation from his previous wounds, the complications of alcohol imbibing (resulting in alcohol hepatitis, acute pancreatitis, or perforated peptic ulcer), grief, a congenital abnormality, and an unhealthy environment in Babylon possibly exacerbated by malaria, typhoid fever, or some other parasitic or viral illness. Was it poisoning? Of all the chemical and botanical poisons reviewed, we believe the alkaloids present in the various Veratrum species, notably Veratrum album, were capable of killing Alexander with comparable symptoms to those Alexander reportedly experienced over the 12 days of his illness. 
Veratrum poisoning is heralded by the sudden onset of epigastric and substernal pain, which may also be accompanied by nausea and vomiting, followed by bradycardia and hypotension with severe muscular weakness. Alexander suffered similar features for the duration of his illness. Conclusion: If Alexander the Great was poisoned, Veratrum album offers a more plausible cause than arsenic, strychnine, and other botanical poisons. abstract_id: PUBMED:15081504 The death of Alexander the Great: malaria or typhoid fever? Alexander the Great had a profound effect on world history. His conquests covered the entire known world at the time, and he was responsible for the spread of Greek culture throughout the ancient world. In Babylon in 323 BC, Alexander died when he was nearly 33 years old. Possible explanations for his death have included alcoholic liver disease and strychnine poisoning, but little data support either condition as the cause of his death. Alexander most likely died from malaria or typhoid fever, which were rampant in ancient Babylon. The description of his final illness from the royal diaries is consistent with typhoid fever or malaria but is most characteristic of typhoid fever. abstract_id: PUBMED:12751970 Alexander the Great's relationship with alcohol. Aims: This study sought to clarify if Alexander the Great indulged pathologically in alcohol and whether it contributed to his death. Design: The texts of the historians Diodorus of Sicily, Plutarch, Arrian, Curtius Rufus, Athenaeus, Aelian and Justin were studied, with their information concerning wine consumption by Macedonians, and especially Alexander, and were evaluated. The surviving historical texts, all later than Alexander's epoch, are based on a series of contemporary histories and especially on the 'Royal Journals', an official diary written in the imperial court. Findings And Conclusions: Alexander consumed large quantities of undiluted wine periodically, reaching pathological intoxication. 
However, the existing data do not provide convincing evidence that Alexander the Great manifested abuse of or dependence on alcohol according to DSM-IV or ICD-10 criteria, and it seems unlikely that alcohol was involved in his untimely death. abstract_id: PUBMED:14725285 Alexander the Great and West Nile virus encephalitis. Alexander the Great died in Babylon in 323 BC. His death at age 32 followed a 2-week febrile illness. Speculated causes of death have included poisoning, assassination, and a number of infectious diseases. One incident, mentioned by Plutarch but not considered by previous investigators, may shed light on the cause of Alexander's death. The incident, which occurred as he entered Babylon, involved a flock of ravens exhibiting unusual behavior and subsequently dying at his feet. The inexplicable behavior of the ravens is reminiscent of the avian illness and death observed weeks before the first human cases of West Nile virus infection were identified in the United States. We posit that Alexander may have died of West Nile virus encephalitis.
Methods: We used 2008 to 2017 death counts from the National Center for Health Statistics and American Community Survey population estimates to measure within-group change in life expectancy at age 25 among non-Hispanic Black and white males and females by educational attainment. We decomposed life expectancy change over time for each subgroup by 24 causes of death and measured their contribution to longevity change across 13 age groups. Results: Among persons with ≤ 12 years of education, white males and females experienced 1.3- and 1.7-year longevity declines respectively, compared to a 0.6-year decline among Black males and a 0.3-year decline among Black females. Life expectancy declined among all groups with 13-15 years of education, but especially Black females, who experienced a 2.2-year loss. With the exception of Black males, all groups with 16 + years of education experienced longevity gains. Homicide contributed 0.34 years to longevity decline among Black males with ≤ 12 years of education. Drug poisoning made large contributions to longevity losses among Black females with ≤ 12 years of education (0.31 years), white males and females with 13-15 years of education (0.35 and 0.21 years, respectively), and white males and females with ≤ 12 years of education (0.92 and 0.65 years, respectively). Conclusions: Public health efforts to reduce the risks of homicide among Black males without a college education and drug poisoning among all groups could improve life expectancy and reduce racial and educational longevity disparities in the Great Lakes region. 
abstract_id: PUBMED:4919830 Mozart's death as described by Alexander Pushkin N/A abstract_id: PUBMED:24466699 Brain death and transplantological issues in patients deceased due to poisonings The classical definition of human death, based on the determination of the definitive cessation of blood circulation, is still in use except in cases where the brain has died but the blood circulation has remained active for some time. In these cases, a "new definition of death", based on the determination of brain death, should be used. The diagnosis of brain death is made by performing clinical tests and ancillary investigations in order to confirm the patient's irreversible coma, lack of brainstem reflexes, lack of motor activity and apnea. Brain death results from massive damage of brain tissue caused by various pathological processes, including the direct and indirect effects of xenobiotic poisoning. Patients who have died due to poisoning can be donors of tissues and organs for transplantation. abstract_id: PUBMED:7875623 Death due to benzhexol toxicity. A rare case of death due to benzhexol toxicity is reported in a 48-year-old schizophrenic male with a resolving empyema and underlying patchy, mild bronchopneumonia. Toxicological analysis revealed the benzhexol blood and liver concentrations to be 0.12 mg/l and 0.5 mg/kg, respectively. Gastric contents contained 0.4 mg of benzhexol. Other drugs were not detected. It is suggested that for fatalities to occur following benzhexol intoxication, secondary contributory factors, which probably further alter the patient's conscious state, are necessary. abstract_id: PUBMED:28197261 Sudden Death Due to Giant Cell Myocarditis: A Case Report. The definition of sudden death is variable and there has been no single all-purpose definition.
Sudden death can be defined as sudden or unexpected death in an otherwise healthy individual who is not known to have been suffering from any dangerous disease, injury or poisoning and who is found dead or dies within 24 hours after the onset of the terminal illness. Some authorities limit the duration to 1 hour. Because of the sudden and unexpected nature of the death, an inquest is conducted in such cases to rule out foul play and ascertain the cause of death. The vast majority of cases are of cardiac origin, followed by respiratory, neurological, gastrointestinal and genitourinary causes. The most common cardiac disease to cause sudden death is ischemic heart disease as a result of coronary atherosclerosis. Coronary artery disease, cardiomyopathies and electrophysiologic abnormalities are the common causes of sudden cardiac death. We present a rare case of sudden death in a healthy adult male due to giant cell myocarditis. abstract_id: PUBMED:38484484 Sudden death due to butane abuse - An overview. Volatile Solvents Abuse (VSA) poses major health risks, especially for young people and those living in precarious socio-economic conditions. Such substances can bring about psychoactive effects such as euphoria, and can even lead to sudden death from cardiac arrhythmias, respiratory depression, myocardial infarction, laryngospasm, encephalopathy, and rhabdomyolysis. The present case report centers on a 23-year-old man who died in prison due to inhalation of a cooker gas mixture (n-butane, propane, and isobutane) inside a plastic bag. External examination and autopsy showed non-specific signs of asphyxia associated with edema and brain swelling. Histological signs of early myocardial damage and hypoxic-ischemic injury (HII) were highlighted in the brain and cerebellum, as well as activated macrophages and anthracotic-like material in the lungs. Toxicological investigations revealed the presence of propane, isobutane and n-butane in liquids and biological samples.
Besides the cardiotoxic effect, there was an asphyctic component due to the plastic bag that may have facilitated death. The assessment of cerebral HII and cardiopulmonary damage in acute cases is very important to prove death by butane inhalation. In the forensic field, it may be useful to shed more light on intoxications, deaths, and butane encephalopathies, as the latter can be mistaken for a hypoxic-ischemic encephalopathy. Answer: The death of Alexander the Great has been a subject of speculation and debate among historians and medical experts. Various theories have been proposed regarding the cause of his death, including natural causes and poisoning. One hypothesis suggests that Alexander the Great could have been poisoned, with Veratrum album being a plausible poison due to the symptoms Alexander reportedly experienced over the 12 days of his illness, which are consistent with Veratrum poisoning (PUBMED:24369045). Veratrum species contain alkaloids that could have caused the sudden onset of epigastric and substernal pain, nausea, vomiting, bradycardia, hypotension, and severe muscular weakness, all of which Alexander is said to have suffered from. However, other studies have proposed different causes for Alexander's death. One study argues that Alexander most likely died from malaria or typhoid fever, which were common in ancient Babylon. The symptoms described in the royal diaries are said to be consistent with typhoid fever or malaria, and particularly characteristic of typhoid fever (PUBMED:15081504). Another study examined Alexander's relationship with alcohol, concluding that although he consumed large quantities of undiluted wine periodically, there is no convincing evidence that he had an alcohol use disorder according to DSM-IV or ICD-10 criteria, or that alcohol was involved in his death (PUBMED:12751970). 
Additionally, a different perspective was offered by a study suggesting that Alexander could have died from West Nile virus encephalitis, citing an incident involving a flock of ravens exhibiting unusual behavior and dying, which could be reminiscent of avian illness related to West Nile virus (PUBMED:14725285). In conclusion, while poisoning with Veratrum album is a plausible hypothesis, there is no consensus among researchers, and other causes such as typhoid fever, malaria, or West Nile virus encephalitis have also been suggested as possible explanations for the death of Alexander the Great.
Instruction: Are Israeli women in a low risk group for developing squamous cell carcinoma of the uterine cervix? Abstracts: abstract_id: PUBMED:7604692 Are Israeli women in a low risk group for developing squamous cell carcinoma of the uterine cervix? Objective: Invasive cervical squamous cell carcinoma is not a common disease among Jewish women. In the present study an attempt was made to determine the distribution of the known risk factors in this population. Methods: Three hundred and ninety-one patients with various cervical cytological abnormalities comprised the study group. They were compared to 350 Israeli women with negative cytology. The risk factors were extracted from an anonymous questionnaire. The results were statistically analyzed and compared to the equivalent figures published in the literature pertaining to non-Jewish women. Results: The prevalence of abnormal cytology among the Israeli women was almost the same as that of the non-Jewish group: 24/1000 and 26/1000, respectively. Statistically significant differences were found in the following risk factors: number of sexual partners, age at first intercourse, and age at first pregnancy. Women in the study group were younger when they became sexually active and at first pregnancy; they had a larger number of sexual partners, fewer of them were married, and fewer of them were observant of Jewish ritual practices. In addition, they had achieved higher academic levels than the women in the control group. Conclusions: It was found that the distribution of the known risk factors among the study group was practically the same as that in the non-Jewish population. Moreover, it was found that there was an increase of 29.2% (from 17/1000 to 24/1000) in the prevalence of cervical premalignant lesions among the Israeli women. We believe that this population can no longer be considered as being at low risk for this disease.
abstract_id: PUBMED:657109 Adenocarcinoma of the uterine cervix in Jewish women: a distinct epidemiological entity. During an 11-year period, 532 Jewish women with invasive cancer of the uterine cervix were diagnosed in Israel. Of these, 441 (82.9%) had squamous cell carcinoma (SUC) and 52 (9.8%) had adenocarcinoma (AUC). Different characteristics were noted for these two histological categories with regard to age, ethnic distribution, and chronological time trends. The ratio between AUC and SUC differed considerably between the various ethnic groups, ranging from about 1:6 in the Asian and European born to less than 1:14 in the North African. This indicates that the relatively higher incidence of AUC in Israeli Jewish women is limited to those ethnic groups that have a low risk for cervical cancer. The different patterns of SUC and AUC in the population support the contention of a varying etiology. abstract_id: PUBMED:23400355 Cervical digital photography for screening of uterine cervix cancer and its precursor lesions in developing countries. Purpose: This study aims to evaluate and to compare the performance of cervical digital photography (CDP) to the visual inspection with acetic acid (VIA) and visual inspection with Lugol's iodine (VILI) methods for screening the uterine cervix cancer and its precursor lesions in developing countries. Methods: A cross-sectional study was performed in Brazil. 176 women were evaluated by VIA, VILI, CDP with acetic acid and CDP with Lugol's iodine. Kappa statistic was used to estimate the interobserver and intermethod agreement. Sensitivity, specificity and diagnostic accuracy of the four methods (VIA, VILI, CDP with acetic acid, CDP with Lugol's iodine) was calculated. Results: Interobserver agreement for CDP with acetic acid was K = 0.441 and for CDP with Lugol's iodine was K = 0.533; intermethod agreement of VIA and CDP with acetic acid, K = 0.559; and of VILI and CDP with Lugol's iodine, K = 0.507. 
Sensitivity and specificity of CDP with acetic acid were 84.00% and 95.83%, and of CDP with Lugol's iodine were 88.00% and 97.26%, respectively. The diagnostic accuracy of CDP with acetic acid and CDP with Lugol's iodine was 92.78% and 94.90%, respectively. Conclusion: This was the first study to assess the performance of CDP with Lugol's iodine, which performed similarly to CDP with acetic acid. CDP is considered a promising method for screening for uterine cervix cancer and its precursor lesions in developing countries. abstract_id: PUBMED:14572367 Contribution of high risk human papillomavirus testing to the management of premalignant and malignant lesions of the uterine cervix Background And Objective: High risk human papillomaviruses (HR-HPV) are consistently associated with premalignant and malignant lesions of the uterine cervix. Thus, the use of molecular techniques to detect HPV has been proposed to improve the results of conventional diagnostic strategies. In the present study, we evaluated the usefulness of the detection of HR-HPV in a cervical pathology unit. Patients And Method: 1005 women (mean age [SD], 38.4 [12.3]; range, 16-83) were referred for a cytology showing atypical squamous cells of undetermined significance (ASCUS), squamous intraepithelial lesion (SIL) or carcinoma in the six months prior to admission. All patients underwent a colposcopy, Pap test and/or histological study as well as HR-HPV detection using the Hybrid Capture II test. Results: HR-HPV was detected in 96% of high grade-SIL, 86% of carcinomas of the uterine cervix and 86% of low grade-SIL, but only in 25% of women with no cervical lesions and 0% of women with metastatic carcinomas to the cervix (p<0.001). The sensitivity of this test for high grade-SIL or higher was 90.2% and the negative predictive value was 96.5% (odds ratio=18.9; 95% confidence interval, 10.9-33.1).
In patients with ASCUS, a negative result for HR-HPV nearly excluded the presence of a cervical lesion (negative predictive value, 98.52%). Conclusions: HR-HPV detection using Hybrid Capture II is useful in the study of lesions of the uterine cervix. It displays high sensitivity for the diagnosis of squamous intraepithelial lesions and invasive carcinomas and is highly useful in the management of ASCUS cases. abstract_id: PUBMED:36761980 Human gastrin-releasing peptide receptor expression in women with uterine cervix cancer. Introduction: 212Pb-DOTAM-GRPR1 is a pharmaceutical radioimmunoconjugate consisting of the α-particle-emitting radionuclide lead-212 (212Pb), the metal chelator DOTAM (1,4,7,10-tetrakis(carbamoylmethyl)-1,4,7,10-tetraazacyclododecane), and a gastrin-releasing peptide receptor (GRPR)-targeted antagonist, currently being evaluated as therapy in uterine cervix and other cancer types. Previous studies have revealed that a variable proportion of uterine cervix cancer tumors overexpress the radiopharmaceutical target GRPR when assessed by cell proportion and staining intensity immunoreactive scores (IRS). Tumor response to 212Pb-DOTAM-GRPR1 strongly associates with GRPR overexpression, and therefore it seems reasonable to assess uterine cervix cancer GRPR immunoreactivity for greater insight into the feasibility of using 212Pb-DOTAM-GRPR1 as a radiopharmaceutical treatment. Methods: We examined a series of 33 uterine cervix cancer paraffin-embedded tumors in order to establish whether this tumor type overexpresses GRPR at an IRS score of 6 or higher, as 212Pb-DOTAM-GRPR1 is currently being evaluated in clinical trials against tumors showing such a level of expression. Results: The results show that five of five (100%) primary adenocarcinomas and 10 of 16 (63%) primary squamous cell tumors overexpress GRPR at an IRS score of 6 or higher.
Discussion: The frequency of overexpression in this study suggests that 212Pb-DOTAM-GRPR1 radiopharmaceutical treatment may be useful in the management of persistent, recurrent, or metastatic uterine cervix cancer patients. A phase I clinical trial involving patients with metastatic uterine cervix cancer is currently underway (NCT05283330). abstract_id: PUBMED:7851708 Risk of hematogenous metastasis of cancer of the uterine cervix Metastasis development in cervical cancer was analysed retrospectively to determine whether haematogenic metastases occurred with higher frequency in younger women than in older women. Based on clinical and morphologic data, parameters for the identification of a high-risk group for the development of haematogenic metastases were established. The data of 533 patients who underwent therapy at the University Women's Hospital Berlin-Charlottenburg from 1970 to 1984 were evaluated. 11.8% of these patients developed haematogenic metastases. No significant difference was found in the 5-year risk for development of haematogenic metastases between different age groups. Furthermore, no increase in the incidence of haematogenic metastases was found in women under 35 years in the course of time. The influence of pretreatment characteristics for the development of haematogenic metastases was assessed in 185 patients who underwent surgery for cervical cancer from 1979 to 1984. Univariate as well as multivariate regression analysis of histopathologic characteristics in the surgical specimens revealed blood vessel invasion, tumour grading, dissociated tumour growth, and number of mitoses as important parameters for the occurrence of haematogenic metastases. In this study group, patient age strongly influenced the occurrence of haematogenic metastases. 
In contrast, prognostic parameters for the development of local recurrence of the disease were derived from morphologic data which described the locoregional spread of the tumour (staging, tumour growth beyond the cervix, involvement of the corpus uteri, and lymph node involvement). (ABSTRACT TRUNCATED AT 250 WORDS) abstract_id: PUBMED:29239064 Case of rapidly progressing condylomatous squamous cell carcinoma of the uterine cervix associated with low-risk human papillomavirus type 6. Condylomatous, or warty, squamous cell carcinoma (SCC) of the uterine cervix is a rare variant of invasive SCC. Here, we describe a case of aggressive condylomatous SCC of the uterine cervix. A 43-year-old woman was monitored for 3 years for a suspected low-grade squamous intraepithelial lesion. A whitish papillary mass occupied the cervix, and the colposcopic diagnosis was condyloma acuminatum. A cervical biopsy revealed papillary proliferating thick squamous epithelium, which consisted of koilocytes and atypical cells with enlarged nuclei. Hysterectomy specimens showed a thick layer of atypical squamous epithelium with koilocytosis invading the stroma. Immunohistochemistry revealed negative p16 expression. Hysterectomy specimens tested positive for low-risk human papillomavirus type 6, but negative for other high-risk human papillomavirus types. The bilateral pelvic and para-aortic lymph nodes were positive for metastases. In spite of adjuvant chemotherapy, the case relapsed with multiple lymph node and lung metastases shortly after the operation. abstract_id: PUBMED:2786300 Epidemiology and clinical aspects of adenomatous and squamous epithelial cancers of the uterine cervix Comparative studies on the epidemiology and clinical features of adeno- and squamous-cell carcinoma of the uterine cervix were carried out by means of a retrospective investigation. 1,481 cervix cancer patients were treated from 1972-1983.
Among them were 64 adenocarcinoma patients who were compared with an equal number of squamous-cell cancer patients. Adenomatous cervical cancer was found not to have become more frequent during the last 20 years; at 4.3%, it accounts for only a comparatively small portion of cancers of the cervix uteri. There were no appreciable differences between the two cancer groups regarding such epidemiological features as average age, menarche, coitarche (first intercourse), number of pregnancies and deliveries, high-risk behaviour, and contraceptive habits. Chronic or frequently occurring cervicitis seems to be a common risk factor. For that reason it is of great importance to pay special attention to the diagnosis and therapy of inflammation of the uterine cervix. Although 36% of the women treated took oral contraceptives, a causal connection between the use of these preparations and the development of neoplasia could not be confirmed. Disease progression and healing tendencies have shown that adequate, stage-appropriate therapy is effective in both histological variants of cervical cancer. Adeno- and squamous-cell carcinoma do not differ essentially in epidemiological and clinical features. It is assumed that the pathological epithelial changes share similar carcinogeneses. abstract_id: PUBMED:11016150 Epidemiology of cancer of the cervix: global and national perspective. Cancer of the uterine cervix is one of the leading causes of cancer death among women worldwide. The estimated number of new cervical cancer cases per year is 500,000, of which 79% occur in the developing countries. Cervical cancer ranks first or second among cancers in women in the developing countries, whereas in the affluent countries it does not even rank among the top 5 leading cancers in women.
The truncated rate (TR) in the age group 35-64 years in Chennai, India, is even higher (99.1/100,000; 1982-95) than the rate reported from Cali, Colombia (77.4/100,000, 1987-91). The cervical cancer burden in India alone is estimated as 100,000 in 2001 AD. The differential pattern of cervical cancer and the wide variation in incidence are possibly related to environmental differences. Aetiologic associations and possible risk factors for cervical carcinoma have been extensively studied. The factors are: sexual and reproductive factors; socio-economic factors (education and income); viruses in cervical carcinogenesis, e.g., herpes simplex virus (HSV), human papillomavirus (HPV), and human immunodeficiency virus (HIV); and other factors such as smoking, diet, oral contraceptives, and hormones. The accumulated evidence suggests that cervical cancer is preventable and is highly suitable for primary prevention. Sexual hygiene, use of barrier contraceptives and ritual circumcision can undoubtedly reduce cervical cancer incidence. Education, cervical cancer screening of high-risk groups and improvement in socio-economic status can reduce cervical cancer morbidity and mortality significantly. abstract_id: PUBMED:12440828 Treating intraepithelial lesions of the uterine cervix by laser CO2. Evaluation of the past, appraisal for the future. Background And Objective: Carbon dioxide laser (CO2) has been widely used in the past for the treatment of squamous intraepithelial lesions (SIL) of the uterine cervix. We present our 10-year experience of using this modality while evaluating its current and future use. Materials And Methods: From 1988 to 1998, 3,078 women were treated for an intraepithelial lesion of the uterine cervix (SIL) by laser CO2 either by vaporization or conization. The procedure was performed on an outpatient basis and was well tolerated by the great majority of patients.
The mean age of the women treated by vaporization was 27.5 years, whereas that of those managed by conization was 34.8 years. Results: Of the 3,078 women, 750 (24.4%) underwent laser vaporization and the remaining 2,328 (75.6%) underwent conization of the cervix. Complications were minimal and consisted of intraoperative and postoperative bleeding (0.56%), pelvic infections (0.04%) and cervical stenosis (1.1%). Mean follow-up time was 83 months (range 24-142). Relapsing disease (either persistent or recurrent) was detected in 5.6% of the vaporization group and 3.9% of the conization group. Conclusions: The management of SIL of the uterine cervix by laser CO2 offers excellent success rates with minor complications. The preservation of the anatomical integrity of the cervical tissue allows better follow-up of these patients and the potential for repeat treatment. Although other treatment modalities are available, we believe that laser CO2 represents an excellent surgical tool for the management of intraepithelial lesions of the uterine cervix. Answer: No, Israeli women can no longer be considered as being in a low risk group for developing squamous cell carcinoma of the uterine cervix. A study comparing 391 patients with various cervical cytological abnormalities to 350 Israeli women with negative cytology found that the distribution of known risk factors among the study group was practically the same as that in the non-Jewish population. Additionally, there was an increase of 29.2% in the prevalence of cervical premalignant lesions among the Israeli women, indicating that the risk for this disease has increased (PUBMED:7604692).
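Several of the screening abstracts above (e.g., PUBMED:23400355 and PUBMED:14572367) report sensitivity, specificity, negative predictive value, and diagnostic accuracy. As a hedged illustration of how these measures derive from a 2x2 confusion matrix against a gold standard; the counts below are invented for demonstration and are not taken from any of the cited studies:

```python
def screening_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Standard screening-test measures from a 2x2 confusion matrix.

    tp: diseased and test-positive,  fn: diseased but test-negative,
    fp: healthy but test-positive,   tn: healthy and test-negative.
    """
    total = tp + fn + fp + tn
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / total,   # overall diagnostic accuracy
    }

# Invented counts: 45 true positives, 5 false negatives,
# 10 false positives, 90 true negatives.
m = screening_metrics(tp=45, fn=5, fp=10, tn=90)
print({k: round(v, 3) for k, v in m.items()})
```

Reported figures such as a sensitivity of 90.2% with a negative predictive value of 96.5% (PUBMED:14572367) come from exactly this kind of tabulation of test results against a histological reference.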
Instruction: Does long-term use of silver nanoparticles have persistent inhibitory effect on H. pylori based on Mongolian gerbil's model? Abstracts: abstract_id: PUBMED:24864246 Does long-term use of silver nanoparticles have persistent inhibitory effect on H. pylori based on Mongolian gerbil's model? Background: It is urgent to find alternative agents due to the increasing failure rate of Helicobacter pylori (H. pylori) eradication. The study surveyed the long-term effect of silver nanoparticles (AgNP) on H. pylori based on a Mongolian gerbil model. Materials And Methods: Fifty gerbils were randomly allocated to six groups (A-F). Group (Gr) A: the gerbils were fed with broth; Gr B and D: the gerbils were fed with AgNP/clay complex (0.1% of weight); Gr C and E: the gerbils were fed with AgNP/clay complex (1% of weight); and Gr D, E, and F: the gerbils were inoculated with H. pylori. At the 20th experimental week, the gerbils were sacrificed. Histology was evaluated according to the classification of the Sydney system. P < 0.05 was considered to be statistically significant. Results: The AgNP/clay complex had a more obvious inhibitory effect on H. pylori in vitro. There was a trend toward a stronger inhibitory effect on H. pylori growth at higher AgNP concentrations (P = 0.071). There were no significant differences in inflammation among groups D, E, and F (P = 0.688). Conclusion: AgNP/clay would be a potential and safe agent for inhibiting H. pylori. It should be helpful for eradication of H. pylori infection. abstract_id: PUBMED:27246040 The Mongolian Gerbil: A Robust Model of Helicobacter pylori-Induced Gastric Inflammation and Cancer. The Mongolian gerbil is an efficient, robust, and cost-effective rodent model that recapitulates many features of H. pylori-induced gastric inflammation and carcinogenesis in humans, allowing for targeted investigation of the bacterial determinants and environmental factors and, to a lesser degree, host constituents that govern H.
pylori-mediated disease. This chapter discusses means through which the Mongolian gerbil model has been used to define mechanisms of H. pylori-induced inflammation and cancer as well as the current materials and methods for utilizing this model of microbially induced disease. abstract_id: PUBMED:17717596 In vivo bactericidal activities of Japanese rice-fluid against H. pylori in a Mongolian gerbil model. Purpose: The antibiotic effect of rice-fluid on Helicobacter pylori infection was investigated using a Mongolian gerbil model. Methods: Gerbils were divided into four groups: H. pylori-infected, rice-fluid-treated animals (group A); H. pylori-infected, untreated animals (group B); uninfected, rice-fluid-treated animals (group C); and uninfected, untreated animals (group D). Group A and B animals were killed 14 weeks after H. pylori infection and group C and D animals were killed at the same age. The stomachs were examined for histology, 5'-bromo-2'-deoxyuridine (BrdU) labeling, and the bacterial burden. Serum anti-H. pylori antibody titers were also tested. Results: The positive incidence of H. pylori culture was 25% and 84% in groups A and B, respectively (p<0.01). Both the degree of inflammation and the BrdU labeling index in group A were significantly lower than those in group B. Conclusions: Rice-fluid showed an antibiotic effect on H. pylori and an anti-inflammatory effect on H. pylori-associated gastritis. abstract_id: PUBMED:18956630 Establishment of Mongolian gerbil model of long-term Helicobacter pylori infection Objective: To establish a model of long-term infection with Helicobacter pylori (Hp) in the Mongolian gerbil (Meriones unguiculatus), and to investigate whether Hp combined with N-methyl-N'-nitro-N-nitrosoguanidine (MNNG) has a synergistic effect in inducing gastric mucosa injury. To investigate the pathological changes of the gastric mucosa during long-term Hp infection in the Mongolian gerbil model.
Methods: 90 healthy male Mongolian gerbils were randomly divided into 4 groups: Hp group (n = 24) undergoing gastric perfusion of Hp suspension of the line NCTC11637 in brain-heart infusion (BHI) 10(8)-10(9) CFU/ml once a day for 10 days and then gastric perfusion of 1 ml normal saline (NS) once a day for 10 days from the 4th week after Hp perfusion; Hp + MNNG group (n = 24) undergoing gastric perfusion of Hp solution once a day for 10 days and then MNNG 1 ml (2 mg/ml) once a day for 10 days; MNNG group (n = 20) undergoing gastric perfusion of BHI once a day for 10 days and then gastric perfusion of MNNG once a day for 10 days from the 4th week after BHI perfusion; and control group (n = 22) undergoing gastric perfusion of BHI once a day for 10 days and then gastric perfusion of NS once a day for 10 days from the 4th week after the BHI perfusion. At 4 and 8 weeks, 1 gerbil from the control group and 2 gerbils each from the Hp and Hp + MNNG groups were killed to observe the pathological changes and Hp colonization by liquid-based urease test and Warthin-Starry silver staining. At 20 and 40 weeks after the Hp inoculation, 10 gerbils from each group were killed to observe the pathology of the gastric mucosa. Results: (1) A Mongolian gerbil model of long-term Hp infection was successfully established. (2) Hp induced the process progressing from normal gastric mucosa --> chronic atrophic gastritis --> intestinal metaplasia --> dysplasia. Until 40 weeks after Hp infection, the gastric mucosa of the control group remained normal. Twenty weeks after Hp infection, 3 gerbils in the Hp group and 1 gerbil in the Hp + MNNG group showed glandular atrophy and intestinal metaplasia, respectively, and 40 weeks after infection, glandular atrophy, intestinal metaplasia, and dysplasia of varying degrees were seen in the gastric mucosa of the three experimental groups. The pathological changes of the Hp + MNNG group were the most severe.
The incidence rates of precancerous lesions in the Hp + MNNG group were significantly higher than those in the other groups, but no gastric carcinoma was found in the experimental animals. Conclusion: Hp colonizes stably in the glandular gastric mucosa of Mongolian gerbils. The histological changes after infection are similar to those in Hp-infected humans. Hp and MNNG both cause injury to the gastric mucosa; acting synergistically, the two pathogenic agents attack the gastric mucosa and cause more severe injury. abstract_id: PUBMED:24558304 Protective effect of Korean Red Ginseng extract against Helicobacter pylori-induced gastric inflammation in Mongolian gerbils. Helicobacter pylori-induced gastric inflammation includes induction of the inflammatory mediators interleukin (IL)-8 and inducible nitric oxide synthase (iNOS), which are mediated by the oxidant-sensitive transcription factor NF-κB. High levels of lipid peroxide (LPO) and increased activity of myeloperoxidase (MPO), a biomarker of neutrophil infiltration, are observed in H. pylori-infected gastric mucosa. Panax ginseng Meyer, a Korean herb medicine, is widely used in Asian countries for its biological activities including anti-inflammatory efficacy. The present study aims to investigate whether Korean Red Ginseng extract (RGE) inhibits H. pylori-induced gastric inflammation in Mongolian gerbils. One wk after intragastric inoculation with H. pylori, Mongolian gerbils were fed with either the control diet or the diet containing RGE (200 mg RGE/gerbil) for 6 wk. The following were determined in gastric mucosa: the number of viable H. pylori in the stomach; MPO activity; LPO level; mRNA and protein levels of keratinocyte chemoattractant factor (KC, a rodent IL-8 homolog), IL-1β, and iNOS; protein level of phospho-IκBα (which reflects the activation of NF-κB); and histology. As a result, RGE suppressed H. pylori-induced mRNA and protein levels of KC, IL-1β, and iNOS in gastric mucosa. RGE also inhibited H.
pylori-induced phosphorylation of IκBα and increases in LPO level and MPO activity of gastric mucosa. RGE did not affect viable H. pylori colonization in the stomach, but improved the histological grade of infiltration of polymorphonuclear neutrophils, intestinal metaplasia, and hyperplasia. In conclusion, RGE inhibits H. pylori-induced gastric inflammation by suppressing induction of inflammatory mediators (KC, IL-1β, iNOS), MPO activity, and LPO level in H. pylori-infected gastric mucosa. abstract_id: PUBMED:29796347 Helicobacter pylori genetic diversification in the Mongolian gerbil model. Helicobacter pylori requires genetic agility to infect new hosts and establish long-term colonization of changing gastric environments. In this study, we analyzed H. pylori genetic adaptation in the Mongolian gerbil model. This model is of particular interest because H. pylori-infected gerbils develop a high level of gastric inflammation and often develop gastric adenocarcinoma or gastric ulceration. We analyzed the whole genome sequences of H. pylori strains cultured from experimentally infected gerbils, in comparison to the genome sequence of the input strain. The mean annualized single nucleotide polymorphism (SNP) rate per site was 1.5e-5, which is similar to the rates detected previously in H. pylori-infected humans. Many of the mutations occurred within or upstream of genes associated with iron-related functions (fur, tonB1, fecA2, fecA3, and frpB3) or encoding outer membrane proteins (alpA, oipA, fecA2, fecA3, frpB3 and cagY). Most of the SNPs within coding regions (86%) were non-synonymous mutations. Several deletion or insertion mutations led to disruption of open reading frames, suggesting that the corresponding gene products are not required or are deleterious during chronic H. pylori colonization of the gerbil stomach. 
Five variants (three SNPs and two deletions) were detected in isolates from multiple animals, which suggests that these mutations conferred a selective advantage. One of the mutations (FurR88H) detected in isolates from multiple animals was previously shown to confer increased resistance to oxidative stress, and we now show that this SNP also confers a survival advantage when H. pylori is co-cultured with neutrophils. Collectively, these analyses allow the identification of mutations that are positively selected during H. pylori colonization of the gerbil model. abstract_id: PUBMED:24349780 Long-Term Use of Probiotic-Containing Yogurts Is a Safe Way to Prevent Helicobacter pylori: Based on a Mongolian Gerbil's Model. Background. The suppression of Helicobacter pylori (H. pylori) decreases H. pylori-related diseases. Probiotics have an inhibitory effect on H. pylori. Aim. We investigated the effects of long-term use of yogurt on H. pylori based on a Mongolian gerbil model. Materials and Methods. Yogurt (containing a supplement of Lactobacillus acidophilus, Bifidobacterium lactis, etc.) was used. Forty-six gerbils were divided into five groups. All groups were inoculated with H. pylori for 5 to 8 weeks. The yogurt was given as follows: Group (Gr.) A: from 1st to 4th week; Gr. B: from 5th to 8th week; Gr. C: from 17th week to sacrifice; Gr. D: from 5th week to sacrifice. Gerbils were sacrificed on the 52nd week. Histology was evaluated according to the Sydney system. Results. The positive rates of H. pylori were 60% (Gr. A), 75% (Gr. B), 67% (Gr. C), 44% (Gr. D), and 100% (Gr. E). Gr. D showed a lower inflammatory score. Only Gr. E (60%) had intestinal metaplasia. Gr. D showed higher IL-10 and lower TNF-α expression than Gr. E. Conclusion. Long-term intake of yogurt could decrease H. pylori infection. The long-term use of yogurt would be an alternative strategy to manage H. pylori infection.
abstract_id: PUBMED:21432329 Effect of black tea aqueous non-dialysate on Helicobacter pylori infection in Mongolian gerbils. Objectives: Recently, the appearance of Helicobacter pylori (H. pylori) resistant to antibiotics has been reported. The development of an antibiotic therapy which would not induce resistant strains of H. pylori is anticipated. In the present study, the antibiotic effect of black tea aqueous non-dialysate (BTND), a fraction distinct from the tea catechins, on H. pylori was investigated using Mongolian gerbils infected with H. pylori. Methods: BTND was extracted from black tea leaves. A 0.1 w/v% solution of BTND or green tea catechins (GTC) was provided as drinking water to male NGS/Sea Mongolian gerbils infected with H. pylori (ATCC43504) for two weeks. Their stomachs were then excised, the mucosal surfaces were macroscopically observed, and colony forming units (CFU) of H. pylori were counted. The data were compared between the BTND and GTC groups. Results: The CFU of H. pylori were significantly decreased by intake of BTND. The body weight of the animals tended to be larger in the group supplied with BTND than in that supplied with GTC. Gastric mucosal injury tended to be smaller in the animals supplied with BTND than in those with GTC. Conclusions: These results suggest that BTND may have an inhibitory effect on H. pylori infection. abstract_id: PUBMED:25337555 Oxidative DNA Damage Response in Helicobacter pylori-Infected Mongolian Gerbils. Helicobacter pylori (H. pylori) induces DNA damage, which may be related to gastric cancer development. The DNA damage response coordinates DNA repair, cell-cycle transition, and apoptosis through activation of DNA damage response molecules. The damaged DNA is repaired through non-homologous end joining (NHEJ) or homologous recombination (HR).
In the present study, the changes of HR DNA repair proteins (ataxia-telangiectasia-mutated, ATM; ATM and Rad3-related, ATR), NHEJ repair proteins (Ku70/80), cell cycle regulators (Chk1, Chk2), and an apoptosis marker (p53/p-p53) were determined in H. pylori-infected Mongolian gerbils. In addition, the effect of the antioxidant N-acetylcysteine (NAC) on the H. pylori-induced DNA damage response was determined to assess the involvement of oxidative stress in DNA damage in the animals infected with H. pylori. One week after intragastric inoculation with H. pylori, Mongolian gerbils were fed a basal diet with or without 3% NAC for 6 weeks. After 6 weeks, the expression levels of DNA repair proteins (Ku70/80, ATM, ATR), cell cycle regulators (Chk1, Chk2) and the apoptosis marker (p-p53/p53) were increased in the gastric mucosa of Mongolian gerbils, which was suppressed by NAC treatment. In conclusion, oxidative stress mediates the H. pylori-induced DNA damage response, including the NHEJ and HR repair processes, cell cycle arrest and apoptosis, in the gastric mucosa of Mongolian gerbils.
Of the 17 genes, five encoded outer membrane proteins that potentially influence colonization and inflammation. Missense and nonsense mutations were observed in 15 and 6 loci, respectively. Multiple mutations were observed in three genes. Mutated genes included babA, tlpB, and gltS, which are known to be associated with adaptation to murine hosts. Other mutations involved a chemoreceptor, a pH regulator, and outer membrane proteins, which also have the potential to influence adaptation to the new host. Conclusions: We confirmed mutations in genes previously reported to be associated with adaptation to Mongolian gerbils. We also listed genes that mutated during infection of the gerbils, though experiments are needed to prove their influence on adaptation. Answer: Yes, the long-term use of silver nanoparticles (AgNP) has a persistent inhibitory effect on Helicobacter pylori (H. pylori) based on the Mongolian gerbil model. A study indicated that an AgNP/clay complex had a more obvious inhibitory effect on H. pylori in vitro, and higher concentrations of AgNP tended to have a stronger inhibitory effect on H. pylori growth. Although there were no significant differences in inflammation among the groups inoculated with H. pylori, the study concluded that AgNP/clay could be a potential and safe agent for inhibiting H. pylori, which could be helpful for the eradication of H. pylori infection (PUBMED:24864246).
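The genome-wide mutation analysis above (PUBMED:31558915) rests on comparing retrieved isolates against the TN2 wild-type reference and recording positions that differ. The authors used whole-genome NGS variant calling; as a hedged, minimal sketch on toy data (the sequences below are invented, not from the study), flagging SNPs and deletions between two pre-aligned sequences looks like this:

```python
# Minimal sketch of variant calling between two pre-aligned sequences.
# Toy illustration only: real analyses align NGS reads against a
# ~1.6 Mb reference genome; these 20-bp strings are invented.

def call_variants(reference: str, sample: str):
    """Return (position, ref_base, sample_base, kind) for each mismatch.

    Assumes equal-length, pre-aligned sequences; '-' in the sample
    marks a deletion relative to the reference.
    """
    variants = []
    for pos, (ref_base, alt_base) in enumerate(zip(reference, sample)):
        if ref_base != alt_base:
            kind = "del" if alt_base == "-" else "snp"
            variants.append((pos, ref_base, alt_base, kind))
    return variants

ref = "ATGCGTACGTTAGCCTAGGA"
sample = "ATGCGTACGTCAGC-TAGGA"
print(call_variants(ref, sample))
# [(10, 'T', 'C', 'snp'), (14, 'C', '-', 'del')]
```

A production pipeline would additionally handle insertions, sequencing errors, and read depth, none of which this toy scan attempts.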
Instruction: Is paper-based documentation in an emergency medical service adequate for retrospective scientific analysis? Abstracts: abstract_id: PUBMED:20713363 Is paper-based documentation in an emergency medical service adequate for retrospective scientific analysis? An evaluation of a physician-run service. Objective: To investigate if paper-based documentation in the authors' emergency medical service (EMS) satisfies scientific requirements. Methods: From 1 July 2007 to 28 February 2008, data from all paper-based protocols of a physician-run EMS in Aachen, Germany, were transferred to a SQL database (n=4815). Database queries were conducted after personal data had been anonymised. Documentation ratios of 11 individual parameters were analysed at two points in time (T1, scene; T2, arrival in emergency department). The calculability of the Mainz Emergency Evaluation Score (MEES, embracing seven vital parameters) was investigated. The calculability of the Revised Trauma Score (RTS) was also determined for all trauma patients (n=408). Fisher's exact test was used to compare differences in ratios at T1 versus T2. Results: The documentation ratios of vital parameters ranged from 99.33% (Glasgow Coma Scale, T1) to 40.31% (respiratory rate, T2). The calculability of the MEES was poor (all missions: 28.31%, T1; 22.40%, T2; p<0.001). In missions that required cardiopulmonary resuscitation (n=87), the MEES was calculable in 9.20% of patients at T1 and 29.89% at T2 (p<0.001). In trauma missions, the RTS was calculable in 37.26% at T1 and 27.70% at T2 (p=0.004). Conclusions: Documentation of vital parameters is carried out incompletely, and documentation of respiratory rate is particularly poor, making calculation of accepted emergency scores infeasible for a significant fraction of a given test population. The suitability of paper-based documentation is therefore limited. Electronic documentation that includes real-time plausibility checks might improve data quality. 
Further research is warranted. abstract_id: PUBMED:38180801 Documentation Completeness and Nurses' Perceptions of a Novel Electronic App for Medical Resuscitation in the Emergency Room: Mixed Methods Approach. Background: Complete documentation of critical care events in the accident and emergency department (AED) is essential. Due to the fast-paced and complex nature of resuscitation cases, missing data is a common issue during emergency situations. Objective: This study aimed to evaluate the impact of a tablet-based resuscitation record on documentation completeness during medical resuscitations and nurses' perceptions of the use of the tablet app. Methods: A mixed methods approach was adopted. To collect quantitative data, randomized retrospective reviews of paper-based resuscitation records before implementation of the tablet (Pre-App Paper; n=176), paper-based resuscitation records after implementation of the tablet (Post-App Paper; n=176), and electronic tablet-based resuscitation records (Post-App Electronic; n=176) using a documentation completeness checklist were conducted. The checklist was validated by 4 experts in the emergency medicine field. The content validity index (CVI) was calculated using the scale CVI (S-CVI). The universal agreement S-CVI was 0.822, and the average S-CVI was 0.939. The checklist consisted of the following 5 domains: basic information, vital signs, procedures, investigations, and medications. To collect qualitative data, nurses' perceptions of the app for electronic resuscitation documentation were obtained using individual interviews. Reporting of the qualitative data was guided by Consolidated Criteria for Reporting Qualitative Studies (COREQ) to enhance rigor. 
Results: A significantly higher documentation rate in all 5 domains (ie, basic information, vital signs, procedures, investigations, and medications) was present with Post-App Electronic than with Post-App Paper, but there were no significant differences in the 5 domains between Pre-App Paper and Post-App Paper. The qualitative analysis resulted in main categories of "advantages of tablet-based documentation of resuscitation records," "challenges with tablet-based documentation of resuscitation records," and "areas for improvement of tablet-based resuscitation records." Conclusions: This study demonstrated that higher documentation completion rates are achieved with electronic tablet-based resuscitation records than with traditional paper records. During the transition period, the nurse documenters faced general problems with resuscitation documentation such as multitasking and unique challenges such as software updates and a need to familiarize themselves with the app's layout. Automation should be considered during future app development to improve documentation and redistribute more time for patient care. Nurses should continue to provide feedback on the app's usability and functionality during app refinement to ensure a successful transition and future development of electronic documentation records. abstract_id: PUBMED:38284634 A retrospective observational study investigating safety netting documentation in the emergency department. Background: Safety netting is an important aspect of medical practice, especially for first contact clinicians. Adequate safety netting provision and documentation is vital for optimal patient care and medicolegal security. This is the first study to report safety netting documentation practices in the emergency department, a specialty with a variety of high-acuity presentations and which is particularly vulnerable to medicolegal issues. 
Methods: A retrospective observational study was conducted at University Hospital Wales (UHW) Emergency Department (ED) in the months January-February and August 2023. Randomized paper notes review was performed for adult patients who were discharged from the ED without admission to hospital. Patient characteristics, time of consultation, grade of emergency medicine practitioner (EMP) and presence/level of safety netting documentation was recorded. The statistical software STATA was used to identify any associations between these factors on safety netting practices. Results: A total of 500 notes were included. Safety netting advice was documented in 45% of consultations (n=224). This was 'basic' in 166 cases, 'general' in 32, 'specific' in 15 and both 'general and specific' in 11. Frequency of safety netting documentation was higher for ages < 65 years and consultations within working hours (09:00 - 17:00). Conclusion: This study shows that there is no 'universal' level of safety netting documentation amongst practicing EMPs. Although there is well-established guidance about when, what and how safety netting should be provided, there is little guidance in terms of documentation details. Further audit and quality improvement to embed safety netting documentation in ED clerking is required to further improve safety netting practices. abstract_id: PUBMED:17554515 Paper-assisted digital Mission documentation in air rescue services. Quality management in preclinical emergency medicine Background: Recording of adequate mission data is of utmost importance in prehospital emergency medicine. For this, a nationwide uniform core dataset for prehospital data reporting, the so-called MIND 2, was introduced. With this procedure adequate information about structure and outcome quality, but only little information about process quality, can be obtained. Regarding the quality of data recording, primarily computer-based techniques are superior to other techniques. 
Against this background, the aim of this study was to develop a documentation system that sets new standards regarding the documentation dataset and documentation quality. Results: A primarily computer-based documentation system based on "digital paper" technology was achieved. This technology allows conventional data entry via a (digital) pen and documentation on conventional paper. The core dataset MIND 2 was realized; furthermore, the measurements performed during prehospital management as well as data on vital signs (blood pressure, heart rate, SaO2, petCO2, etc.) were included in routine data recording. Integration of this documentation system into an already existing medical quality management system was achieved via a defined interface. Testing of this new system over a 3-month period at the helicopter emergency medical service (HEMS) "Christoph 22" showed a high degree of functionality and stability. Serious problems, in particular a total breakdown of the whole system, were not observed during the study period. Conclusions: The new data recording concept, which is based on "digital paper" technology, proved completely satisfactory with respect to functionality and documentation quality during the test period.
The AKTIN project ("National Emergency Department Register: Improvement of Health Services Research in Acute Medicine in Germany") has used the "German Emergency Department Medical Record" (GEDMR) published by the German Interdisciplinary Association of Intensive and Emergency Care as the documentation standard for its national data registry. Methods: Until March 2016 the documentation standard in ED was the pen-and-paper version of the GEDMR. In April 2016 we implemented the GEDMR in a timeline-based EDIS. Related to this, we compared the availability of structured treatment information of traumatological patients between pen-and-paper-based and electronic documentation, with special focus on the treatment time. Results: All 796 data fields of the 6 modules (basic data, severe trauma, patient surveillance, anesthesia, council, neurology) were adapted for use with the existing EDIS configuration by a physician working regularly in the ED. Electronic implementation increased availability of structured anamnesis and treatment information. However, treatment time was increased in electronic documentation both immediately (2:12 ± 0:04 h; n = 2907) and 6 months after implementation (2:18 ± 0:03 h; n = 4778) compared to the pen-and-paper group (1:43 ± 0:02 h; n = 2523; p < 0.001). Conclusions: We successfully implemented standardized documentation in an EDIS. The availability of structured treatment information was improved, but treatment time was also increased. Thus, further work is necessary to improve input time. abstract_id: PUBMED:33512892 Implementing Paper Documentation During an Influenza Surge in a Pediatric Emergency Department. Objective: We hypothesized that a paper documentation and discharge bundle can expedite patient care during an influenza-related surge. Methods: Retrospective cohort study of low-acuity patients younger than 21 years surging into a pediatric emergency department between January and March 2018 with influenza-like illness. 
Patient visits documented using a paper bundle were compared with those documented in the electronic medical record on the same date of visit. The primary outcome of interest was time from physician evaluation to discharge for patient visits documented using the paper bundle compared with those documented in the electronic medical record. Secondary outcome was difference in return visits within 72 hours. We identified patient and visit level factors associated with emergency department length of stay. Results: A total of 1591 patient visits were included, 1187 documented in the electronic health record and 404 documented using the paper bundle. Patient visits documented using the paper bundle had a 21% shortened median time from physician evaluation to discharge (41 minutes; interquartile range, 27-62.8 minutes) as compared with patient visits documented in the electronic health record (52 minutes; interquartile range, 35-61 minutes; P < 0.001). There was no difference in return visits (odds ratio, 0.7; 95% confidence interval, 0.2, 2.2). Conclusions: Implementation of paper charting during an influenza-related surge was associated with shorter physician to discharge times when compared with patient visits documented in the electronic health record. A paper bundle may improve patient throughput and decrease emergency department overcrowding during influenza or coronavirus disease-related surge. abstract_id: PUBMED:24996509 A comparison of paper documentation to electronic documentation for trauma resuscitations at a level I pediatric trauma center. Introduction: Although the electronic medical record reduces errors and improves patient safety, most emergency departments continue to use paper documentation for trauma resuscitations. The purpose of this study was to compare the completeness of paper documentation with that of electronic documentation for trauma resuscitations. 
Methods: The setting was a level I pediatric trauma center where 100% electronic documentation was achieved in August 2012. A random sample of trauma resuscitations documented by paper (n=200) was compared with a random sample of trauma resuscitations documented electronically (n=200) to identify the presence or absence of the documentation of 11 key data elements for each trauma resuscitation. Results: The electronic documentation more frequently captured 5 data elements: time of team activation (100% vs 85%, P<.00), primary assessment (94% vs 88%, P<.036), arrival time of attending physician (98% vs 93.5%, P<.026), intravenous fluid volume in the emergency department (94% vs 88%, P<.036), and disposition (100% vs 89.5%, P<.00). The paper documentation more often recorded one data element: volume of intravenous fluids administered prior to arrival (92.5% vs 100%, P<.00). No statistical difference in documentation rates was found for 5 data elements: vital signs, treatment by emergency medical personnel, arrival time in the emergency department, and level of trauma alert activation. Discussion: Electronic documentation produced superior records of pediatric trauma resuscitations compared with paper documentation. Because the electronic medical record improves patient safety, it should be adopted as the standard documentation method for all trauma resuscitations. abstract_id: PUBMED:36717771 Data quality assessment in emergency medical services: an objective approach. Background: In emergency medical services, high quality data are of great importance for patient care. Due to the unique nature of this type of services, the purpose of this study was to assess data quality in emergency medical services using an objective approach. Methods: This was a retrospective quantitative study conducted in 2019. The research sample included the emergency medical records of patients who referred to three emergency departments by the pre-hospital emergency care services (n = 384). 
Initially a checklist was designed based on the data elements of the triage form, pre-hospital emergency care form, and emergency medical records. Then, data completeness, accuracy and timeliness were assessed. Results: Data completeness in the triage form, pre-hospital emergency care form, and emergency medical records was 52.3%, 70% and 57.3%, respectively. Regarding data accuracy, most of the data elements were consistent. Measuring data timeliness showed that in some cases, paper-based ordering and computer-based data entry were not sequential. Conclusion: Data quality in emergency medical services was not satisfactory and there were some weaknesses in the documentation processes. The results of this study can prompt clinical and administrative staff to pay more attention to these weaknesses and to plan for data quality improvement. abstract_id: PUBMED:35316370 Quality of documentation and treatment in the non-physician staffed ambulance: a retrospective analysis of emergency protocols from the city of Aachen Background: Each year there are 7.3 million emergencies for the German rescue service, and the trend is rising; around 59% of emergency patients are treated by paramedics only. However, most studies focus on physicians, even though their practical skills at the scene are rarely necessary. Accordingly, the responsibility for the patient lies with the paramedics most of the time. Their duty is to execute life-saving measures, stabilize the patient for transport and regularly document the operation. Retrospectively, the emergencies can only be analyzed based on the emergency protocols, which are mostly paper-based and handwritten. That increases the effort of evaluation, which makes nationwide studies hardly feasible. As of now there are only limited data on the quality of healthcare and documentation by paramedics. Both were analyzed in this survey based on the emergency protocols.
Method: A retrospective analysis of emergency protocols from June to July 2018 took place in Aachen, a major German city. A specific feature of Aachen is an emergency physician available 24 h a day via telemedicine. The quality of documentation and healthcare was analyzed by including standard operating procedures. Primary endpoints were the frequency of documentation, the achievement of complete documentation, the correct indications for a physician, the development of critical vital signs and the average on-scene time of the ambulance. Results: Overall, 1935 protocols were analyzed. Complete documentation was achieved for 1323 (68.4%) suspected diagnoses, 456 (23.6%) anamneses, 350 (18.1%) initial vital signs and 52 (2.7%) sets of vital signs at handover. Based on the documentation, there were 531 cases (27%) of patients treated by paramedics only, even though a physician would have been indicated. In those patients, 410 critical initial vital signs were documented, of which 69 (16.8%) improved, while vital signs at handover were not documented in 217 (52.9%). Also, on-scene time was significantly prolonged for patients with belated indications for an emergency physician, at 15:02 min compared with 13:05 min for patients without such indications. Conclusion: Deficient documentation was found in multiple cases and several vital signs important for a complete differential diagnosis were missing. Furthermore, a quarter of all patients might have benefited from an emergency physician, as they were taken to hospital with no or insufficient treatment, despite standard operating procedures. From a forensic point of view there is an alarmingly incomplete documentation of vital signs at handover. The on-scene time in general was within the predetermined time frame, but can still be reduced in different scenarios.
Overall, we recommend strict adherence to the standard operating procedures and algorithms, removal of unnecessary documentation and implementation of structured quality assurance. Moreover, the quality of treatment might benefit from the rising number of more specialized paramedics and an increasing use of telemedicine. abstract_id: PUBMED:25782779 Implementation of a new emergency room protocol at a University Medical Center in Germany: basis for improved flow of information, adequate quality management and scientific assessment Background: After analyzing the existing documentation protocol for the emergency room (ER), the department of anesthesiology of the Medical University of Göttingen (UMG) developed a new department-specific ER protocol. Aim: The objective was to improve the flow of patient information from the preclinical situation through the emergency room to the early inpatient period. With this in mind a new emergency protocol was developed that encompasses the very heterogeneous patient population in the ER as well as forming a basis for quality management and scientific investigation, taking user friendliness and efficiency into consideration. Material And Methods: The strategic development of a new emergency room protocol is presented, which was realized using a self-developed 8-step approach. Technical support and realization were carried out using the Scribus 1.4.2 open-source desktop publishing program and the GIMP 2.8.4 GNU image manipulation program. Results: The new emergency room protocol was developed based on scientific knowledge and defined targets. The following 13 sections represent the contents of the new protocol: general characteristics, emergency event, initial findings and interventions, vital parameters, injury pattern, vascular access, hemodynamics, hemogram/blood gas analysis (BGA), coagulopathy, diagnostics, emergency interventions, termination of ER treatment and final evaluation.
Conclusion: The structured and elaborated documentation was kept to the target length of two pages and succeeds in incorporating trauma patients as well as non-trauma patients in the ER.
Overall, the evidence suggests that while paper-based documentation may have some situational uses, it is generally inadequate for retrospective scientific analysis due to incomplete and inconsistent data capture. Electronic documentation systems, which offer better data completeness and accuracy, are preferred for scientific analysis and quality management in emergency medical services (PUBMED:20713363, PUBMED:24996509, PUBMED:25782779).
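A methodological footnote: the documentation-ratio comparisons in PUBMED:20713363 were tested with Fisher's exact test. As a hedged sketch of that calculation, the two-sided p-value for a 2x2 table can be computed directly from hypergeometric probabilities. The counts below are back-calculated from the reported RTS percentages (37.26% and 27.70% of 408 trauma patients) and are therefore approximate, not the authors' raw data:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def p_table(x):
        # probability of x in the top-left cell, with all margins fixed
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    probs = [p_table(x) for x in range(lo, hi + 1)]
    # small tolerance guards against floating-point ties
    return sum(p for p in probs if p <= p_obs * (1 + 1e-9))

# RTS calculable at T1 vs T2 (approximated from the reported
# percentages of 408 trauma patients: 37.26% -> 152, 27.70% -> 113).
p = fisher_exact_two_sided(152, 408 - 152, 113, 408 - 113)
print(f"p = {p:.4f}")  # close to the reported p = 0.004
```

In practice, scipy.stats.fisher_exact performs the same computation and should be preferred.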
Instruction: Can health workers reliably assess their own work? Abstracts: abstract_id: PUBMED:37492558 Work Time Control and Job Crafting as Predictors of Work Method Control Among Health Workers. Introduction: The dilapidated Nigerian health sector is confronted with unprecedented work method control challenges caused by the ongoing COVID-19 crisis, which has changed the world of work, health, and well-being attainment. The COVID-19 pandemic has challenged the work method control employed by the Nigerian health sector as it seeks to mobilize the resources left over from embezzlement that are required to accomplish desired tasks and, as a result, has altered work methods, job descriptions, job characteristics, and the demands of attaining organizational and personal goals. Objective: Based on Job Demands-Resources Theory (JD-R) and Self-Determination Theory (SDT), the researchers' principal objective was to investigate the moderating role of job crafting in the relationship between work time control and work method control among Nigerian health workers during the COVID-19 pandemic. Methods: The participants for this study comprised 220 health employees sampled through purposive and convenience sampling. Three instruments, Breaugh's Work Autonomy Scale, the Work Time Control Scale, and the Job Crafting Questionnaire, were used for data collection, and hierarchical multiple regression was employed for data analysis. Results: The results of this study showed that work time control and job crafting were positively associated with the work method control of Nigerian health workers. Job crafting moderated the relationship between work time control and work method control. This research deepened knowledge of work method control while integrating work time control and job crafting with other health and work challenges of health workers in the face of the COVID-19 pandemic.
Second, COVID-19 has turned neglected Nigerian health workers into heroes as their work time control and job crafting helped manage the pandemic despite the poor infrastructure, corruption, bigotry, and ethnocentrism. Conclusion: During pandemics, the work method must be understood and used with work time and job crafting to improve patient recovery, health workers' well-being, the nation, and the universe. abstract_id: PUBMED:24550196 Shift work and burnout among health care workers. Background: Burnout, defined as a syndrome derived from prolonged exposure to stressors at work, is often seen in health care workers. Shift work is considered one of the occupational risks for burnout in health care workers. Aims: To identify and describe the association between shift work and burnout among health care workers. Methods: A cross-sectional study of health care workers in Chiang Mai University Hospital, Thailand. Data were collected via an online self-answered questionnaire and included details of shift work and burnout. Burnout was measured by the Maslach Burnout Inventory (MBI). Results: Two thousand seven hundred and seventy two health care workers participated, a 52% response rate. Burnout was found more frequently among shift workers than those who did not work shifts (adjusted odds ratio [aOR] 1.4, 95% confidence interval [CI]: 1.0-1.9). Among shift workers, over 10 years of being a shift worker was associated with increasing burnout (aOR 1.7, 95% CI: 1.2-2.6) and having 6-8 sleeping hours per day was associated with having less burnout (aOR 0.7, 95% CI: 0.5-0.9). Nurses who had at least 8 days off per month had lower odds of burnout compared with those with fewer than 8 days off (aOR 0.6, 95% CI: 0.5-0.8). Conclusions: Shift work was associated with burnout in this sample. Increased years of work as a shift worker were associated with more frequent burnout. Adequate sleeping hours and days off were found to be possible protective factors. 
Policies on shift work should take into account the potential of such work for contributing towards increasing burnout. abstract_id: PUBMED:29699013 Investigation and analysis of shift work among female workers and its impact on reproductive health Objective: To investigate the distribution of shift work among female workers in different industries and the relationship between shift work and reproductive health, and to provide a reference for the labor protection of female workers. Methods: From June to September 2016, a cluster sampling questionnaire survey was performed among female workers from 11 industries including electronics, medicine and health, and pharmacy. The survey covered the general information, shift-work information, reproductive health and childbearing history of these female workers. Results: A total of 63 711 usable questionnaires were collected, resulting in a response rate of 96.94%. A total of 13 546 workers worked in shifts, accounting for 21.26%; the highest proportions were in the medical industry (30.61%), metallurgy (30.81%) and petrochemical engineering (26.78%). Compared with the workers who did not work in shifts, those who worked in shifts had significantly higher rates of abnormal menstruation, reproductive system infection among married workers, and infertility (χ(2)=19.108, 10.673, 21.510, P<0.05); compared with the workers who did not work night shifts, those who worked night shifts had significantly higher rates of abnormal menstruation, reproductive system infection among married workers, and infertility (χ(2)=140.043, 71.901, 29.024, P<0.01). Conclusion: The highest rates of shift work were in the medical, metallurgy and petrochemical engineering industries. Workers who work in shifts have serious reproductive health issues; the occurrence of abnormal menstruation, reproductive system infection and infertility may be associated with shift work.
abstract_id: PUBMED:29351505 Helping, mediating, and gaining recognition: The everyday identity work of Romanian health social workers. Health social work is a field with challenges, opportunities, and ways of professing social work that may vary between different national contexts. In this article, we look at how Romanian health social workers construct their professional identity through their everyday identity work. Drawing on a qualitative study based on interviews with 21 health social workers working in various organizational contexts, we analyze what health social workers say they do and how this shapes their self-conception as professionals. Four main themes emerged from participants' descriptions: being a helping professional, being a mediator, gaining recognition, and contending with limits. Through these themes, participants articulated the everyday struggles and satisfactions specific to working as recently recognized professionals in Romanian health and welfare systems not always supportive of their work. abstract_id: PUBMED:34056663 Factors associated with work performance and mental health of healthcare workers during pandemics: a systematic review and meta-analysis. Introduction: Healthcare workers' work performance and mental health are associated with positive mental health outcomes and directly related to increased productivity and decreased disability costs. Methods: We conducted a systematic review to identify factors associated with work performance of healthcare workers during a pandemic and conducted a meta-analysis of the prevalence of mental health outcomes in this context. Primary papers were collected and analysed using the Population/Intervention/Comparison/Outcome framework and using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. To critically appraise the studies included in the review, we used the AXIS appraisal tool to assess each cross-sectional study's quality.
Results: The study identified nine factors associated with the work performance and mental health of healthcare workers, including experiencing feelings of depression, anxiety, having inadequate support, experiencing occupational stress, decreased productivity, lack of workplace preparedness, financial concerns associated with changes in income and daily living, fear of transmission and burnout/fatigue. Conclusion: There is a rapidly rising need to address the work performance and mental health of healthcare workers providing timely care to patients. Regular and sustained interventions, including the use of information and communication technologies such as telehealth, are warranted. abstract_id: PUBMED:35379031 Funeral Industry Workers' Work Health and Safety in Australia and Ireland. Funeral workers (FWs) work within increasingly medicalized and commodified death-management systems. This study explored Worker Health and Safety (WHS) impacts in contemporary death management on Australian and Irish FWs. Mixed methods combined a survey and interviews. Survey data were descriptively summarized, interviews thematically analyzed, and a schematic systems model developed of the combined results. Survey participants (n = 45) reported psychosocial hazards from work pressures, competition, and fatigue. Psychosocial hazards were more frequently reported than physical hazards by Australian FWs. Physical hazards were of greater concern to Irish FWs. Themes from 11 interviews were: Personal Attributes, Work Demands, and Socio-Cultural Context. All FWs reported conflicts between individual capacities, work demands, and resources, resulting in hazardous personal states including difficulty sleeping and stress. Respectfully manually handling human remains and "event management" demands for increasingly elaborate funerals created negative WHS impacts. This research informs risk management for FWs and other workers in the increasingly complex death-care industry.
abstract_id: PUBMED:28194080 Estimation of work capacity and work ability among plantation workers in South India. Background: Work capacity is the ability to perform real physical work, and work ability is the result of the interaction between a worker and his or her work: how good a worker is at present and in the near future, and how able he or she is to do the work given its demands and the worker's health and mental resources. Objective: To assess the work capacity and work ability and to study the factors associated with work capacity and work ability of workers at a tea plantation in South India. Materials And Methods: A cross-sectional study was conducted at a tea plantation in Annamalai, South India, from March to May 2015. Data were collected using a structured interview schedule comprising three parts: sociodemographic data, work ability questionnaire, and work capacity assessment. Results: Of the 199 subjects who participated in the study, the majority [90 (45.3%)] were in the age group of 46-55 years, and 128 (64.3%) were females. Of the 199 workers, 12.6% had poor aerobic capacity (by Harvard Step test), 88.4% had an endurance of more than 1 h, 70.9% had better work productivity and energetic efficiency, and in voluntary activity workers spent most of their time on household chores. Of the 199 workers assessed, only 9.6% had good work ability. There was a negative correlation between work ability and body mass index (BMI). Conclusion: Our study found 12.6% of workers with poor aerobic capacity and 9.6% of workers with good work ability. Periodic health examinations and other screening procedures should be made routine in the workplace to improve work ability and capacity. abstract_id: PUBMED:20408358 Community health workers and their value to social work. Community health workers (CHWs) play a vital and unique role in linking diverse and underserved populations to health and social service systems.
Despite their effectiveness, as documented by empirical studies across various disciplines including public health, nursing, and biomedicine, the value and potential role of CHWs have been largely absent from the social work practice and research literature. Thus, this article introduces social workers to CHWs, their role in promoting culturally appropriate practice, and their utility in collaboration with social workers in community settings. This integrative review also discusses current challenges identified by the CHW literature, including potential barriers to the expansion of CHW programs, as well as issues of training, certification, and sustainability. The review also discusses the close alignment of CHWs with social work values and principles of social justice, suggesting opportunities for enhanced social work practice and research. abstract_id: PUBMED:33120206 Rethinking the relationship between sex work, mental health and stigma: a qualitative study of sex workers in Australia. Aims: Sex workers may experience stigma related both to their occupation and to mental health issues that they face. There is limited research on the lived experience of sex workers managing mental health and stigma. This study examined the experiences of sex workers in Australia in relation to stigma surrounding sex work, and sex workers' mental health, including self-management and experiences of accessing mental health services. Method: Six focus groups and two interviews were conducted with 31 sex workers. Data collection was co-moderated by a sex worker and a university-based researcher. Analysis was informed by an approach which positioned sex workers as agential and capable, and which drew attention to structural aspects of stigma. Results: Sex workers identified that the stigma surrounding their profession had a significant impact on their mental health. The need to manage risks through selective disclosure of sex work was a pervasive experience.
Management of mental health and the stigma associated with sex work was described as a responsibility primarily of the individual through self-care activities and occasional access to mental health services. Participants reported poor treatment from mental health practitioners who saw sex workers as victims lacking agency, imposed beliefs that sex work was the pathological root cause of mental health issues, or approached the issue with fascination or voyeurism. Other presenting issues (especially mental health) were lost or obscured in therapeutic encounters, resulting in suboptimal care. Conclusion: The threat of stigma is pervasive and has mental health implications for sex workers. Our findings point to the need for increased training and capacity development for mental health practitioners, funding for peer support services to ameliorate internalised stigma, and action from governments to introduce enabling legal environments, stigma reduction programs and structural protections from sex work stigma. abstract_id: PUBMED:29743782 Effect of Shift Work on Sleep, Health, and Quality of Life of Health-care Workers. Background: Shift work is associated with sleep disruption and impaired quality of life, and is a risk factor for several health conditions. The aim of this study was to investigate the impact of shift work on sleep and quality of life of health-care workers (HCWs). Settings: Tertiary University hospital in Greece. Study Design: Cross-sectional study. Materials And Methods: Included were HCWs working either in an irregular shift system or exclusively in morning shifts. All participants answered the WHO-5 Well-Being Index (WHO-5) and a questionnaire on demographics and medical history. Shift workers filled the Shift Work Disorders Screening Questionnaire (SWDSQ).
Statistical Analysis: Descriptive statistics, Student's t-test, one-way analysis of variance (ANOVA), Pearson's r correlation coefficient, and multivariate stepwise linear regression analysis were applied. Results: Included were 312 employees (87.9% females), 194 working in an irregular shift system and 118 in morning shifts. Most shift workers (58.2%) were somewhat or totally dissatisfied with their sleep quality. Regression analysis revealed the following independent determinants for sleep impairment: parenthood (P < 0.001), age 36-45 years (P < 0.001), >3 night shifts/week (P < 0.001), work >5 years in an irregular shift system (P < 0.001). Diabetes mellitus was the most common medical condition reported by shift workers (P = 0.008). Comparison between the two groups revealed a significant impairment in WHO-5 total score, as well as in 4 of 5 of its items (P < 0.001). Conclusion: Shift work impairs quality of life, whereas its duration and frequency, along with age and family status of employees, can have adverse effects on sleep. Answer: The abstracts provided offer insights into various factors that affect the work of health workers, including work time control, job crafting, shift work, mental health, and work safety. However, they do not directly address the reliability of health workers in assessing their own work. For instance, the study on Nigerian health workers during the COVID-19 pandemic found that work time control and job crafting were positively associated with work method control (PUBMED:37492558). This suggests that health workers who have greater control over their work time and engage in job crafting may be better equipped to manage their work methods effectively. However, this does not necessarily imply that they can reliably assess the quality or outcomes of their own work.
Similarly, the study on shift work and burnout among health care workers in Thailand indicates that shift work is associated with burnout, and certain factors like adequate sleep and days off can be protective (PUBMED:24550196). While this study highlights the challenges faced by health workers, it does not provide evidence on their ability to self-assess their work performance. The research on Romanian health social workers discusses how they construct their professional identity and cope with the challenges of their work environment (PUBMED:29351505). Again, this does not directly address the question of self-assessment reliability. The systematic review and meta-analysis on factors affecting healthcare workers' work performance and mental health during pandemics (PUBMED:34056663) identifies several factors that impact these outcomes, but it does not comment on the workers' self-assessment capabilities. In conclusion, while the abstracts provide valuable information on the factors influencing health workers' work environment and well-being, they do not provide evidence on the reliability of health workers in assessing their own work. Therefore, based on the provided abstracts, it cannot be determined whether health workers can reliably assess their own work.
Instruction: Can tissue oxygenation index (TOI) and cotside neurophysiological variables predict outcome in depressed/asphyxiated newborn infants? Abstracts: abstract_id: PUBMED:17052867 Can tissue oxygenation index (TOI) and cotside neurophysiological variables predict outcome in depressed/asphyxiated newborn infants? Background: Diagnostic tools of birth asphyxia provide only an uncertain prediction of neurological outcome. Aims: To assess whether TOI and DeltaCBV, combined with a set of biochemical and neurophysiological variables, have any diagnostic and prognostic value in birth depression or asphyxia. Study Design: Case control study at the nursery and NICU of the Padova University Children's Hospital. Subjects: 22 term neonates with an Apgar score ≤6 at 5', a 1-h umbilical artery pH value ≤7.25 with an increased base deficit and a gestational age ≥36 weeks; 15 healthy term infants with an Apgar score ≥9 at 5'. Outcome Measures: Troponin I and NIRS measurements (TOI and DeltaCBV) were assessed in both groups. Blood gases, neurological evaluation, US, NIRS, EEG and SEP were evaluated in the infants with depression or asphyxia. Results: Troponin I was higher in the study group than in controls (p=0.04), showing a correlation with base excess values. In the depressed/asphyxiated neonates with an abnormal outcome at 1 year, TOI rose to 80.1% vs 66.4% in controls (p=0.04) and 74.7% in infants with a normal 1-year outcome. A multiple regression model showed a significant multiple correlation coefficient, R=0.79, p<0.001, where the predictive variables significantly associated with outcome were SEP and BE. Conclusions: Troponin I is a useful short-term index of birth asphyxia or perinatal depression. An increased TOI suggests a risk of abnormal neurological outcome at 1 year. Among the cotside variables, BE and evoked potential abnormalities were the best predictors of abnormal outcome in this study.
abstract_id: PUBMED:19373456 Tissue oxygenation index measured using spatially resolved spectroscopy correlates with changes in cerebral blood flow in newborn lambs. Background/objective: Abnormal cerebral haemodynamics in very preterm infants undergoing neonatal intensive care have been associated with adverse outcome, but cerebral blood flow (CBF) is difficult to assess at the cotside. Spatially resolved spectroscopy (SRS) continuously measures cerebral tissue oxygen saturation expressed as tissue oxygenation index (TOI, %), and TOI would reflect changes in CBF with constant cerebral metabolic demand. We aimed to evaluate the relationship between simultaneous measurements of CBF (Transonic System) and TOI (Hamamatsu NIRO-200) in newborn lambs (n = 8). We hypothesised that alterations in CBF of different magnitudes and frequencies would be reflected as changes of TOI in both time domain and frequency domain analyses. Methods: A silicon cuff positioned around the common brachiocephalic artery was inflated to induce 10-30% reductions in cerebral perfusion pressure so as to produce mild-to-moderate variations of CBF. Relationships between changes in CBF and TOI were evaluated in the time domain (Pearson correlation) and frequency domain (Coherence). Results: In time domain analysis, there was a significant correlation between DeltaTOI (%) and DeltaCBF (%) (R2 = 0.69, P < 0.001). In frequency domain analysis, CBF-TOI coherence was ≥0.5 at frequencies below 0.1 Hz, but <0.5 at higher frequencies. Conclusions: We conclude that overall TOI changes are concordant with CBF variations, when arterial oxygen saturation and cerebral oxygen consumption are constant. While TOI reflects CBF, it is more sensitive to variations of CBF of low frequency (<0.1 Hz) than to more rapid, higher frequency changes. abstract_id: PUBMED:22082686 Early predictors of short term neurodevelopmental outcome in asphyxiated cooled infants.
A combined brain amplitude integrated electroencephalography and near infrared spectroscopy study. Background: Brain Cooling (BC) represents the elective treatment in asphyxiated newborns. Amplitude Integrated Electroencephalography (aEEG) and Near Infrared Spectroscopy (NIRS) monitoring may help to evaluate changes in cerebral electrical activity and cerebral hemodynamics during hypothermia. Objectives: To evaluate the prognostic value of aEEG time course and NIRS data in asphyxiated cooled infants. Methods: Twelve term neonates admitted to our NICU with moderate-severe Hypoxic-Ischemic Encephalopathy (HIE) underwent selective BC. aEEG and NIRS monitoring were started as soon as possible and maintained during the whole hypothermic treatment. Follow-up was scheduled at regular intervals; adverse outcome was defined as death, cerebral palsy (CP) or global quotient <88.7 at Griffiths' Scale. Results: 2/12 infants died, 2 developed CP, 1 was normal at 6 months of age and then lost to follow-up, and 7 showed a normal outcome at least at 1 year of age. The aEEG background pattern at 24 h of life was abnormal in 10 newborns; only 4 of them developed an adverse outcome, whereas the 2 infants with a normal aEEG developed normally. In infants with adverse outcome NIRS showed a higher Tissue Oxygenation Index (TOI) than those with normal outcome (80.0±10.5% vs 66.9±7.0%, p=0.057; 79.7±9.4% vs 67.1±7.9%, p=0.034; 80.2±8.8% vs 71.6±5.9%, p=0.069 at 6, 12 and 24 h of life, respectively). Conclusions: The aEEG background pattern at 24 h of life loses its positive predictive value after BC implementation; TOI could be useful to identify early those infants who may benefit from other innovative therapies.
We assessed cerebrovascular reactivity in preterm infants by investigating the relationship between NIRS- and TCD-derived indices and correlating them with severity of clinical illness. Methods: We recorded the NIRS-derived cerebral tissue oxygenation index (TOI) and TCD-derived flow velocity (Fv), along with other physiological variables. Moving correlation coefficients between measurements of cerebral perfusion (TOI, Fv) and heart rate were calculated. We presumed that positivity of these correlation coefficients - tissue oxygenation heart rate reactivity index (TOHRx) and flow velocity heart rate reactivity index (FvHRx) - would indicate a direct relationship between cerebral perfusion and cardiac output representing impaired cerebrovascular autoregulation. Results: We studied 31 preterm infants at a median age of 2 days, born at a median gestational age of 26 + 1 weeks. TOHRx was significantly correlated with gestational age (R = -0.57, p = 0.007), birth weight (R = -0.58, p = 0.006) and the Clinical Risk Index for Babies II (R = 0.55, p = 0.0014). TOHRx and FvHRx were significantly correlated (R = 0.39, p = 0.028). Conclusion: Heart rate has a key influence on cerebral haemodynamics in preterm infants, and TOHRx may be of diagnostic value in identifying impaired cerebrovascular reactivity leading to adverse clinical outcome. abstract_id: PUBMED:12390989 Cerebral tissue oxygenation index in very premature infants. Aim: To describe normal values of the cerebral tissue oxygenation index (TOI) in premature infants. Methods: TOI was measured by spatially resolved spectroscopy in preterm infants on the first 3 days of life. Infants with an abnormal cranial ultrasound were excluded. Other simultaneously measured variables were PaO(2), PaCO(2), pH, mean arterial blood pressure, heart rate, haemoglobin, glycaemia, and peripheral oxygen saturation. Results: Fifteen patients with a median postmenstrual age of 28 weeks were measured. 
There was a significant increase in median TOI over the first 3 days of life: 57% on day 1, 66.1% on day 2, and 76.1% on day 3. Multiple regression analysis showed no correlation between TOI and postmenstrual age, peripheral oxygen saturation, mean arterial blood pressure, PaO2, PaCO2, and haemoglobin concentration. Conclusion: Cerebral TOI increases significantly in the first 3 days of life in premature babies. This increase probably reflects the increase in cerebral blood flow at this time. abstract_id: PUBMED:30297295 Effects of Prone Sleeping on Cerebral Oxygenation in Preterm Infants. Objective: To determine the effect of prone sleeping on cerebral oxygenation in preterm infants in the neonatal intensive care unit. Study Design: Preterm infants, divided into extremely preterm (gestational age 24-28 weeks; n = 23) and very preterm (gestational age 29-34 weeks; n = 33) groups, were studied weekly until discharge in prone and supine positions during active and quiet sleep. Cerebral tissue oxygenation index (TOI) and arterial oxygen saturation (SaO2) were recorded. Cerebral fractional tissue extraction (CFOE) was calculated as CFOE = (SaO2 - TOI)/SaO2. Results: In extremely preterm infants, CFOE increased modestly in the prone position in both sleep states at age 1 week, with no change in TOI despite higher SaO2. In contrast, the very preterm infants did not have position-related differences in CFOE until the fifth week of life. In the very preterm infants, TOI decreased and CFOE increased with active sleep compared with quiet sleep and with increasing postnatal age. Conclusion: At 1 week of age, prone sleeping increased CFOE in extremely preterm infants, suggesting reduced cerebral blood flow. Our findings reveal important physiological insights in clinically stable preterm infants. Further studies are needed to verify our findings in unstable preterm infants regarding the potential risk of cerebral injury in the prone sleeping position in early postnatal life.
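The CFOE definition quoted in that abstract is a one-line computation. As a minimal illustrative sketch (the function name and the example values are hypothetical, not taken from the study):

```python
def cfoe(sao2_pct: float, toi_pct: float) -> float:
    """Cerebral fractional tissue oxygen extraction: CFOE = (SaO2 - TOI) / SaO2.

    Both inputs are percentages; the result is a dimensionless fraction.
    Function name and example values are illustrative only.
    """
    if sao2_pct <= 0:
        raise ValueError("SaO2 must be positive")
    return (sao2_pct - toi_pct) / sao2_pct

# Hypothetical example: SaO2 = 96%, TOI = 72% -> CFOE = 24/96 = 0.25
print(cfoe(96.0, 72.0))
```

With SaO2 held constant, a rise in CFOE corresponds to a fall in TOI, which the authors interpret as reduced cerebral blood flow relative to oxygen demand.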
abstract_id: PUBMED:30245358 Bradycardias are associated with more severe effects on cerebral oxygenation in very preterm infants than in late preterm infants. Background: Commonly the magnitude and frequency of bradycardia are underestimated in the neonatal unit due to the long averaging time used in bedside oximeters. We aimed to assess the frequency and severity of bradycardia in preterm infants using the lowest averaging time (2 s) available on a clinical oximeter, compared with bradycardia detected using electrocardiogram (ECG), and whether bradycardia severity and postmenstrual age affected cerebral oxygenation. Methods: Preterm infants (10 M/9F) were studied longitudinally at 26-31 (very preterm) and 32-38 weeks (late preterm) postmenstrual age. Heart rate falls calculated from ECG were used to determine mild or moderate/severe (MS) bradycardias. Cerebral tissue oxygenation index (TOI, %) was recorded and fractional tissue oxygen extraction (FTOE) calculated. Results: Of the 615 bradycardias scored using ECG criteria, 10% were not detected by oximetry. TOI falls associated with bradycardias were greater for MS bradycardias compared with Mild for both groups (p < 0.001 for both). The FTOE associated with MS bradycardias was higher for the very preterm compared with the late preterm group (p < 0.001). In very preterm infants 61% of MS and 35% of Mild bradycardias were associated with TOI nadirs below 55%. Conclusion: Even the most sensitive oximeter setting underestimates bradycardias. The cerebral effect from bradycardias in very preterm infants is more severe than in late preterm infants. Even the mild bradycardias are associated with falls in cerebral oxygenation. Routine NIRS monitoring of cerebral oxygenation in NICUs may increase staff awareness for interventions to reduce the repetitive falls in cerebral oxygenation in preterm infants.
abstract_id: PUBMED:29528500 Comparison of the longitudinal effects of persistent periodic breathing and apnoea on cerebral oxygenation in term- and preterm-born infants. Key Points: Periodic breathing and apnoea were more common in preterm compared to age-matched term-born infants across the first 6 months after term-corrected age. Periodic breathing decreased with age in both term and preterm infants. Apnoea duration was not different between groups; however, the decline in apnoea index with postnatal age observed in the term infants was not seen in the preterm infants. Falls in tissue oxygenation index (brain TOI) associated with apnoeas were greater in the preterm infants at all three ages studied. The clinical significance of falls in brain TOI during periodic breathing and apnoea on neurodevelopmental outcome is unknown and warrants further investigation. Abstract: Periodic breathing and short apnoeas are common in infants, particularly those born preterm, but are thought to be benign. The aim of our study was to assess the incidence and impact of periodic breathing and apnoea on heart rate, oxygen saturation and brain tissue oxygenation index (TOI) in infants born at term and preterm over the first 6 months after term equivalent age. Nineteen term-born infants (38-42 weeks gestational age) and 24 preterm infants (born at 27-36 weeks gestational age) were studied at 2-4 weeks, 2-3 months and 5-6 months post-term-corrected age during sleep. Periodic breathing episodes were defined as three or more sequential apnoeas each lasting ≥3 s and apnoeas as ≥3 s in duration. The mean duration of periodic breathing episodes was longer in term infants than in preterm infants at 2-4 weeks (P < 0.05) and at 5-6 months (P < 0.05); however, the nadir in TOI was significantly less in the term infants at 2-3 months (P < 0.001).
Apnoea duration was not different between groups; however, the decline in apnoea index with postnatal age observed in the term infants was not seen in the preterm infants. Falls in TOI associated with apnoeas were greater in the preterm infants at all three ages studied. In conclusion, periodic breathing and short apnoeas were more common in infants born preterm and falls in cerebral oxygenation were greater than in the term group. The clinical significance of this on neurodevelopmental outcome is unknown and warrants further investigation. abstract_id: PUBMED:15188981 Cerebral blood-flow velocities in predicting outcome of asphyxiated newborn infants. Aim: To evaluate the role of early (up to 12 h) changes in cerebral blood-flow (CBF) velocity in predicting the severity of hypoxic-ischaemic encephalopathy (HIE) and long-term outcome in asphyxiated term infants. Methods: CBF velocities were investigated by colour Doppler ultrasonography in 81 healthy and 60 asphyxiated term infants at least three times during the first 5 d of life. The psychomotor development of infants was followed up to 18 mo. Results: No differences in CBF velocities were found at the age of 2-6 h between infants with severe and mild-moderate HIE, mean CBF velocity [mean (95% CI of mean CBF velocity)] in anterior cerebral artery [14.9 (1.4-28.4) cm/s] and [13.9 (11.1-16.7) cm/s], respectively, and between infants with poor outcome (death or severe disability) and with normal development/mild impairments. By the age of 12 h infants with mild-moderate HIE and infants with normal development/mild impairments had decreased CBF velocity in the anterior cerebral artery, and infants with severe HIE or poor outcome had increased mean CBF velocity in anterior, medial cerebral and basilar artery compared to the control group. Conclusion: The value of CBF velocity changes to predict poor outcome in asphyxiated infants is low 2-6 h after asphyxia, but increases by the age of 12 h.
abstract_id: PUBMED:28824052 Cerebral Tissue Oxygenation Index Using Near-infrared Spectroscopy during Extracorporeal Cardio-pulmonary Resuscitation Predicted Good Neurological Recovery in a Patient with Acute Severe Anemia. We herein report the clinical course of a patient who had a good neurological outcome despite severe anemia, high serum lactate levels, and a long period of time from cardiac arrest (CA) to extracorporeal cardio-pulmonary resuscitation (ECPR) establishment. During the period of resuscitation, the tissue oxygenation index (TOI) values were measured continuously by a near-infrared spectroscopy monitoring device and were kept within the normal range. The TOI seems to reflect cerebral perfusion and the balance between the oxygen supply and demand in the brain during ECPR, thereby predicting the neurological outcome. Continuous TOI monitoring is useful for predicting the neurological outcome during ECPR. Answer: Yes, the tissue oxygenation index (TOI) and cotside neurophysiological variables can predict outcomes in depressed/asphyxiated newborn infants. Studies have shown that TOI, when combined with other biochemical and neurophysiological variables, can have diagnostic and prognostic value in birth depression or asphyxia. In a study of term neonates with an Apgar score ≤ 6 at 5 minutes, those with an abnormal outcome at 1 year had a higher TOI compared to controls and infants with a normal 1-year outcome. The study also found that base excess (BE) and evoked potential abnormalities were the best predictors of abnormal outcome among the cotside variables (PUBMED:17052867). Another study in asphyxiated cooled infants found that a higher TOI was observed in infants with adverse outcomes compared to those with normal outcomes. This suggests that TOI could be useful in predicting early on which infants may benefit from other innovative therapies (PUBMED:22082686). 
Moreover, TOI changes have been found to be concordant with cerebral blood flow (CBF) variations when arterial oxygen saturation and cerebral oxygen consumption are constant, indicating that TOI reflects CBF, particularly to variations of CBF of low frequency (<0.1 Hz) (PUBMED:19373456). In preterm infants, the heart rate passivity of cerebral tissue oxygenation, as measured by TOI, has been associated with predictors of poor outcome, suggesting that TOI may be of diagnostic value in identifying impaired cerebrovascular reactivity leading to adverse clinical outcomes (PUBMED:24844816). Overall, these findings suggest that TOI, along with other neurophysiological measurements, can be a valuable tool in predicting neurological outcomes in depressed/asphyxiated newborn infants.
Instruction: Bioterrorism-related inhalational anthrax: can extrapolated adult guidelines be applied to a pediatric population? Abstracts: abstract_id: PUBMED:17437350 Bioterrorism-related inhalational anthrax: can extrapolated adult guidelines be applied to a pediatric population? Objective: Since the 2001 anthrax attacks, an extensive body of literature has evolved, but there has been a limited focus on the management of pediatric-specific issues. We looked at the symptom complexes of all pediatric patients presenting to the emergency department of our hospital during this period and examined whether their presentations would likely allow current guidelines to be used as potential screening criteria to identify children infected with anthrax. Methods: We retrospectively reviewed emergency department records of all adult and pediatric patients (up to the age of 21 years) at Inova Fairfax Hospital during this time, when a large, and at the time ill-defined, group in the Washington, DC, metropolitan area was at risk for pulmonary anthrax. Two cases of anthrax infection were identified at this hospital in exposed adult postal workers. Screening algorithms (described by Mayer et al. and Hupert et al.) were applied to adult and pediatric patients with the presence of fever (38 degrees C), tachycardia, or other symptoms compatible with pulmonary anthrax. Specifically, the usefulness of these guidelines as potential screening tools to identify possibly infected children was examined. Results: Of 767 pediatric patients seen in the emergency department during the study period, 312 met criteria for review (41%; 95% CI: 37-44%). Four adult patients (0.4%; 95% CI: 0.1-0.9%) had at least five clinical symptoms, fever, and tachycardia; two of them had inhalational anthrax. No pediatric patient presented with five or more clinical symptoms. Twelve children (3.9%; 95% CI: 2-6.6%) presented with four clinical symptoms; five of the 12 had neither fever nor tachycardia. 
Children, particularly infants and toddlers, presented with nonspecific symptom complexes primarily limited to fever, vomiting, cough, and trouble breathing. Conclusions: Existing guidelines are likely to be unreliable as a screening tool for inhalational anthrax in children, largely because of the children's inability to adequately communicate a suggestive symptom complex. abstract_id: PUBMED:16815456 Clinical and pathologic differential diagnosis of selected potential bioterrorism agents of interest to pediatric health care providers. The early recognition of potential bioterrorism agents has been of increasing concern in recent years. The Centers for Disease Control and Prevention has categorized and listed biological terrorism agents. Although any or all of the highest risk biological agents (including inhalation anthrax, pneumonic plague, smallpox, tularemia, botulism, and viral hemorrhagic fevers) can be seen in the pediatric patient, several agents might closely resemble, at least in their initial stages, some of the more common childhood illnesses. The awareness of these similarities and, more importantly, their differences, is critical for all health care professionals. Selected examples of some typical childhood illnesses are presented and then compared with three of the most virulent biological agents (smallpox, anthrax and plague) that might be used in a bioterrorism attack. abstract_id: PUBMED:12746773 Inhalational anthrax due to bioterrorism: would current Centers for Disease Control and Prevention guidelines have identified the 11 patients with inhalational anthrax from October through November 2001? A panel of 10 physicians used the nominal group technique to assess the ability of the Centers for Disease Control and Prevention (CDC) interim guidelines for clinical evaluation of persons with possible inhalational anthrax (IA) to retrospectively identify the 11 patients with IA seen during the October 2001 bioterrorism outbreak.
The guidelines would not have identified 10 of the 11 patients, primarily because the guidelines were designed to address only those patients with a known history of exposure or a clearly identified environmental or occupational risk. The panel suggested revisions to the guidelines, primarily consisting of broadening the criteria for evaluation to include either known exposure or environmental/occupational risk, or clinical symptoms consistent with IA. These extensions of the guidelines retrospectively identified 8 of the 11 patients with IA from October 2001. abstract_id: PUBMED:17764208 Pediatric anthrax: implications for bioterrorism preparedness. Objectives: To systematically review the literature about children with anthrax to describe their clinical course, treatment responses, and the predictors of disease progression and mortality. Data Sources: MEDLINE (1966-2005), 14 selected journal indexes (1900-1966) and bibliographies of all retrieved articles. Review Methods: We sought case reports of pediatric anthrax published between 1900 and 2005 meeting predefined criteria. We abstracted three types of data from the English-language reports: (1) patient information (e.g., age, gender, nationality); (2) symptom and disease progression information (e.g., whether the patient developed meningitis); (3) treatment information (e.g., treatments received, year of treatment). We compared the clinical symptoms and disease progression variables for the pediatric cases with data on adult anthrax cases reviewed previously. Results: We identified 246 titles of potentially relevant articles from our MEDLINE(R) search and 2253 additional references from our manual search of the bibliographies of retrieved articles and the indexes of the 14 selected journals. We included 62 case reports of pediatric anthrax including two inhalational cases, 20 gastrointestinal cases, 37 cutaneous cases, and three atypical cases.
Anthrax is a relatively common and historically well-recognized disease, yet it is rarely reported among children, suggesting the possibility of significant under-diagnosis, underreporting, and/or publication bias. Children with anthrax present with a wide range of clinical signs and symptoms, which differ somewhat from the presenting features of adults with anthrax. Like adults, children with gastrointestinal anthrax have two distinct clinical presentations: upper tract disease characterized by dysphagia and oropharyngeal findings, and lower tract disease characterized by fever, abdominal pain, and nausea and vomiting. Additionally, children with inhalational disease may have "atypical" presentations including primary meningoencephalitis. Children with inhalational anthrax have abnormal chest roentgenograms; however, children with other forms of anthrax usually have normal roentgenograms. Nineteen of the 30 children (63%) who received penicillin-based antibiotics survived, whereas nine of 11 children (82%) who received anthrax antiserum survived. Conclusions: There is a broad spectrum of clinical signs and symptoms associated with pediatric anthrax. The limited data available regarding disease progression and treatment responses for children infected with anthrax suggest some differences from adult populations. Preparedness planning efforts should specifically address the needs of pediatric victims. abstract_id: PUBMED:15677848 Bichat guidelines for the clinical management of anthrax and bioterrorism-related anthrax. The spore-forming Bacillus anthracis must be considered as one of the most serious potential biological weapons. The recent cases of anthrax caused by a deliberate release reported in 2001 in the United States point to the necessity of early recognition of this disease. Infection in humans most often involves the skin, and more rarely the lungs and the gastrointestinal tract.
Inhalational anthrax is of particular interest for possible deliberate release: it is a life-threatening disease, and early diagnosis and treatment can significantly decrease the mortality rate. Treatment consists of massive doses of antibiotics and supportive care. Isolation is not necessary. Antibiotics such as ciprofloxacin are recommended for post-exposure prophylaxis for 60 days. abstract_id: PUBMED:16419465 Bioterrorism agents: getting ready for the unthinkable The September 11, 2001 terrorist attacks in the U.S.A. demonstrated our vulnerability to terrorist raids. Furthermore, in the same year inhalational anthrax cases in humans caused by the intentional release of Bacillus anthracis spores via the U.S.A. postal system inflicted a lot of panic and terror over the civilian population. The succeeding terrorist events scattered in several other countries are continuous reminders of our frailty and of the risk that terrorist attempts in the future may be implemented by means of the deliberate release of biological agents. These events may be perpetrated by either the release of an infectious agent or any of its products in order to spread death or sickness in humans, animals, or plants with the purpose of intimidating governments and societies for the profit of particular ideological causes. In the current article, we present a review of the main bioterrorism agents, as well as their historical and clinical aspects and their significance for public health preparedness and response. abstract_id: PUBMED:36981980 Triage Algorithms for Mass-Casualty Bioterrorism: A Systematic Review. Objectives: To understand existing triage algorithms and propose improvement measures through comparison, in order to better deal with mass-casualty incidents caused by bioterrorism. Study Design: Systematic review. Methods: Medline, Scopus and Web of Science were searched up to January 2022. Studies investigating triage algorithms for mass-casualty bioterrorism were included.
Quality assessment was performed using the International Narrative Systematic Assessment tool. Data extractions were performed by four reviewers. Results: Of the 475 titles identified in the search, 10 studies were included. There were four studies on triage algorithms for most bioterrorism events, four studies on triage algorithms for anthrax and two studies on triage algorithms for mental or psychosocial problems caused by bioterrorism events. We introduced and compared 10 triage algorithms used for different bioterrorism situations. Conclusion: For triage algorithms for most bioterrorism events, it is necessary to determine the time and place of the attack as soon as possible, control the number of exposed and potentially exposed people, prevent infection and determine the type of biological agents used. Research on the effects of decontamination on bioterrorism attacks needs to continue. For anthrax triage, future research should improve the distinction between inhalational anthrax symptoms and common disease symptoms and improve the efficiency of triage measures. More attention should be paid to triage algorithms for mental or psychosocial problems caused by bioterrorism events. abstract_id: PUBMED:15578409 Screening for inhalational anthrax due to bioterrorism: evaluating proposed screening protocols. Eleven known cases of bioterrorism-related inhalational anthrax (IA) were treated in the United States during 2001. We retrospectively compared 2 methods that have been proposed to screen for IA. The 2 screening protocols for IA were applied to the emergency department charts of patients who presented with possible signs or symptoms of IA at Inova Fairfax Hospital (Falls Church, Virginia) from 20 October 2001 through 3 November 2001. The Mayer criteria would have screened 4 patients (0.4%; 95% CI, 0.1%-0.9%) and generated charges of 1900 dollars. 
If 29 patients (2.6%; 95% CI, 1.7%-3.7%) with ≥5 symptoms (but without fever and tachycardia) were screened, charges were 13,325 dollars. The Hupert criteria would have screened 273 patients (24%; 95% CI, 22%-27%) and generated charges of 126,025 dollars. In this outbreak of bioterrorism-related IA, applying the Mayer criteria would have identified both patients with IA and would have generated fewer charges than applying the Hupert criteria. abstract_id: PUBMED:12751654 Recognition and management of bioterrorism infections. Recent events have demonstrated that bioterrorists have the ability to disseminate biologic agents in the United States and cause widespread social panic. Family physicians would play a key role in the initial recognition of a potential bioterrorism attack. Familiarity with the infectious agents of highest priority can expedite diagnosis and initial management, and lead to a successful public health response to such an attack. High-priority infectious agents include anthrax, smallpox, plague, tularemia, botulism, and viral hemorrhagic fever. Anthrax and smallpox must be distinguished from such common infections as influenza and varicella. Anthrax treatment is stratified into postexposure prophylaxis and treatment of confirmed cutaneous, intestinal, or inhalation anthrax. Disease prevention by vaccination and isolation of affected persons is key in preventing widespread smallpox infection. Many resources are available to physicians when a bioterrorism attack is suspected, including local public health agencies and the Centers for Disease Control and Prevention. abstract_id: PUBMED:12396911 First case of bioterrorism-related inhalational anthrax, Florida, 2001: North Carolina investigation. The index case of inhalational anthrax in October 2001 was in a man who lived and worked in Florida.
However, during the 3 days before illness onset, the patient had traveled through North Carolina, raising the possibility that exposure to Bacillus anthracis spores could have occurred there. The rapid response in North Carolina included surveillance among hospital intensive-care units, microbiology laboratories, medical examiners, and veterinarians, and site investigations at locations visited by the index patient to identify the naturally occurring or bioterrorism-related source of his exposure. Answer: The extrapolation of adult guidelines for bioterrorism-related inhalational anthrax to a pediatric population is likely to be unreliable. A study conducted at Inova Fairfax Hospital during the 2001 anthrax attacks found that existing guidelines, which were applied to both adult and pediatric patients, were not effective as a screening tool for inhalational anthrax in children. This was primarily due to the children's inability to adequately communicate a suggestive symptom complex. Specifically, no pediatric patient presented with five or more clinical symptoms, which was a criterion for adults, and children often presented with nonspecific symptoms such as fever, vomiting, cough, and trouble breathing (PUBMED:17437350). Additionally, pediatric anthrax presents with a wide range of clinical signs and symptoms that differ somewhat from the presenting features of adults with anthrax. Preparedness planning efforts should specifically address the needs of pediatric victims, considering the differences in disease progression and treatment responses between children and adults (PUBMED:17764208). 
Furthermore, the CDC guidelines for clinical evaluation of persons with possible inhalational anthrax would not have identified 10 of 11 patients with inhalational anthrax seen during the October 2001 bioterrorism outbreak, primarily because the guidelines were designed to address only those patients with a known history of exposure or clearly identified environmental or occupational risk. The panel suggested revisions to the guidelines to include clinical symptoms consistent with inhalational anthrax (PUBMED:12746773). In conclusion, adult guidelines for bioterrorism-related inhalational anthrax cannot be directly applied to a pediatric population due to differences in symptom presentation and communication abilities. There is a need for pediatric-specific guidelines and preparedness planning to effectively identify and treat anthrax in children.
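The screening logic discussed in these abstracts combines a symptom count with vital-sign criteria (at least five compatible clinical symptoms plus fever and tachycardia). A small, purely illustrative sketch of such a rule follows; the symptom list, thresholds, and function name are assumptions for illustration, not the published Mayer or Hupert criteria verbatim:

```python
# Illustrative Mayer-style screening predicate: flag a patient for
# inhalational-anthrax work-up when >=5 compatible symptoms are present
# AND the patient has fever (>=38.0 C) AND tachycardia. All names and
# thresholds here are hypothetical simplifications of the criteria
# described in PUBMED:17437350 and PUBMED:15578409.

COMPATIBLE_SYMPTOMS = {
    "cough", "dyspnea", "chest_pain", "headache", "myalgia",
    "vomiting", "abdominal_pain", "fatigue",
}

def mayer_style_screen(symptoms, temp_c, heart_rate, tachycardia_bpm=100):
    """Return True if the simplified screen flags the patient."""
    n_symptoms = len(COMPATIBLE_SYMPTOMS & set(symptoms))
    fever = temp_c >= 38.0
    tachycardia = heart_rate > tachycardia_bpm
    return n_symptoms >= 5 and fever and tachycardia

# A toddler reporting only the nonspecific pediatric complex described in
# PUBMED:17437350 (fever, vomiting, cough, trouble breathing) never reaches
# the five-symptom threshold, so the adult-derived rule misses the child.
child = mayer_style_screen(["cough", "vomiting", "dyspnea"], 38.6, 130)
adult = mayer_style_screen(
    ["cough", "dyspnea", "chest_pain", "myalgia", "fatigue"], 38.4, 110)
print(child, adult)  # the child is not flagged; the adult presentation is
```

The sketch makes the pediatric failure mode concrete: because young children cannot articulate enough of the symptom list, the conjunction `n_symptoms >= 5 and fever and tachycardia` rarely evaluates to True for them, regardless of how ill they are.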
Instruction: Do Effects on Bowel Patterns Imposed by Solifenacin Always Have Negative Impacts on Treating Patients with Overactive Bladder (OAB)? Abstracts: abstract_id: PUBMED:27111622 Do Effects on Bowel Patterns Imposed by Solifenacin Always Have Negative Impacts on Treating Patients with Overactive Bladder (OAB)? Objectives: To investigate whether solifenacin intervention leads to any changes in bowel symptoms, and the types of impacts imposed on bowel symptoms in patients with overactive bladder (OAB). Methods: This prospective, single-arm observational study included 40 adult patients who underwent anticholinergic treatment for OAB. Outcome measures were determined by examining differences in voiding and bowel symptoms, before and after patients commenced anticholinergic therapy. Patients were evaluated at baseline, 4, and 12 weeks via questionnaires on OAB and irritable bowel syndrome (IBS), side-effects, and overall satisfaction with the treatment. Results: A total of 22 patients completed follow-up visits. Mean age was 62.1 ± 10.3 years. The most common side-effects were constipation and dry mouth. OAB symptom scores improved, with significant changes in urgency, incontinence, and total symptom scores and borderline significant changes in frequency. All bowel symptoms except diarrhea became aggravated. Average constipation and overall quality of life worsened with significance. Aside from the specific bowel habit changes, solifenacin treatment resulted in changes in patient status of IBS, as well. Patients were mostly satisfied with the treatment, despite some aggravations in discomfort due to defecation problems. Conclusions: This study shows that solifenacin treatment is effective for treating urinary incontinence but may lead to changes in bowel patterns and affects overall quality of life (QoL). 
Effects on bowel patterns imposed by solifenacin can be positive or negative; therefore, physicians should consider more holistic therapy by addressing overall bowel symptoms when treating OAB patients. abstract_id: PUBMED:37863737 A Retrospective Database Analysis to Investigate Treatment Patterns and Health Care Resource Utilisation in Patients who CYCLe AntiMuscarinics in ENgland (CYCLAMEN). Background: Patients with overactive bladder may cycle through different antimuscarinic medications even though there is limited evidence to support this approach. Objective: To describe treatment patterns and the associated health care resource utilisation (HCRU) according to antimuscarinic cycling groups. Design, Setting, And Participants: The CYCLe AntiMuscarinics in ENgland (CYCLAMEN) study was a retrospective observational investigation that used primary care records from the Clinical Practice Research Datalink GOLD database linked to Hospital Episode Statistics secondary care data. Eligible patients (≥18 yr) were prescribed their first antimuscarinic between January 2014 and December 2017. Patients were categorised into groups prescribed one, two, or three or more (groups 1-3) consecutive unique antimuscarinics over 18 mo. Outcome Measurements And Statistical Analysis: The HCRU rate and costs were calculated for the period of continuous antimuscarinic therapy (first antimuscarinic treatment episode) and the 18-mo follow-up period. Treatment sequence patterns were displayed using sunburst plots and Kaplan-Meier analysis was used to assess time on treatment. Results And Limitations: Overall, 35 369 patients were included, of whom 31 760 (89.8%) received one antimuscarinic (group 1), 3182 (9.0%) received two (group 2), and 427 (1.2%) received three or more (group 3). The most common initial antimuscarinics were solifenacin (13 628 patients, 42.9%) in group 1, and oxybutynin in group 2 (1267 patients, 39.8%) and group 3 (200 patients, 46.8%).
The median duration of the first antimuscarinic treatment episode was 57 d and <20% of patients were receiving any antimuscarinic after 18 mo. The number of primary care visits and mean costs increased across groups. The reasons for cycling could not be identified in this study. Conclusions: Approximately 10% of patients underwent sequential cycling with two or more antimuscarinics. Furthermore, as the majority discontinued treatment within 18 mo, there is a need to improve the management of these patients in the clinical care setting. Patient Summary: We investigated treatment patterns and health care use for patients with overactive bladder who were prescribed at least one antimuscarinic drug (AMD), which are drugs that reduce some of the impulses passing from the bladder to the brain. Around 10% of patients accessing primary health care in England received more than one sequential AMD. Most patients discontinued treatment, which may indicate inadequate management of their condition. Prescription of a higher number of AMDs was associated with higher health care costs. abstract_id: PUBMED:25524209 The analysis of incontinence episodes and other count data in patients with overactive bladder by Poisson and negative binomial regression. Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data.
One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. abstract_id: PUBMED:31195868 A Nordic registry-based study of drug treatment patterns in overactive bladder patients. Aim: To describe treatment patterns in Denmark, Norway and Sweden for patients receiving overactive bladder (OAB) pharmacotherapy.Methods: This was a prospective, multinational, registry-based study involving three nationwide prescribed drug registries (sample size 6000 patients per country), performed between 1 January and 30 June 2014. Patients were followed prospectively for 12 months after first pick-up of index medication. The primary objective was to evaluate the proportion of patients picking up first refill of index medication. Secondary objectives included evaluation of the average number of pick-ups collected during 1 year and time to discontinuation of index medication.Results: A high proportion of patients in the three Nordic countries picked up a first refill of OAB medication: 64-75% for mirabegron and 84-95% for individual antimuscarinics. Amongst treatment-naïve patients, the proportion picking up their first mirabegron refill was 60-64%; for individual antimuscarinics it was 30-63%. Mean number of pick-ups during 1 year ranged from 3.5-5.0 for mirabegron across the countries and for individual antimuscarinics from 3.8-12.3. 
Median time to discontinuation for mirabegron ranged from 140 (Denmark) to 207 days (Norway) and, for individual antimuscarinics (solifenacin), from 182 (Denmark) to 355 days (Sweden). At 12 months, the proportion of patients still on treatment with mirabegron and antimuscarinics was 21% and 38%, respectively.Conclusions: Treatment patterns in patients with OAB picking up a mirabegron or antimuscarinic prescription in Denmark, Norway and Sweden indicate that persistence remains a challenge. abstract_id: PUBMED:27738922 A short review of drug-food interactions of medicines treating overactive bladder syndrome. Background Overactive bladder syndrome is a condition where one or more of the symptoms such as pollakiuria, urgent need to urinate, nocturia and urinary incontinence is observed. Its prevalence ranges between 7 and 27 % in men and 9-43 % in women. The role of a pharmacist is to educate the patient on medications administration scheme, and drug interactions with particular food or food components. Aim of the review To assess a potential impact of food and fruit juice on the pharmacokinetic and therapeutic effects of medications used in treating overactive bladder syndrome. This information will enhance pharmaceutical care and is vital and helpful for pharmacists counseling their patients. Method In order to gather information on interactions of medications employed in bladder dysfunctions, the English language reports published in the PubMed, Embase, Cochrane and CINAHL database over the years 1996-2015 were studied. Additionally, other resources, namely drugs.com, Medscape, UpToDate, Micromedex, Medical Letter, as well as Stockley Drugs Interaction electronic publication were included in the study. The analysis also covered product data sheets for particular medicinal products. Results Meals and the consumption of grapefruit juice were found to exert a diversified effect on the pharmacokinetics of drugs employed in overactive bladder syndrome therapy. 
Neither tolterodine nor mirabegron interacts with food or citrus fruit juice, whereas darifenacin, fesoterodine, oxybutynin and solifenacin do interact with grapefruit and other citrus fruit juices. The effects of such interactions may potentially be negative for patients. Trospium absorption is significantly decreased by food. Conclusion: For selected medicines used in treating bladder dysfunctions, food and grapefruit juice consumption may significantly affect the efficacy and safety of therapy. All information on the topic is likely to enhance the quality of pharmaceutical care. abstract_id: PUBMED:23171069 Differential effects of selective and non-selective muscarinic antagonists on gastrointestinal transit and bowel function in healthy women. Background: The gastrointestinal effects of antimuscarinic drugs used to treat overactive bladder may be related to the selectivity of these agents for M(3)-muscarinic receptor subtypes. We compared the effects of non-selective (fesoterodine) and M(3)-selective (solifenacin) antimuscarinics on gastrointestinal transit in healthy women. Methods: Gastric emptying (GE), small-intestinal transit (colonic filling at 6 h), colonic transit [geometric center at 24 h (GC24; primary endpoint) and 48 h (GC48)], and bowel habits were assessed by scintigraphy and bowel diaries before and after randomization to fesoterodine 8 mg, solifenacin 10 mg, or placebo (2 : 2 : 1) for 14 days. An interim analysis to finalize sample size was conducted. Key Results: After 60 subjects [placebo (n = 12), fesoterodine (n = 25), solifenacin (n = 23)] completed the study, the study was terminated due to a prespecified criterion (sample size ≥ 452.5 needed to provide ≥ 80% power to demonstrate superiority of fesoterodine over solifenacin in GC24).
Compared with baseline, (i) placebo delayed small-intestinal, but not colonic, transit, (ii) fesoterodine significantly increased GE t(1/2) vs placebo (17.0 min; P = 0.027), and (iii) fesoterodine and solifenacin delayed small-intestinal (-36.8% and -21.8%, respectively, P < 0.001 vs placebo) and colonic transit (GC24: -0.44 and -0.49, respectively, P < 0.05 vs placebo; GC48: -0.25 and -0.65, respectively, P > 0.05 vs placebo). Solifenacin increased stool hardness from baseline (P = 0.010 for difference vs fesoterodine); stool frequency was comparable. Conclusions & Inferences: In healthy women, fesoterodine had greater effects on small-intestinal transit and solifenacin had greater effects on colonic transit; the latter finding may explain why solifenacin, but not fesoterodine, increased stool hardness. abstract_id: PUBMED:30804425 Anti-muscarinic drugs increase rectal compliance and exacerbate constipation in chronic spinal cord injury: Anti-muscarinic drug effect on neurogenic bowel. Study Design: Prospective cohort study OBJECTIVES: We hypothesized that anti-muscarinic agents alter rectal compliance in SCI patients and that altered rectal compliance relates to bowel symptomatology. Our primary aim was to compare rectal compliance before and after the institution of anti-muscarinics (solifenacin and tolterodine) and an adrenoceptor agonist (mirabegron) in these patients. Additionally, we wanted to evaluate if anorectal manometry differed before and after use of anti-muscarinic agents. Setting: Tertiary neurogastroenterology clinic, London METHODS: Thirty-five patients with supraconal spinal cord injury (SCI) underwent anal manometry, assessment of rectoanal inhibitory reflex (RAIR) and rectal compliance before and after anti-muscarinic treatment (for overactive bladder) was started (mean follow-up 12 weeks). Patients were assessed identically, pre- and post-treatment (solifenacin n = 17, tolterodine n = 10, mirabegron n = 8).
Doses used were as for non-SCI patients. Results: Resting, squeeze and cough pressures were unchanged after anti-muscarinic treatment. Rectal compliance was significantly raised after anti-muscarinic treatment (p = 0.001). The percent amplitude of maximal sphincter relaxation of the RAIR was decreased (p < 0.001) and excitation latency was increased (p = 0.006). There was no significant change in the duration of recovery of the RAIR. There was a significant increase of the Wexner Constipation Score (p = 0.001) but no change in the Wexner Incontinence Score. There was a significant correlation between change in rectal compliance and change in Wexner Constipation Score (p = 0.001). Thus, increasing compliance of the rectum is associated with worsening of constipation after anti-muscarinic therapy. However, there were no changes in anorectal manometry or rectal compliance in those who received mirabegron. Conclusion: Anti-muscarinic therapy for overactive bladder increases compliance of the neurogenic rectum and alters anorectal reflex activity, with worsening of constipation. abstract_id: PUBMED:18438799 Solifenacin-induced small bowel pseudo-obstruction. An 89-year-old woman was admitted to Johns Hopkins Hospital with a small bowel obstruction and symptoms of urinary retention. She had been started on solifenacin for bladder overactivity 10 days prior to her presentation. Withdrawal of the solifenacin resulted in a full recovery, which has persisted for greater than 6 months without surgical intervention. This is the first reported case of small bowel pseudo-obstruction due to solifenacin. abstract_id: PUBMED:18660862 Concomitant medications and possible side effects of antimuscarinic agents. Antimuscarinic agents are the treatment of choice for overactive bladder syndrome; clinical experience and the literature support their efficacy, tolerability, and safety. The most common side effects experienced include dry mouth and constipation.
Many commonly prescribed drugs have anticholinergic effects that could increase the anticholinergic "load" or "burden" in patients with overactive bladder, potentially increasing the frequency and severity of side effects. In addition, the adverse events associated with antimuscarinics may be more pronounced in the elderly, especially those taking multiple medications. Knowledge regarding the potential side effects associated with antimuscarinics is important so that patients can be advised and effectively treated. abstract_id: PUBMED:30260993 Patterns of use of antimuscarinic drugs to treat overactive bladder in Denmark, Sweden, and the United Kingdom. Purpose: To describe the use of antimuscarinic drugs to treat overactive bladder (OAB) in Denmark, Sweden, and the United Kingdom (UK). Methods: We identified new users of darifenacin, fesoterodine, oxybutynin, solifenacin, tolterodine, and trospium aged 18 years or older from the Danish National Registers (2004-2012), the Swedish National Registers (2006-2012), and UK Clinical Practice Research Datalink (2004-2012). Users were followed until disenrollment, cancer diagnosis, death, or study end. Treatment episodes, identified by linking consecutive prescriptions, were described with respect to duration, drug switch, and drug add-on. Results: Mean age of OAB drug users was 66 years in Denmark (n = 72,917) and Sweden (n = 130,944), and 62 years in the UK (n = 119,912); 60% of Danish and Swedish patients and 70% of UK patients were female. In Denmark, of 224,680 treatment episodes, 39% were with solifenacin, and 35% with tolterodine; 2% were with oxybutynin. In Sweden, of 240,141 therapy episodes, 37% were with tolterodine and 35% with solifenacin; 5% were with oxybutynin. In the UK, of 245,800 treatment episodes, 28% were with oxybutynin, 27% with solifenacin, and 26% with tolterodine. 
In the three countries, 49%-52% of treatment episodes comprised one prescription and over 80% of episodes ended because of no refill; less than 20% ended because of a switch to another antimuscarinic. During the study years, we observed a change in OAB treatment preference from tolterodine to solifenacin. Conclusions: In these cohorts, persistence with antimuscarinic drugs was low. By 2012, the preferred drug was solifenacin; oxybutynin use was marginal in Nordic countries compared with the UK. Answer: The effects on bowel patterns imposed by solifenacin do not always have negative impacts on treating patients with overactive bladder (OAB), but they can vary. While solifenacin treatment is effective for treating urinary incontinence, it may lead to changes in bowel patterns and affect overall quality of life (QoL). The most common side effects reported were constipation and dry mouth. All bowel symptoms except diarrhea became aggravated, and average constipation and overall QoL worsened significantly. Despite some aggravations in discomfort due to defecation problems, patients were mostly satisfied with the treatment. This suggests that while there are negative impacts on bowel patterns, the overall effectiveness of solifenacin in treating OAB symptoms can lead to patient satisfaction. Physicians should consider a more holistic therapy approach by addressing overall bowel symptoms when treating OAB patients (PUBMED:27111622). In a broader context, treatment patterns for OAB indicate that persistence with antimuscarinic drugs like solifenacin can be low, and there is a need to improve the management of these patients in the clinical care setting (PUBMED:37863737). 
Additionally, other studies have shown that antimuscarinic drugs, including solifenacin, can increase rectal compliance and exacerbate constipation in patients with chronic spinal cord injury, indicating that the effects on bowel patterns can be significant and negative in certain populations (PUBMED:30804425). However, it is also noted that the effects of antimuscarinics on gastrointestinal transit may be related to the selectivity of these agents for M3-muscarinic receptor subtypes, with solifenacin having greater effects on colonic transit and increasing stool hardness (PUBMED:23171069). In conclusion, while solifenacin can have negative impacts on bowel patterns, these effects do not always overshadow its benefits in treating OAB symptoms. The overall impact on a patient's treatment satisfaction and QoL will depend on the balance between the improvement in urinary symptoms and the management of bowel side effects.
Instruction: Biomechanical analysis of the modified Bristow procedure for anterior shoulder instability: is the bone block necessary? Abstracts: abstract_id: PUBMED:24925701 Biomechanical analysis of the modified Bristow procedure for anterior shoulder instability: is the bone block necessary? Background: Anterior shoulder instability with bone loss can be treated successfully with the modified Bristow procedure. Opinions vary regarding the role of the soft-tissue sling created by the conjoined tendon after transfer. Therefore, the aim of this study was to determine the effect of the modified Bristow procedure and conjoined tendon transfer on glenohumeral translation and kinematics after creating anterior instability. Methods: Eight cadaveric shoulders were tested with a custom shoulder testing system. Range-of-motion, translation, and kinematic testing was performed in 60° of glenohumeral abduction in the scapular and coronal planes under the following conditions: intact joint, Bankart lesion with 20% glenoid bone loss, modified Bristow procedure, and soft tissue-only conjoined tendon transfer. Results: A Bankart lesion with 20% bone loss resulted in significantly increased external rotation and translation compared with the intact condition (P < .05), as well as an anterior shift of the humeral head apex at all points of external rotation. Both the modified Bristow procedure and soft-tissue Bristow procedure maintained the increase in external rotation but resulted in significantly decreased translation (P < .05). There was no difference in translation between the 2 reconstructions. Conclusions: The increase in external rotation suggests that the modified Bristow procedure does not initially restrict joint motion. Translational stability can be restored in a 20% bone loss model without a bone block, suggesting the importance of the soft-tissue sling.
abstract_id: PUBMED:25065295 Optimization of bone-block positioning in the Bristow-Latarjet procedure: a biomechanical study. Background: In the Bristow-Latarjet procedure, optimal positioning of the coracoid bone-block on the anterior aspect of the glenoid (standing or lying on the glenoid rim) remains debated. A biomechanical study assessed the effect of the position of the bone-block with its attached conjoint tendon on anterior and inferior stabilization of the humeral head. Materials And Methods: The Bristow-Latarjet procedure was performed on 8 fresh cadaveric shoulders. The bone-block size was standardized at 2.5×1×1 cm. Anterior translation of the humeral head was stress-induced under 30-N traction in maximum external rotation at 0° and at 90° of abduction: adduction and external rotation (ADER) and abduction and external rotation (ABER), respectively. Under radiological control, displacement of the center of the humeral head was compared with the glenoid surface at the 3, 4 and 5 o'clock (medial, antero-inferior and inferior) positions for the 2 bone-block positionings. Results: The lying position at 4 o'clock substantially decreased anterior and inferior displacement of the humeral head in ADER and ABER, respectively; in ABER it also tended to decrease anterior translation, but not significantly. The standing bone-block position did not affect translation. Conclusions: Positioning the bone-block so that it lies on the anterior aspect of the glenoid in the middle of the antero-inferior quarter of the rim at 4 o'clock can decrease anterior displacement of the humeral head and inferior glenohumeral translation, especially in ADER for anterior displacement and in ABER for inferior displacement. Study Design: Laboratory study. abstract_id: PUBMED:33711500 Intra-articular biomechanical environment following modified Bristow and Latarjet procedures in shoulders with large glenoid defects: relationship with postoperative complications.
Background: Although coracoid transfers including the modified Bristow and Latarjet procedures are widely used to treat anterior shoulder instability, the influence of the choice of procedure on the biomechanical outcomes is not well characterized. We aimed to clarify the intra-articular stress distribution following these 2 procedures using 3-dimensional finite-element analysis and to investigate the role of stress distribution in the pathophysiology of postoperative complications. Methods: Overall, 6 male patients aged 17-47 years with unilateral anterior shoulder instability were recruited. Computed tomographic Digital Imaging and Communications in Medicine (CT-DICOM) data of the contralateral (healthy) shoulder of each patient were obtained and used to develop the 3-dimensional normal glenohumeral joint model. A 25% bony defect was created in the anterior glenoid rim where the coracoid process was transferred in the standing and lying-down positions to create the Bristow and Latarjet models, respectively. The arm position was set at 0° or 90° of abduction. The Young's moduli of the humerus and scapula were calculated using the CT data, while those of the articular cartilage and inserted screw were set at 35.0 MPa and 113.8 GPa, respectively. A compressive load (50 N) was applied to the greater tuberosity toward the center of the glenoid, and a tensile load (20 N) was applied to the tip of the coracoid in the direction of the conjoint tendon. Elastic analysis was used to determine the equivalent stress distribution. Results: A significant reduction in mean equivalent stress was observed within the glenoid cartilage for both models (P = .031); however, a new stress concentration appeared within the grafted coracoid-facing region of the humeral-head cartilage in both models.
The proximal half of the coracoid graft exhibited lower equivalent stress than the distal half in 5 of the 6 Latarjet models, whereas the proximal half showed higher equivalent stress than the distal half in all 6 Bristow models. High stress concentration was identified at the midpoint of the inserted screw in Bristow models. Discussion And Conclusions: Intra-articular stress distribution may explain the different rates of postoperative complications associated with the modified Bristow and Latarjet procedures. New stress concentration within the humeral-head cartilage might contribute to the development of glenohumeral osteoarthritis following both procedures. Stress shielding in the proximal part of the coracoid graft might contribute to osteolysis following the Latarjet procedure. Surgeons should be aware of the risk of breakage of the inserted screw following the modified Bristow procedure. abstract_id: PUBMED:34974590 Functional outcomes of modified Bristow procedure in recurrent shoulder dislocation. The glenohumeral joint is the most mobile joint in humans. Of all joint dislocations, 50% involve the shoulder, mostly in young males. When the first dislocation occurs in a patient under 20 years of age, the risk of recurrent instability increases to 90%. Many techniques are available to reduce and stabilise the glenohumeral joint; for anterior dislocations, one of these is the famous Bristow procedure, originally described in 1954 by Latarjet. The purpose of this study was to determine the functional outcomes of the modified Bristow procedure. This retrospective review was conducted at Aga Khan University Hospital, Karachi from January 2000 to December 2015, comprising patients who underwent the modified Bristow procedure. All patients recruited in the study underwent the modified Bristow procedure.
A total of 70 patients were included, comprising 61 (87.1%) males and 9 (12.9%) females with a mean age of 31.6±11.0 years. Shoulder dislocations were caused primarily by road traffic accidents in 48 (68.57%) patients, while the second most common cause, in 13 (18.57%) patients, was playing sports. The mean number of dislocations before surgery was 3.50±0.5, whereas no patient had an episode of dislocation in the post-operative period. Two patients presented with subluxations but none required further surgical intervention. The modified Bristow-Latarjet procedure is considered an effective surgical treatment for recurrent glenohumeral instability. abstract_id: PUBMED:37424459 Determining optimal length of coracoid graft in the modified Bristow procedure for anterior shoulder instability: A three-dimensional finite element analysis. Background: There is a lack of consensus concerning the coracoid graft length in the modified Bristow procedure. Objective: We attempted to determine the optimal graft length using the three-dimensional finite element method. Methods: In a shoulder model with a 25% anterior glenoid defect, a coracoid graft of varying lengths (5, 10, 15, and 20 mm) was fixed using a half-threaded screw. First, a compressive load of 500 N was applied to the screw head to determine the graft failure load during screw tightening. Next, a tensile load (200 N) was applied to the graft to determine the failure load due to biceps muscle traction. Results: Under screw compression, the failure loads in the 5-, 10-, 15-, and 20-mm models were 252, 370, 377, and 331 N, respectively. Under the tensile load applied to the coracoid graft, the failure load exceeded 200 N for both the 5- and 10-mm models. Conclusion: The 5-mm graft had a high risk of fracture during intraoperative screw tightening. As for the biceps muscle traction, the 5- and 10-mm grafts had a lower failure risk than the 15- and 20-mm grafts.
Therefore, we believe that the optimal length of the coracoid graft is 10 mm in the modified Bristow procedure. abstract_id: PUBMED:31463689 Biomechanical evaluation of suture buttons versus cortical screws in the Latarjet-Bristow procedure: a fresh-frozen cadaver study. Introduction: A commonly used method of fixation of the transferred coracoid in the traditional Latarjet-Bristow procedure (open or arthroscopic) is by two bicortical screws. Although mechanically effective, screw fixation is also a major source of hardware and neurologic complications. This study aimed to compare the biomechanical performances of traditional metal screws and endobuttons as fixators of the Latarjet-Bristow procedure. Materials And Methods: Nine fresh-frozen cadaveric human scapulae with the conjoined tendon attached to the coracoid process were used for the Latarjet-Bristow procedure. The specimens were randomly assigned to one of two groups: fixation using two 4.5-mm cannulated partially threaded Latarjet-Bristow experience screws or fixation using a suture-button construct. Specimens were secured in a material testing machine and cyclically preconditioned from 2 to 10 N at 0.1 Hz for ten cycles. They were then pulled to failure at a normalized displacement rate of 400% of the measured gauge length per minute. The maximal load-to-failure, stiffness and stress were calculated using a custom script. The failure mechanism and site were recorded for each specimen. Results: There were no significant differences in the maximal load-to-failure or other biomechanical properties of the two fixation techniques, but the failure mechanisms were unique to each one. Four specimens fixated with screws underwent graft failures (fracture) through the proximal or distal drill hole. Five specimens fixated with endobuttons underwent failure due to glenoid bone fractures.
Conclusions: A single endobutton fixation appears to be biomechanically comparable to screw fixation in the Latarjet-Bristow procedure and provides a lower risk for graft fracture. Further studies with larger numbers of specimens are warranted to conclusively validate these findings. abstract_id: PUBMED:25590034 Comparison of Bristow procedure and Bankart arthroscopic method as the treatment of recurrent shoulder instability. Background: Anterior shoulder dislocation is the most common major joint dislocation. In patients with recurrent shoulder dislocation, surgical intervention is necessary. In this study, two methods of treatment, the arthroscopic Bankart method and the open Bristow procedure, were compared. Materials And Methods: This clinical trial was conducted in the orthopedic departments of Alzahra and Kashani hospitals of Isfahan during 2008-2011. Patients with recurrent anterior shoulder dislocation who were candidates for surgical treatment were randomly divided into two groups, one treated by the arthroscopic Bankart technique and the other by the Bristow method. All the patients were assessed after the surgery using the criteria of ROWE, CONSTANT, UCLA, and ASES. Data were analyzed by SPSS software. Results: Six patients (16.22%) had an unsatisfactory outcome by ROWE score (less than 75); of them, one had been treated with Bristow and five with Bankart (5.26 vs. 27.78). Nine patients (24.32%) had a satisfactory outcome, including six from the Bristow group and three treated by the Bankart technique (31.58 vs. 16.67). Finally, 22 patients (59.46%) showed great improvement with this score, including 12 from the Bristow and 10 from the Bankart groups (63.16 vs. 55.56). According to Fisher's exact test, there were no significant differences between the two groups (P = 0.15).
Conclusion: The two mentioned techniques did not differ significantly, although some parameters such as level of performance, pain intensity, use of analgesics, and range of internal rotation showed more improvement with the Bristow procedure. Therefore, if there is no contraindication to the Bristow procedure, this method is preferred. abstract_id: PUBMED:38030446 Bone formation of the coracoid bone graft after the Bristow procedure assessed using three-dimensional computed tomography. Background: The Bristow procedure has been shown to be a reliable method to prevent recurrent anterior shoulder instability by compensating for glenoid bone loss and producing the sling effect. The degree of postoperative morphological change in the coracoid bone graft is speculated to influence glenohumeral joint stability; however, the details of these changes after the Bristow procedure remain unknown. This study was performed to quantify the postoperative change in the coracoid bone graft volume as assessed by three-dimensional computed tomography (3D-CT). Methods: The Bristow procedure was performed on 17 shoulders in 17 patients from August 2018 to January 2020. All patients were men, and their mean age at surgery was 17.9 years. The mean follow-up duration was 21.4 months. Within the first week after the operation (Time 0) and at the final follow-up, 3D-CT was used to determine the total coracoid bone graft volume. The clinical outcomes were evaluated using the Japanese Orthopaedic Association (JOA) shoulder score, the University of California Los Angeles (UCLA) shoulder score, and the Western Ontario Shoulder Instability Index (WOSI). Results: The mean volume of the total coracoid bone graft was 1.26 ± 0.29 cm3 at Time 0 and 1.90 ± 0.36 cm3 at the final follow-up (p < 0.0001). The mean JOA score, UCLA score, and WOSI were significantly better at the final follow-up than preoperatively (p < 0.001).
No postoperative infection, neurovascular injury, or recurrent instability of the glenohumeral joint occurred. Conclusions: In the Bristow procedure, the volume of the total coracoid bone graft as shown by 3D-CT was significantly greater at the final follow-up than at Time 0, and bone formation of the coracoid bone graft was found after the Bristow procedure. abstract_id: PUBMED:36911776 Quality of life following an open Latarjet-Bristow procedure in a general population with recurrent anterior shoulder instability. Background: Despite the extensive literature on anterior glenohumeral instability, to date, there are no studies that report on the quality of life following a Latarjet-Bristow procedure with recurrent anterior shoulder instability. The purpose of this study was to evaluate the quality of life in patients who had a Latarjet-Bristow procedure. Methods: A single-center retrospective review with a prospective recall of patients who underwent a Latarjet-Bristow technique for recurrent anterior glenohumeral instability between January 2017 and March 2020. Outcome measures included health-related quality of life using the Short Form-36 (SF-36) questionnaire and patient-related outcome measures using the Western Ontario Shoulder Instability Index and Rowe scores. Results: A total of 66 patients were identified to participate in the study; 40 (60.6%) responded and were included in the analysis. There were nine females and 31 males, with a median age of 32 years (27.5-41 years). Three cases had bilateral anterior shoulder instability with a single joint operated on, and three patients were epileptic. The physical and mental component summaries of the SF-36 showed a better health-related quality of life in the general population. There was a strong and significant correlation between the SF-36 and the Western Ontario Shoulder Instability Index. However, the SF-36 and Rowe score showed a fair correlation.
Conclusion: The Latarjet-Bristow procedure improves the quality of life in a general population similarly to an athletic population. The number of episodes of dislocation before surgery and the delayed surgical intervention did not increase the recurrent anterior shoulder instability rates postoperatively. abstract_id: PUBMED:34087273 The Bristow-Latarjet procedure for revision of failed arthroscopic Bankart: a retrospective case series of 59 consecutive patients. Background: Recurrence of anterior instability after arthroscopic Bankart prevents return to sports and remains a surgical challenge. We aim to assess clinical and radiologic outcomes after a coracoid bone-block procedure performed either open or under arthroscopy for the management of failed arthroscopic Bankart. Patients And Methods: Fifty-nine consecutive patients with anterior instability recurrence after arthroscopic Bankart were revised with a Bristow or Latarjet procedure performed either open (25 cases) or under arthroscopy (34 cases). Patients were reviewed for clinical and radiologic examination at a minimum 2-year follow-up. Glenohumeral bony lesions were evaluated preoperatively with computed tomographic scans. Postoperative bone-block position, union, and postinstability arthritis were also evaluated. Results: The mean follow-up was 89 months (24-193). The epidemiologic analysis showed that patients with failed arthroscopic Bankart were young (age <23 years), 58 (98%) were practicing sports, with contact/forced overhead sports (53%), often in competition (53%), had hyperlaxity (71%), and most of them had glenohumeral bone loss (88%). Their mean preoperative Instability Severity Index Score was 5.4 ± 2.2 points. After revision with the Bristow-Latarjet procedure, 53 patients (91%) returned to sports, 37 (70%) to their previous sports activity, and 17 (46%) to their previous level. No patient suffered recurrent dislocation.
Four patients (7%) had recurrent subluxations, all after the open procedure; 8 patients (14.5%) had persistent anterior apprehension. A large and deep Hill-Sachs lesion was a risk factor for persistent anterior apprehension (P = .002) and for a lower level on return to sports (P = .04). Ninety-two percent of bone-blocks were positioned flush with the glenoid anterior rim, with 84% of bone union. At last follow-up, 5% of patients had severe postinstability osteoarthritis (Samilson 4). Conclusion: The Bristow-Latarjet procedure, performed either open or under arthroscopy, is an efficient procedure to restore shoulder stability and allow return to sports in patients with failed arthroscopic Bankart and glenoid bone loss. Patients with a large and deep Hill-Sachs lesion had more persistent anterior apprehension and a lower sports level. Answer: The biomechanical analysis of the modified Bristow procedure for anterior shoulder instability has been a subject of investigation to determine the necessity of the bone block. According to a study (PUBMED:24925701), after creating anterior instability with a Bankart lesion and 20% glenoid bone loss, both the modified Bristow procedure and a soft-tissue-only Bristow procedure (without the bone block) resulted in significantly decreased translation compared to the Bankart lesion state. There was no difference in translation between the two reconstructions, suggesting that translational stability can be restored in a 20% bone loss model without a bone block. This indicates the importance of the soft-tissue sling created by the conjoined tendon transfer in providing stability. However, another study (PUBMED:25065295) found that positioning the bone block so that it lies on the anterior aspect of the glenoid at the 4 o'clock position can decrease anterior displacement of the humeral head and inferior glenohumeral translation, particularly in certain arm positions.
This suggests that the bone block does play a role in stabilizing the shoulder in specific conditions. Further biomechanical evaluation (PUBMED:33711500) revealed that following the modified Bristow and Latarjet procedures, a new stress concentration appeared within the grafted coracoid-facing region of the humeral-head cartilage in both models. This could potentially contribute to the development of glenohumeral osteoarthritis following both procedures. The study also noted differences in stress distribution that may explain the different rates of postoperative complications associated with the two procedures. In summary, biomechanical analyses suggest that while the soft-tissue sling is important for restoring translational stability in cases of anterior shoulder instability with bone loss, the bone block may still contribute to stability in certain positions and may influence the biomechanical environment within the joint, potentially affecting long-term outcomes and the risk of complications.
Instruction: Pathologic Upstaging in Patients Undergoing Resection for Stage I Non-Small Cell Lung Cancer: Are There Modifiable Predictors? Abstracts: abstract_id: PUBMED:26277562 Pathologic Upstaging in Patients Undergoing Resection for Stage I Non-Small Cell Lung Cancer: Are There Modifiable Predictors? Background: A substantial proportion of patients with clinical stage I non-small cell lung cancer (NSCLC) have more advanced disease on final pathologic review. We studied potentially modifiable factors that may predict pathologic upstaging. Methods: Data of patients with clinical stage I NSCLC undergoing resection were obtained from the National Cancer Database. Univariate and multivariate analyses were performed to identify variables that predict upstaging. Results: From 1998 to 2010, 55,653 patients with clinical stage I NSCLC underwent resection; of these, 9,530 (17%) had more advanced disease on final pathologic review. Of the 9,530 upstaged patients, 27% had T3 or T4 tumors, 74% had positive lymph nodes (n > 0), and 4% were found to have metastatic disease (M1). Patients with larger tumors (38 mm vs 29 mm, p < 0.001) and a delay greater than 8 weeks from diagnosis to resection were more likely to be upstaged. Upstaged patients also had more lymph nodes examined (10.9 vs 8.2, p < 0.001) and were more likely to have positive resection margins (10% vs 2%, p < 0.001). Median survival was lower in upstaged patients (39 months vs 73 months). Predictors of upstaging in multivariate regression analysis included larger tumor size, delay in resection greater than 8 weeks, positive resection margins, and number of lymph nodes examined. There was a linear relationship between the number of lymph nodes examined and the odds of upstaging (1 to 3 nodes, odds ratio [OR] 2.01; >18 nodes OR 6.14). Conclusions: Pathologic upstaging is a common finding with implications for treatment and outcomes in clinical stage I NSCLC.
A thorough analysis of regional lymph nodes is critical to identify patients with more advanced disease. abstract_id: PUBMED:31179062 Prognosis of upstaged N1 and N2 disease after curative resection in patients with clinical N0 non-small cell lung cancer. Background: Nodal upstaging occasionally occurs after curative resection in clinical N0 non-small cell lung cancer (NSCLC). The purpose of this study was to evaluate the prognosis of clinical N0 NSCLC (T1-2, tumor size 5 cm or smaller) after upstaging to pathologic N1 or N2. Methods: From 2005 to 2015, 676 consecutive patients were diagnosed with clinical T1-2N0 NSCLC and underwent curative resection. Among these, tumors were upstaged to N1 in 46 patients and to N2 in 24 patients. We analyzed the prognosis of upstaged tumors. For comparison of prognosis between nodal upstaging groups and others in the same stage, patients with preoperative pathologically proven N1 (n=31) and N2 (n=55) NSCLC were included in the study. Results: A total of 70 patients (10.4%) had nodal upstaging after curative resection of clinical N0 NSCLC. Upstaging to N1 occurred in 46 patients and upstaging to N2 occurred in 24 patients. The 5-year disease-specific survival rate was not statistically different between the upstaged and non-upstaged N1/N2 groups in N1 disease (73.3% vs. 70.5%, P=0.247) or in N2 disease (58.9% vs. 50.7%, P=0.283). Multivariate analysis showed that nodal upstaging was not a significant prognostic factor in N1 or N2 NSCLC (hazard ratio =0.385, P=0.235; hazard ratio =0.677, P=0.458). Conclusions: Postoperative nodal upstaging from clinical T1-2N0 NSCLC was not a significant prognostic factor in the same stage. Therefore, surgical treatment of clinical T1-2N0 lung cancer diagnosed by imaging without preoperative pathologic lymph node staging can be a treatment option. 
abstract_id: PUBMED:31873770 Predictors of nodal upstaging in patients with cT1-3N0 non-small cell lung cancer (NSCLC): results from the Italian VATS Group Registry. Purpose: Accurate staging of early non-small cell lung cancer is fundamental for selecting the best treatment. The aim of this study was to identify risk factors for nodal upstaging after video-assisted thoracoscopic lobectomy for clinical T1-3N0 tumors. Methods: From 2014 to 2017, 3276 thoracoscopic lobectomies were recorded in the prospective database "Italian VATS Group". Linear and multiple logistic regression models were adapted to identify independent predictors of nodal upstaging and factors associated with progression in postoperative N status. Results: Nodal upstaging was found in 417 cases (12.7%), including 206 cases (6.2%) of N1-positive nodes, 81 cases of N2 nodes (2.4%), and 130 cases (4%) of involvement of both N1 + N2 nodes. A total of 241 (7.3%) patients had single-station nodal involvement, whereas 176 (5.3%) had multiple-station involvement. In the final regression model, the tumor grade, histology, pathologic T status, and > 12 resected nodes were independent predictors of nodal upstaging. Conclusions: The number of resected lymph nodes seems to predict nodal upstaging better than the type of intraoperative lymph node management. Other preoperative risk factors correspond to those for which the current guidelines of the European Society of Thoracic Surgery recommend more extensive preoperative mediastinal staging. abstract_id: PUBMED:33021465 Impact of waiting times on tumour growth and pathologic upstaging in patients with non-small cell lung cancer having lung resection. Objective: There are limited data on tumour growth or pathologic upstaging in patients with early-stage lung cancer awaiting lung resection. We aimed to evaluate whether waiting times on the current lung cancer treatment pathway were associated with significant tumour growth or pathologic upstaging.
Methods: This is a retrospective observational study of a consecutive series of patients with early-stage, non-small cell lung cancer who underwent resection for lung cancer. The difference between tumour size at diagnostic and preoperative computed tomography (CT) scans was calculated. Significant tumour growth was defined as a diameter increase of ⩾5 mm or ⩾20%. The time intervals between baseline and repeat CT (CT-int) and between baseline CT and date of surgery (Surg-int), as well as other potential clinical and pathologic prognostic factors, were compared between upstaged and nonupstaged patients. Results: There were 121 patients identified. Fifty-four patients (44.6%) had tumour growth ⩾5 mm and 27 patients (22%) had tumour growth ⩾20%. Median CT-int and Surg-int were 2.4 and 2.6 months, respectively. Forty-four patients (36%) were upstaged at surgery due to new lymph node involvement (n = 19), pleural invasion (n = 12), satellite nodules (n = 4), or increase in tumour diameter (n = 9). There was a marginal, but statistically insignificant, difference in median CT intervals in patients who had tumour growth <20% vs ⩾20% at 2.4 vs 2.6 months (p = 0.06). Conclusion: Current cancer pathway waiting times are not associated with significant tumour growth or pathologic upstaging in this cohort. abstract_id: PUBMED:37426150 Prognostic significance of extranodal extension in patients with pathologic N1 non-small cell lung cancer undergoing complete resection. Background: The prognostic significance of extranodal extension (ENE) remains unclear in patients with pathologic N1 (pN1) non-small-cell lung cancer (NSCLC) undergoing surgery. We evaluated the prognostic impact of ENE in patients with pN1 NSCLC. Methods: From 2004 to 2018, we retrospectively analyzed the data of 862 patients with pN1 NSCLC who underwent lobectomy or a more extensive resection (lobectomy, bilobectomy, pneumonectomy, sleeve lobectomy).
According to their resection status and the presence of ENE, patients were classified into R0 without ENE (pure R0) (n=645), R0 with ENE (R0-ENE) (n=130), and incomplete resection (R1/R2) groups (n=87). The primary and secondary endpoints were 5-year overall survival (OS) and recurrence-free survival (RFS), respectively. Results: The prognosis of the R0-ENE group was significantly worse than that of the pure R0 group for both OS (5-year rate: 51.6% vs. 65.4%, P=0.008) and RFS (44.4% vs. 53.0%, P=0.04). According to the recurrence pattern, a difference in RFS was found only for distant metastasis (55.2% vs. 65.0%, P=0.02). The multivariable Cox analysis revealed that the presence of ENE was a negative prognostic factor in patients who did not undergo adjuvant chemotherapy [hazard ratio (HR) = 1.58; 95% confidence interval (CI): 1.06-2.36; P=0.03], but it was not in those with adjuvant chemotherapy (HR = 1.20; 95% CI: 0.80-1.81; P=0.38). Conclusions: For patients with pN1 NSCLC, the presence of ENE was a negative prognostic factor for both OS and RFS, regardless of resection status. The negative prognostic effect of ENE was significantly associated with an increase in distant metastasis and was not observed in patients who underwent adjuvant chemotherapy.
Results: Of the 253 patients, 46 (18.2%) were upstaged after surgery, due to T-factor in 12 patients, N-factor in 32, M-factor in 2 and both T- and N-factors in 1. Among the clinical parameters, a higher level of serum CEA (p=0.0378) and larger tumor size (p=0.0276) were observed in the upstaged patients. Multivariable analysis revealed that tumor size and positive serum CEA were independently associated with pathologic upstaging. When tumor size was greater than 10 mm, patients with positive serum CEA (>2.0 ng/ml) showed a significantly higher incidence of pathologic upstaging (29.2%) than the rest (15.5%, p=0.0461). Conclusion: Clinically defined peripheral stage IA NSCLC should be carefully indicated for a lesser resection when positive serum CEA and/or tumors greater than 10 mm in size are observed. abstract_id: PUBMED:30505548 Lymph node upstaging for non-small cell lung cancer after uniportal video-assisted thoracoscopy. Background: Radical lymph node dissection (LND) plays a major role in the treatment of non-small cell lung cancer (NSCLC). This study presents the analysis of the results after uniportal video-assisted thoracoscopy (VATS) lymphadenectomy during anatomical lung resections for NSCLC, focusing on pathological nodal upstaging. Any possible risk factor affecting nodal upstaging was also investigated. Methods: The prospectively collected clinical data of 136 patients who underwent uniportal VATS anatomical lung resections, from June 2012 to September 2017, were reviewed. In particular, all details regarding the clinical and pathological node stage and any possible risk factor affecting nodal upstaging were analyzed. Results: The patient population consisted of 90 males and 46 females; their mean age was 67.42±10.64 years. The mean number of lymph nodes retrieved during uniportal VATS lymphadenectomy was 20.14±10.73 (7.27±5.90 and 12.60±7.96 in N1 and N2 stations, respectively). The incidence of nodal upstaging was 13.3% (18 cases).
In particular, there was N0-1 upstaging in 10 cases (7.4%), N1-2 upstaging in 3 (2.2%), and N0-2 upstaging in 4 (3%). The ROC analysis showed that the resection of 18 lymph nodes was the best predictor of a general upstaging with an AUC-ROC of 0.595, while the resection of 7 hilar lymph nodes was the best predictor of N1 upstaging (AUC-ROC: 0.554) and 11 mediastinal nodes was the best predictor of N2 upstaging (AUC-ROC: 0.671). The number of positive lymph nodes of stations 5-6 (OR: 2.035, 95% CI: 1.082-3.826, P=0.027) and stations 2-3-4 (OR: 6.198, 95% CI: 1.580-24.321, P=0.009) were confirmed to be the only independent risk factors for N2 upstaging by multivariate analysis. Conclusions: According to our experience, uniportal VATS allows a safe and effective radical lymphadenectomy, with a satisfactory pathological nodal upstaging, comparable to other minimally invasive techniques. abstract_id: PUBMED:22560265 Prognostic predictors in non-small cell lung cancer patients undergoing intentional segmentectomy. Background: Despite recent studies reporting on the results of prospective intentional sublobar resection for patients with small non-small cell lung cancer (NSCLC), few studies have investigated predictors for prognosis or recurrence exclusively in patients undergoing intentional sublobar resection. Methods: We retrospectively reviewed 223 patients with small (2 cm or less) peripheral NSCLC who underwent intentional segmentectomy at the Niigata Cancer Center Hospital between 1992 and 2009. The significant demographic, clinical, and pathologic factors identified with the log-rank test in univariate analyses were analyzed with the Cox proportional hazards regression model to examine independent predictors for prognosis and recurrence in multivariate analysis. Results: The 5-year and 10-year overall survival rates were 89.6% and 81.0%, respectively, and the 5-year and 10-year recurrence-free probabilities were 91.1% and 91.1%, respectively.
Eight patients had locoregional recurrence, and 12 had distant recurrence. Multivariate analyses revealed that age more than 70 years (hazard ratio [HR] 2.389), male sex (HR 2.750), more than 75% consolidation/tumor ratio on high-resolution computed tomography (HR 2.750), and lymphatic permeation (HR 5.618) were independent poor prognostic factors, and lymphatic permeation (HR 16.257) was an independent predictor for recurrence. Conclusions: The factors related to upstaging on pathologic diagnosis were not identified as independent predictors; therefore, the current patient selection criterion seems reasonable. If lymphatic permeation is present on pathologic findings, careful follow-up is recommended. The predictors identified in this study will support assessment and interpretation of the results of ongoing prospective randomized trials of lobar versus sublobar resection in patients with small peripheral NSCLC. abstract_id: PUBMED:29943062 Nodal upstaging evaluation in NSCLC patients treated by robotic lobectomy. Background: Open pulmonary resection is considered the gold standard treatment of early-stage non-small cell lung cancer (NSCLC). However, in the last decades, the use of minimally invasive techniques has given promising results. Survival in lung cancer, after surgery, depends on the number of pathological nodes (pN), thus lymph nodal upstaging can be considered a surrogate for surgical quality of the procedure. Several studies have demonstrated a lower rate of upstaging in video-assisted thoracic surgery than in open surgery, suggesting an approach-related difference in lymphadenectomy. Features of the robotic technique could allow a lymph node dissection similar to open surgery. The aim of the study is to compare nodal upstaging between thoracotomy and robotic approaches to evaluate the oncologic radicality.
Methods: Between January 2013 and December 2016, 212 consecutive cN0 NSCLC patients underwent lobectomy and lymphadenectomy (N1 + N2 stations) by either thoracotomy (Open Group) or robotic surgery (Robotic Group). Results: Lobectomy and lymphadenectomy were performed in 106 cN0-cN1 NSCLC patients by robotic surgery and in 106 cN0-cN1 NSCLC patients by open surgery. A mean of 14.42 ± 6.99 lymph nodes was removed in the Robotic Group (RG) and a mean of 14.32 ± 7.34 nodes in the Open Group (OG). Nodal upstaging was observed in 22 (20.75%) RG patients and in 19 OG (17.92%) patients. Conclusions: Robotic lobectomy for clinical N0-N1 NSCLC appears to be equivalent to thoracotomy in terms of efficacy of lymph node dissection and nodal upstaging. Given that nodal upstaging is a surrogate for surgical quality, we can consider robotic lobectomy an appropriate procedure which ensures results similar to the open approach. abstract_id: PUBMED:29078639 Nodal upstaging: effects of instrumentation and three-dimensional view in clinical stage I lung cancer. Nodal upstaging after surgical intervention for non-small cell lung cancer (NSCLC) is defined as the presence of unsuspected pathologic hilar (pN1) or mediastinal (pN2) disease detected during the final histopathologic evaluation of surgical specimens. The prevalence of pathologic nodal upstaging is used as a quality measure for the definition of the completeness of the nodal dissection. Risk factors for nodal upstaging may be patient-related (history of tuberculosis, rheumatoid arthritis, and diabetes mellitus), or tumor-related (central tumor, higher T stage, higher SUVmax value, or adenocarcinoma). In fact, a key theoretical concern with minimally invasive resections is the lymph node dissection. Studies suggest that an expert video-assisted thoracoscopic surgery (VATS) surgeon could perform a lymph node dissection similar to that done in open surgery.
Robotic surgeons may replicate the results of open lymph node dissection, possibly because of the instrumental superiority provided by the more advanced technology. Answer: Pathologic upstaging in patients undergoing resection for stage I non-small cell lung cancer (NSCLC) refers to the discovery of more advanced disease than initially diagnosed after the final pathologic review of surgical specimens. Several studies have identified potential predictors for pathologic upstaging, some of which may be modifiable. One study found that patients with larger tumors and those experiencing a delay greater than 8 weeks from diagnosis to resection were more likely to be upstaged. Upstaged patients also had more lymph nodes examined and were more likely to have positive resection margins. Predictors of upstaging included larger tumor size, delay in resection greater than 8 weeks, positive resection margins, and the number of lymph nodes examined (PUBMED:26277562). Another study indicated that the number of resected lymph nodes could predict nodal upstaging better than the type of intraoperative lymph node management. Tumor grade, histology, pathologic T status, and more than 12 resected nodes were independent predictors of nodal upstaging (PUBMED:31873770). A retrospective observational study suggested that current cancer pathway waiting times were not associated with significant tumor growth or pathologic upstaging in their cohort (PUBMED:33021465). In terms of prognostic significance, one study found that extranodal extension (ENE) was a negative prognostic factor for both overall survival and recurrence-free survival in patients with pathologic N1 NSCLC, especially in those who did not undergo adjuvant chemotherapy (PUBMED:37426150). Clinical features such as higher levels of serum CEA and larger tumor size were also associated with pathologic upstaging in clinical stage IA NSCLC.
Tumor size greater than 10 mm and positive serum CEA were independently associated with pathologic upstaging (PUBMED:17267071). Lastly, the numbers of positive lymph nodes in stations 5-6 and stations 2-3-4 were confirmed to be the only independent risk factors for N2 upstaging in a study analyzing uniportal video-assisted thoracoscopy (VATS) lymphadenectomy (PUBMED:30505548). In summary, modifiable predictors of pathologic upstaging in stage I NSCLC patients undergoing resection may include the timing of surgery post-diagnosis, the extent of lymph node examination, and potentially the surgical approach used. Factors such as tumor size, serum CEA levels, and the number of resected lymph nodes are also predictive of upstaging. While some of these factors, like tumor size and serum CEA levels, are inherent to the patient's condition and not modifiable, the timing of surgery and the thoroughness of lymph node examination during surgery are potentially modifiable factors that can influence the likelihood of pathologic upstaging. Therefore, efforts to minimize delays in surgery and to ensure a comprehensive lymph node examination during resection may help reduce the incidence of pathologic upstaging in patients with stage I NSCLC.
Instruction: Biologic grafts for cystocele repair: does concomitant midline fascial plication improve surgical outcomes? Abstracts: abstract_id: PUBMED:21484371 Biologic grafts for cystocele repair: does concomitant midline fascial plication improve surgical outcomes? Introduction And Hypothesis: This study aims to compare the efficacy of porcine graft-augmented anterior prolapse repair with and without underlying pubocervical fascia plication. Methods: Women with symptomatic anterior prolapse who underwent transvaginal repair, using biologic graft via transobturator approach, were retrospectively studied. An initial cohort had the graft placed without fascial plication. A second cohort had graft placement as an overlay to fascial plication. Results: Between March 2005 and September 2008, 65 subjects underwent anterior repair with biologic graft; 35 without fascial plication and 30 as an overlay to plication. At 6 months follow-up, anatomic recurrence (Ba ≥ -1) was significantly higher in the non-plicated group (18/35, 51%) compared with the plicated group (2/30, 7%), p < 0.01. Five (5/35, 14%) in the non-plicated group compared with none (0/30) in the plicated group underwent further treatment (p = 0.06). Conclusions: When using a non-crosslinked biologic graft for repair of anterior vaginal prolapse, we recommend the addition of concomitant midline fascial plication to enhance anatomic outcome. abstract_id: PUBMED:22528118 Evaluation of current biologic meshes in pelvic organ prolapse repair. Pelvic organ prolapse (POP) is a common disorder estimated to affect 15%-30% of women over the age of 50 years. About 11% of women will require surgery by the age of 80 years and there is an estimated 30% rate of prolapse recurrence. In an attempt to improve surgical outcomes, biologic grafts and synthetic meshes have been implemented in the repair of POP. Biologic grafts have been used with the hope of avoiding complications associated with synthetic mesh.
This review presents the existing data surrounding the use of biologic grafts in the surgical repair of anterior compartment, vaginal vault, and posterior compartment prolapse. abstract_id: PUBMED:19097515 Contemporary approaches to cystocele repair: a survey of AUGS members. Introduction: To assess practice patterns in the management of cystoceles among American Urogynecologic Society (AUGS) members. Study Design: A 22-question survey regarding various cystocele repair techniques was delivered to AUGS members via e-mail. Results: Of 963 members surveyed, there were 315 respondents. For the treatment of primary cystocele, 77% of respondents used midline vaginal plication; 40% used this approach for management of recurrent cystocele. At the time of abdominal sacrocolpopexy, most considered the anterior vaginal graft sufficient to address cystocele. At the time of uterosacral suspension, midline plication was the most common approach to address cystocele. The most commonly used graft material was synthetic mesh (67%). Almost half of respondents used minimally invasive transobturator devices for cystocele repair. Conclusion: The wide variety of surgical approaches likely reflects the absence of a clearly defined best practice for cystocele repair, underscoring the need for rigorous surgical trials. abstract_id: PUBMED:22082789 Randomized controlled trial of cystocele plication risks: a pilot study. Objective: We wished to determine the feasibility of a single-centre randomized controlled trial to investigate whether avoiding cystocele plication in women undergoing transvaginal repair of cystocele decreases the need for catheterization beyond the second postoperative day. Methods: Patients undergoing transvaginal repair of cystocele were randomly assigned to either have or not have plication sutures during their procedure.
We assessed the success of recruitment, physician acceptability, and adherence to protocol, and we also assessed the need for catheterization beyond the second postoperative day. Results: Twenty-two women were randomized (a recruitment rate of 85%), and participating surgeons did not feel that one surgical technique was superior to the other. There were no protocol violations after randomization. No cases of postoperative voiding dysfunction were identified. Conclusion: A multicentre randomized controlled trial is required to determine the short-term and long-term risks and benefits of avoiding plication sutures in women undergoing transvaginal repair of cystocele. If patients and physicians support the study protocol, conducting such a trial is feasible. abstract_id: PUBMED:16528454 Cystocele - vaginal approach to repairing paravaginal fascial defects. The objective of this study is to determine the efficacy and safety of a vaginal approach to repairing paravaginal defects in patients with symptomatic cystocele. This was a retrospective study of 66 women with a diagnosis of symptomatic cystocele grade 2 to 4, referred to our unit between January 2002 and March 2005. A clinical evaluation was carried out using the Baden-Walker classification before and after the surgery. The same surgical team performed every surgery. The repair of paravaginal fascial defects was carried out through a vaginal approach, exposing the arcus tendineus. The paravaginal fascial defects were corrected through suspension of vesicovaginal fascia to the arcus tendineus with nonreabsorbable Ethibond 0 sutures. Women were seen for follow-up at 3, 6, and 12 months. The presence of well-demarcated vaginal lateral sulci at grade 0, firmly apposed to the lateral pelvic sidewalls and no anterior relaxation with Valsalva maneuver, were used as criteria for cure. Grade 2 cystocele was diagnosed preoperatively in most women. The mean duration of complaints due to prolapse was 64.6 months.
There were no major intraoperative complications. Mean time of inpatient stay was 4.9 days. The cure rate at 12 months was 91.6%. There were five cases of recurrence of cystocele 6 months after surgery. Surgical repair of symptomatic cystocele through a paravaginal approach is a safe and efficacious technique. Vaginal approach to repair paravaginal fascia defects had a low postoperative morbidity and high cure rate at 12 months (91.6%). abstract_id: PUBMED:28914338 The UK National Prolapse Survey: 10 years on. Introduction And Hypothesis: To assess trends in the surgical management of pelvic organ prolapse (POP) amongst UK practitioners and changes in practice since a previous similar survey. Methods: An online questionnaire survey (Typeform Pro) was emailed to British Society of Urogynaecology (BSUG) members. They included urogynaecologists working in tertiary centres, gynaecologists with a designated special interest in urogynaecology and general gynaecologists. The questionnaire included case scenarios encompassing contentious issues in the surgical management of POP and was a revised version of the questionnaire used in the previous surveys. The revised questionnaire included additional questions relating to the use of vaginal mesh and laparoscopic urogynaecology procedures. Results: Of 516 BSUG members emailed, 212 provided completed responses. For anterior vaginal wall prolapse the procedure of choice was anterior colporrhaphy (92% of respondents). For uterovaginal prolapse the procedure of choice was still vaginal hysterectomy and repair (75%). For posterior vaginal wall prolapse the procedure of choice was posterior colporrhaphy with midline fascial plication (97%). For vault prolapse the procedure of choice was sacrocolpopexy (54%) followed by vaginal wall repair and sacrospinous fixation (41%). The laparoscopic route was preferred for sacrocolpopexy (62% versus 38% for the open procedure).
For primary prolapse, vaginal mesh was used by only 1% of respondents in the anterior compartment and by 3% in the posterior compartment. Conclusion: Basic trends in the use of native tissue prolapse surgery remain unchanged. There has been a significant decrease in the use of vaginal mesh for both primary and recurrent prolapse, with increasing use of laparoscopic procedures for prolapse. abstract_id: PUBMED:31904661 One-year outcomes of a suture-less laparoscopic sacral hysteropexy using polypropylene Y-mesh grafts and fibrin sealant spray: A prospective comparative study. Background: Laparoscopic sacral hysteropexy (SH) is a minimally invasive and effective treatment for uterovaginal prolapse in women who wish to retain the uterus. However, this procedure is limited by a steep learning curve and a long operating time. In this study, we aim to evaluate the surgical outcomes of a modified laparoscopic SH procedure using innovative methods of vaginal mesh attachment, which we hypothesized to be equally effective and time-saving. Methods: This was a prospective, parallel, comparative study conducted at a tertiary referral hospital. A total of 34 women with advanced (POPQ [Pelvic Organ Prolapse Quantification] stages ≧ 2) uterine prolapse, who underwent a laparoscopic SH procedure using lightweight polypropylene Y-mesh grafts (ALYTE), were studied. In half (n = 17) of the cases, fibrin sealant spray (TISSEEL) was applied to the meshes fixed at the anterior and posterior vaginal wall with fewer (six vs at least ten) sutures. A detailed comparison of one-year outcomes between groups was performed. Results: Patient characteristics and perioperative results were comparable between groups with the exception of a significantly shorter total operating time (247.0 vs 292.9 minutes, p = 0.04) noted in the fibrin group. At 1 year, anatomic success (POPQ stage ≦1) rates (76.5% vs 76.5%) were not different between groups. 
There were eight patients, with four in each group, who had surgical failure. Notably, most (7/8; 87.5%) surgical failures were at the anterior compartment (i.e., recurrent cystocele). No vaginal mesh extrusions were noted. After statistical analysis, we found "cystocele as the dominant prolapse before operation" was a significant predisposing factor for prolapse recurrence (p = 0.019; odds ratio = 8.04). Conclusion: The modified laparoscopic SH procedure using Y-mesh grafts and fibrin sealant spray with fewer vaginal sutures was as effective as conventional methods but saved time. Laparoscopic SH using Y-mesh grafts might not be as effective in repairing a concomitant dominant cystocele. abstract_id: PUBMED:33081090 Cystocele Repair by a Modified Surgical Technique of Bilateral Pubococcygeus Plication: Long-Term Surgical and Functional Results. Here we describe our modified surgical technique of bilateral pubococcygeus plication (BPCP) for cystocele repair and assess its safety and long-term outcomes. This is a prospective study of 147 consecutive women who underwent BPCP for cystocele between January 2010 and January 2018. The inclusion criterion was naïve women with symptomatic cystocele ≥ POP-Q 2nd stage. Exclusion criteria: stress urinary incontinence (SUI), urgency urinary incontinence, other associated vaginal wall prolapses ≥ stage 2, neurological diseases, previous SUI surgeries, and previous radiation/surgery of the pelvic area. BPCP was performed by obtaining the medialization of the pubococcygeus muscle fibers of the right and left sides. Statistical analysis was performed. Objective cure was POP-Q < 2nd stage. Subjective cure and functional outcomes were evaluated by validated questionnaires. Patient's satisfaction was assessed by a Likert-type scale. Mean operative time was 64 min. At a mean follow-up of 82.4 months, objective and subjective success rates were 89.8% and 92.2%, respectively. De novo urgency was 3.2%.
Surgery did not alter sexual function. Complications occurred in 4.8% of patients and comprised wrong dissection plane, hematoma, and pain lasting between 24 and 72 h. BPCP for correction of cystocele is safe and effective, with limited risk of complication and good long-term results. abstract_id: PUBMED:31692921 Current surgical management of pelvic organ prolapse: Strategies for the improvement of surgical outcomes. There are a variety of surgical management strategies to help surgeons repair pelvic organ prolapse (POP). Surgical treatment for POP includes native tissue repair, augmentation with mesh, and minimally invasive surgeries. Currently, laparoscopic or robotic techniques for POP repair are increasing in popularity and continuing to evolve. The aim of this review is to present an up-to-date review of surgical techniques used for POP repair and to discuss ways to optimize surgical outcomes. abstract_id: PUBMED:17880840 Abdominal sacral colpopexy: surgical pearls and outcomes. Vaginal vault prolapse is a challenging form of pelvic organ prolapse that occurs in combination with cystocele, rectocele, or enterocele in nearly 75% of affected patients. Clinical presentation will vary depending on the associated defects. Any successful therapy for vaginal vault prolapse will depend on a thorough evaluation of the vaginal compartments and concomitant lower urinary tract function. Surgical correction of vaginal vault prolapse can be achieved through a variety of vaginal or abdominal approaches. This review focuses on the abdominal approach for vaginal vault prolapse surgery. We review outcomes of abdominal sacral colpopexy (ASC) and available comparisons to vaginal vault suspension. We address the role of laparoscopy and robotics in ASC and examine the outcomes of such procedures. We also discuss available literature on the management of the lower urinary tract in combination with ASC.
Answer: Yes, concomitant midline fascial plication appears to improve surgical outcomes when using biologic grafts for cystocele repair. A study comparing the efficacy of porcine graft-augmented anterior prolapse repair with and without underlying pubocervical fascia plication found that at 6 months follow-up, anatomic recurrence was significantly higher in the group without fascial plication (51%) compared to the group with plication (7%) (PUBMED:21484371). This suggests that the addition of concomitant midline fascial plication enhances anatomic outcomes when using a non-crosslinked biologic graft for the repair of anterior vaginal prolapse.
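The recurrence comparison in the answer above (18/35 in the non-plicated group vs 2/30 in the plicated group, reported as p < 0.01) is a 2×2 contingency result. As a generic illustration of how such a p-value can be computed (the abstract does not state which test the authors used), here is a minimal two-sided Fisher exact test in pure Python; the table values below simply restate the counts quoted above.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables (with the same
    margins) that are no more likely than the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Probability of a table whose top-left cell equals x.
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - (n - row1))   # smallest feasible top-left cell
    hi = min(row1, col1)             # largest feasible top-left cell
    # Small tolerance guards against float round-off when comparing ties.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

if __name__ == "__main__":
    # Non-plicated: 18 recurrences / 17 none; plicated: 2 / 28.
    print(fisher_exact_two_sided(18, 17, 2, 28))
```

For real analyses, a vetted routine such as `scipy.stats.fisher_exact` is preferable; this sketch only makes the arithmetic behind the quoted comparison concrete.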
Instruction: Can gene expression profiling predict survival for patients with squamous cell carcinoma of the lung? Abstracts: abstract_id: PUBMED:15579197 Can gene expression profiling predict survival for patients with squamous cell carcinoma of the lung? Background: Lung cancer remains the leading cause of cancer death worldwide. Patients with similar lung cancer may experience quite different clinical outcomes. Reliable molecular prognostic markers are needed to characterize the disparity. In order to identify the genes responsible for the aggressiveness of squamous cell carcinoma of the lung, we applied DNA microarray technology to a case-control study. Fifteen patients with surgically treated stage I squamous cell lung cancer were selected. Ten were one-to-one matched on tumour size and grade, age, gender, and smoking status; five died of lung cancer recurrence within 24 months (high-aggressive group), and five survived more than 54 months after surgery (low-aggressive group). Five additional tissues were included as test samples. Unsupervised and supervised approaches were used to explore the relationship among samples and identify differentially expressed genes. We also evaluated the gene markers' accuracy in segregating samples to their respective group. Functional gene networks for the significant genes were retrieved, and their association with survival was tested. Results: Unsupervised clustering did not group tumours based on survival experience. At p < 0.05, 294 and 246 differentially expressed genes for matched and unmatched analyses, respectively, were identified between the low and high aggressive groups. Linear discriminant analysis was performed on all samples using the 27 top unique genes, and the results showed an overall accuracy rate of 80%. Tests on the association of 24 gene networks with study outcome showed that 7 were highly correlated with the survival time of the lung cancer patients.
Conclusion: The overall gene expression pattern between the high and low aggressive squamous cell carcinomas of the lung did not differ significantly with the control of confounding factors. A small subset of genes or genes in specific pathways may be responsible for the aggressive nature of a tumour and could potentially serve as panels of prognostic markers for stage I squamous cell lung cancer. abstract_id: PUBMED:17082175 Expression profiling defines a recurrence signature in lung squamous cell carcinoma. Lung cancer remains the leading cause of cancer death worldwide. Overall 5-year survival is approximately 10-15%, and despite curative-intent surgery, treatment failure is primarily due to recurrent disease. Conventional prognostic markers are unable to determine which patients with completely resected disease within each stage group are likely to relapse. To identify a gene signature associated with recurrent squamous cell carcinoma (SCC) of lung, we analyzed primary tumor gene expression for a total of 51 SCCs (Stages I-III) on 22,323-element microarrays, comparing expression profiles for individuals who remained disease-free for a minimum of 36 months with those from individuals whose disease recurred within 18 months of complete resection. Cox proportional hazards modeling with leave-one-out cross-validation identified a 71-gene signature capable of predicting the likelihood of tumor recurrence and a 79-gene signature predictive for cancer-related death. These two signatures were pooled to generate a 111-gene signature which achieved an overall predictive accuracy for disease recurrence of 72% (77% sensitivity, 67% specificity) in an independent set of 58 SCCs (Stages I-III). This signature also predicted differences in survival [log-rank P=0.0008; hazard ratio (HR), 3.8; 95% confidence interval (CI), 1.6-8.7], and was superior to conventional prognostic markers such as TNM stage or N stage in predicting patient outcome.
Genome-wide profiling has revealed a distinct gene-expression profile for recurrent lung SCC which may be clinically useful as a prognostic tool. abstract_id: PUBMED:18676750 Three-gene expression signature predicts survival in early-stage squamous cell carcinoma of the lung. Purpose: Adjuvant treatment may improve survival in early-stage squamous cell carcinoma (SCC) of the lung; however, the absolute gain is modest and mainly limited to stage II-IIIA. Current staging methods are imprecise indications of prognosis, but high-risk patients can be identified by gene expression profiling and considered for adjuvant therapy. Experimental Design: The expression of 29 genes was assessed by reverse transcriptase quantitative PCR in frozen primary tumor specimens obtained from 66 SCC patients who had undergone surgical resection. Expression values were dichotomized using the median as a cutoff value. We used a risk score to develop a gene expression model for the prediction of survival. Results: The univariate analysis of gene expression in the training cohort identified 10 genes with significant prognostic value: CSF1, EGFR, CA IX, PH4, KIAA0974, ANLN, VEGFC, NTRK1, FN1, and INR1. In the multivariate Cox model, CSF1 (hazard ratio, 3.5; P = 0.005), EGFR (hazard ratio, 2.7; P = 0.02), CA IX (hazard ratio, 0.2; P < 0.0001), and tumor size >4 cm (hazard ratio, 2.7; P = 0.02) emerged as significant markers for survival. The high prognostic value of a risk score based on the expression of the three genes (CSF1, EGFR, and CA IX) was positively validated in a separate cohort of 26 patients in an independent laboratory (P = 0.05). Conclusions: The three-gene signature is strongly associated with prognosis in early-stage SCC. Positive independent validation suggests its suitability for selecting SCC patients with an increased risk of death who might benefit from adjuvant treatment.
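The risk-score construction described in the abstract above (each gene's expression dichotomized at the cohort median, then combined across CSF1, EGFR, and CA IX) can be sketched as follows. The weights and toy expression values here are illustrative assumptions only, not the published model coefficients; CA IX (written `CA9` below) is given a negative weight simply because its reported hazard ratio of 0.2 points in the protective direction.

```python
from statistics import median

# Hypothetical weights for the illustration; not the published coefficients.
WEIGHTS = {"CSF1": 1.0, "EGFR": 1.0, "CA9": -1.0}

def dichotomize(values):
    """Map each expression value to 1 if above the cohort median, else 0."""
    cutoff = median(values)
    return [1 if v > cutoff else 0 for v in values]

def risk_scores(expression):
    """expression: dict mapping gene name -> per-patient expression values."""
    genes = list(expression)
    binarized = {g: dichotomize(expression[g]) for g in genes}
    n_patients = len(next(iter(expression.values())))
    return [
        sum(WEIGHTS[g] * binarized[g][i] for g in genes)
        for i in range(n_patients)
    ]

if __name__ == "__main__":
    # Toy cohort of five patients (arbitrary, invented expression units).
    cohort = {
        "CSF1": [2.1, 8.4, 5.0, 9.9, 1.2],
        "EGFR": [3.3, 7.1, 6.8, 2.0, 4.4],
        "CA9":  [6.5, 1.1, 8.0, 2.2, 7.3],
    }
    for patient, score in enumerate(risk_scores(cohort)):
        print(f"patient {patient}: risk score {score:+.1f}")
```

Patients would then be split into high- and low-risk groups by their score; the actual signature in the abstract was fitted and validated with a Cox model, which this sketch does not attempt to reproduce.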
abstract_id: PUBMED:20949446 Gene expression profiling in head and neck squamous cell carcinoma: Clinical perspectives. Background: To date, more than 60 gene expression profiling (GEP) studies have been published in the field of head and neck squamous cell carcinoma (HNSCC) with variable objectives, methods, and results. Methods: The purpose of this study was to present a state-of-the-art review of GEP in HNSCC focusing on the current advances and perspectives for clinical applications. Results: Gene expression signatures have been developed to identify screening and diagnostic molecular markers, to improve tumor staging (cervical lymph node and distant metastasis prediction), to differentiate lung metastasis of HNSCC from primary lung squamous cell carcinoma, to predict tumor response to chemoradiotherapy, and to provide outcome predictors. Conclusion: Some transcriptional signatures that could improve HNSCC management have been identified, but further analyses are required to properly validate and to precisely evaluate their clinical relevance. After an exploratory phase, the completion of large scale projects with stringent methodology is now necessary to transfer GEP from bench to bedside. abstract_id: PUBMED:16007138 Two subclasses of lung squamous cell carcinoma with different gene expression profiles and prognosis identified by hierarchical clustering and non-negative matrix factorization. Current clinical and histopathological criteria used to define lung squamous cell carcinomas (SCCs) are insufficient to predict clinical outcome. To make a clinically useful classification by gene expression profiling, we used a 40 386 element cDNA microarray to analyse 48 SCC, nine adenocarcinoma, and 30 normal lung samples. Initial analysis by hierarchical clustering (HC) allowed division of SCCs into two distinct subclasses. 
An additional independent round of HC induced a similar partition, and consensus clustering with the non-negative matrix factorization approach indicated the robustness of this classification. Kaplan-Meier analysis with the log-rank test pointed to a nonsignificant difference in survival (P = 0.071), but the likelihood of survival to 6 years was significantly different between the two groups (40.5 vs 81.8%, P = 0.014, Z-test). Biological process categories characteristic for each subclass were identified statistically and upregulation of cell-proliferation-related genes was evident in the subclass with poor prognosis. In the subclass with better survival, genes involved in differentiated intracellular functions, such as the MAPKKK cascade, ceramide metabolism, or regulation of transcription, were upregulated. This work represents an important step toward the identification of a clinically useful classification for lung SCC. abstract_id: PUBMED:28869900 Integration of RNA-Seq and RPPA data for survival time prediction in cancer patients. Integration of several types of patient data in a computational framework can accelerate the identification of more reliable biomarkers, especially for prognostic purposes. This study aims to identify biomarkers that can successfully predict the potential survival time of a cancer patient by integrating the transcriptomic (RNA-Seq), proteomic (RPPA), and protein-protein interaction (PPI) data. The proposed method, RPBioNet, employs a random walk-based algorithm that works on a PPI network to identify a limited number of protein biomarkers. Later, the method uses gene expression measurements of the selected biomarkers to train a classifier for the survival time prediction of patients. RPBioNet was applied to classify kidney renal clear cell carcinoma (KIRC), glioblastoma multiforme (GBM), and lung squamous cell carcinoma (LUSC) patients based on their survival time classes (long- or short-term).
The RPBioNet method correctly identified the survival time classes of patients with between 66% and 78% average accuracy for three data sets. RPBioNet operates with only 20 to 50 biomarkers and can achieve on average 6% higher accuracy compared to the closest alternative method, which uses only RNA-Seq data in the biomarker selection. Further analysis of the most predictive biomarkers highlighted genes that are common for both cancer types, as they may be driver proteins responsible for cancer progression. The novelty of this study is the integration of a PPI network with mRNA and protein expression data to identify more accurate prognostic biomarkers that can be used for clinical purposes in the future. abstract_id: PUBMED:22011649 Gene copy number aberrations are associated with survival in histologic subgroups of non-small cell lung cancer. Introduction: Non-small cell lung cancer (NSCLC) is characterized by a multitude of genetic aberrations with unknown clinical impact. In this study, we aimed to identify gene copy number changes that correlate with clinical outcome in NSCLC. To maximize the chance to identify clinically relevant events, we applied a strategy involving two prognostically extreme patient groups. Methods: Short-term (<20 month; n = 53) and long-term survivors (>58 month; n = 47) were selected from a clinically well-characterized NSCLC patient cohort with available fresh frozen tumor specimens. The samples were analyzed using high-resolution single-nucleotide polymorphism array technology to assess gene copy number variations and array-based gene expression profiling. The molecular data were combined with information on clinical parameters. Results: Genetic aberrations were strongly associated with tumor histology. In adenocarcinoma (n = 50), gene copy number gains on chromosome 8q21-q24.3 (177 genes) were more frequent in long-term than in short-term survivors.
In squamous cell carcinoma (n = 28), gains on chromosome 14q23.1-24.3 (133 genes) were associated with shorter survival, whereas losses in a neighboring region, 14q31.1-32.33 (110 genes), correlated with favorable outcome. In accordance with copy number gains and losses, messenger RNA expression levels of corresponding genes were increased or decreased, respectively. Conclusion: Comprehensive tumor profiling permits the integration of genomic, histologic, and clinical data. We identified gene copy number gains and losses, with corresponding changes in messenger RNA levels that were associated with prognosis in adenocarcinoma and squamous cell carcinoma of the lung. abstract_id: PUBMED:33084541 PLEK2 Gene Upregulation Might Independently Predict Shorter Progression-Free Survival in Lung Adenocarcinoma. Objective: This study aimed to explore PLEK2 expression profile, its prognostic value, and the potential genomic alterations associated with its dysregulation in lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC). Materials And Methods: Data from The Cancer Genome Atlas (TCGA), The Genotype-Tissue Expression (GTEx), and Kaplan-Meier plotter were used in combination for bioinformatic analysis. Results: PLEK2 mRNA was significantly upregulated in both LUAD and LUSC compared with their respective normal controls. PLEK2 upregulation showed independent prognostic value in progression-free survival (PFS) (HR: 1.169, 95%CI: 1.033 -1.322, p = 0.014). PLEK2 mRNA expression was positively correlated with invasion, cell cycle, DNA damage, and DNA repair of LUAD cells at the single-cell level. Genomic analysis showed that gene-level amplification might not directly lead to increased PLEK2 expression. Methylation profile analysis found 4 CpG sites (cg12199376, cg14437634, cg17641252, and cg06724236) had at least a weakly negative correlation with PLEK2 expression, among which cg12199376, cg14437634 and cg17641252 locate around the first exon of the gene. 
Conclusions: Increased PLEK2 expression might be a specific prognostic biomarker of poor PFS in LUAD patients. Its expression had significant positive correlations with invasion, cell cycle, DNA damage, and DNA repair of LUAD cells at the single-cell level. Promoter hypomethylation might be a potential mechanism leading to its upregulation. abstract_id: PUBMED:34044809 A glycolysis-based three-gene signature predicts survival in patients with lung squamous cell carcinoma. Background: Lung cancer is one of the most lethal and most prevalent malignant tumors worldwide, and lung squamous cell carcinoma (LUSC) is one of the major histological subtypes. Although numerous biomarkers have been found to be associated with prognosis in LUSC, the prediction effect of a single gene biomarker is insufficient, especially for glycolysis-related genes. Therefore, we aimed to develop a novel glycolysis-related gene signature to predict survival in patients with LUSC. Methods: The mRNA expression files and LUSC clinical information were obtained from The Cancer Genome Atlas (TCGA) dataset. Results: Based on Gene Set Enrichment Analysis (GSEA), we found 5 glycolysis-related gene sets that were significantly enriched in LUSC tissues. Univariate and multivariate Cox proportional regression models were performed to choose prognostic-related gene signatures. Based on a Cox proportional regression model, a risk score for a three-gene signature (HKDC1, ALDH7A1, and MDH1) was established to divide patients into high-risk and low-risk subgroups. Multivariate Cox regression analysis indicated that the risk score for this three-gene signature can be used as an independent prognostic indicator in LUSC. Additionally, based on the cBioPortal database, the rate of genomic alterations in the HKDC1, ALDH7A1, and MDH1 genes were 1.9, 1.1, and 5% in LUSC patients, respectively. 
Conclusion: A glycolysis-based three-gene signature could serve as a novel biomarker in predicting the prognosis of patients with LUSC and it also provides additional gene targets that can be used to cure LUSC patients. abstract_id: PUBMED:29843532 Increased GOLM1 Expression Independently Predicts Unfavorable Overall Survival and Recurrence-Free Survival in Lung Adenocarcinoma. Golgi membrane protein 1 (GOLM1) is a transmembrane glycoprotein of the Golgi cisternae, which is implicated in carcinogenesis of multiple types of cancer. In this study, using data from the Gene Expression Omnibus and The Cancer Genome Atlas, we compared the expression of GOLM1 in lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) and studied its prognostic value in terms of overall survival (OS) and recurrence-free survival (RFS) in these 2 subtypes of non-small cell lung cancer (NSCLC). Results showed that GOLM1 was significantly upregulated in both LUAD and LUSC tissues compared to the normal controls. However, GOLM1 expression was higher in LUAD tissues than in LUSC tissues. More importantly, using over 10 years' survival data from 502 patients with LUAD and 494 patients with LUSC, we found that high GOLM1 expression was associated with unfavorable OS and RFS in patients with LUAD, but not in patients with LUSC. The following univariate and multivariate analyses confirmed that increased GOLM1 expression was an independent prognostic indicator of poor OS (hazard ratio [HR]: 1.30, 95% confidence interval [CI]: 1.11-1.54, P = .002) and RFS (HR: 1.37, 95% CI: 1.14-1.64, P = .001) in patients with LUAD. Of 511 cases with LUAD, 248 (48.5%) had heterozygous loss (-1), while 28 (5.5%) of 511 cases with LUAD had low-level copy gain (+1). In addition, we also found that the methylation status of 1 CpG site (chr9: 88,694,942-88,694,944) showed a weak negative correlation with GOLM1 expression (Pearson r = -0.25). 
Based on these findings, we infer that GOLM1 might serve as a valuable prognostic biomarker in LUAD, but not in LUSC. In addition, DNA copy number alterations and methylation might be 2 important mechanisms of dysregulated GOLM1 in LUAD. Answer: Yes, gene expression profiling can predict survival for patients with squamous cell carcinoma of the lung. Several studies have identified gene signatures that are associated with the prognosis of lung squamous cell carcinoma (LUSC). For instance, one study identified a 71-gene signature capable of predicting the likelihood of tumor recurrence and a 79-gene signature predictive for cancer-related death in lung squamous cell carcinoma. These signatures were combined to generate a 111-gene signature which achieved an overall predictive accuracy for disease recurrence of 72% in an independent set of patients. This signature also predicted differences in survival and was superior to conventional prognostic markers such as TNM stage or N stage in predicting patient outcome (PUBMED:17082175). Another study developed a three-gene expression signature based on CSF1, EGFR, and CA IX, which was strongly associated with prognosis in early-stage SCC. This signature was positively validated in a separate cohort, suggesting its suitability for selecting SCC patients with an increased risk of death who might benefit from adjuvant treatment (PUBMED:18676750). Additionally, a study using hierarchical clustering and non-negative matrix factorization identified two subclasses of lung squamous cell carcinoma with different gene expression profiles and prognosis, indicating that gene expression profiling can be used to classify lung SCC into clinically relevant subgroups (PUBMED:16007138). Furthermore, a glycolysis-based three-gene signature (HKDC1, ALDH7A1, and MDH1) was established to divide patients into high-risk and low-risk subgroups, serving as an independent prognostic indicator in LUSC (PUBMED:34044809). 
These studies demonstrate that gene expression profiling has the potential to serve as a prognostic tool for lung squamous cell carcinoma, helping to predict survival and recurrence, and potentially guiding treatment decisions.
Instruction: Neutrophil elastase gene polymorphisms: modulators of response to therapy in childhood bronchiectasis? Abstracts: abstract_id: PUBMED:24816969 Neutrophil elastase gene polymorphisms: modulators of response to therapy in childhood bronchiectasis? Background: The aim of this study was to investigate polymorphisms in the promoter region of the neutrophil elastase (ELANE) gene as potential modulators of the therapeutic response in children with idiopathic bronchiectasis. Methods: The study included 48 children between 5 and 17 years old who were diagnosed with idiopathic bronchiectasis based on high-resolution computed tomography of the thorax. In all patients therapy included administration of antibiotics, anti-inflammatory drugs, expectorants, and postural drainage. Response to therapy was evaluated by the change in FeNO levels before and after administration of therapy. The ELANE promoter region polymorphisms were analyzed by PCR-direct DNA sequencing. Results: According to the predicted activity of ELANE genotypes, subjects were divided into two groups: low/intermediate activity (n = 18) and high activity (n = 30). Subjects in the group with high-activity genotype had higher initial FeNO levels and this difference was statistically significant (t = 2.906; p = 0.006). The difference between FeNO levels before and after therapy was also statistically significantly higher in children with high-activity genotype (t = 3.329; p = 0.002). Statistically significant correlation was observed between the change in FeNO levels and ELANE genotypes (r = 0.350; p = 0.015). Conclusion: Children with high-activity genotype had higher initial FeNO levels and showed better response to therapy than children with low/intermediate-activity genotypes. abstract_id: PUBMED:36161236 Augmentation therapy with human alpha-1-proteinase inhibitor reduces exacerbations in patient with bronchiectasis and alpha-1-antitrypsin deficiency. 
Alpha-1-antitrypsin deficiency (AATD) is a rare cause of noncystic fibrosis (CF) bronchiectasis. The benefits of augmentation therapy in patients with chronic obstructive pulmonary disease (COPD) and pulmonary emphysema are well established. The role of augmentation therapy in AATD bronchiectasis in patients without pulmonary emphysema is not clear. We present the case of a 53-year-old woman (never smoker) who presented with increased susceptibility to infection, productive cough, and intermittent hemoptysis. Pulmonary function testing revealed restrictive impairment [VC 2.7 L (83% of pred.), FEV1 2.3 L (86% of pred.)]. A CT scan of the chest showed marked basal bronchiectasis with mucoid impaction, surrounding consolidation, and no emphysema. Despite frequent use of inhalation therapy, a satisfactory control of symptoms and exacerbations was not achieved. In the course of extended diagnostics regarding the genesis of bronchiectasis, a reduced alpha-1-antitrypsin (AAT) serum level was detected, and a genetic test revealed a homozygous Pi*ZZ genotype. We started augmentation therapy with AAT (Respreeza®, CLS Behring) at the dose of 60 mg/kg per week; the therapy was well tolerated by the patient, and she reported clinical improvement with a reduction in exacerbation frequency. AAT is a serine protease inhibitor and plays a major role in regulating inflammatory activities, in particular by inhibiting neutrophil elastase (NE). The present case illustrates the positive effect of augmentation therapy, including in patients without airway obstruction. Among other causes, AATD should be considered as a possible cause of bronchiectasis, and the effects of augmentation therapy for this indication need to be prospectively studied. abstract_id: PUBMED:7784713 Preliminary evidence that augmentation therapy diminishes degradation of cross-linked elastin in alpha-1-antitrypsin-deficient humans.
It is hypothesized that emphysema develops in some severely alpha 1-antitrypsin (AAT)-deficient persons because endogenous elastases are not adequately controlled by AAT, and accelerated elastin degradation occurs. It is not known whether augmentation therapy with AAT diminishes degradation of lung elastin in severely deficient persons with lung disease. Two severely deficient, PiZ patients were studied, a 63-year-old never-smoking woman with bronchiectasis and a 41-year-old smoking man with emphysema. Urinary desmosine (DES) was determined before and after augmentation therapy with AAT, 260 mg/kg/month. Mean +/- SEM pretreatment urinary DES was elevated in both patients, 19.7 +/- 0.9 (n = 2) and 10.8 +/- 0.2 (n = 2) micrograms/g creatinine, respectively, compared to normal values of 7.5 +/- 0.3 (n = 22) micrograms/g creatinine. Following augmentation therapy, urinary DES values decreased 40 and 36%, respectively, to 11.9 +/- 0.3 (n = 8) and 6.9 +/- 0.4 (n = 7) micrograms/g creatinine (p < 0.05). We conclude that monthly AAT augmentation therapy decreased DES excretion in the urine of these PiZ patients. We speculate that since there was lung disease in both patients, a decrease in degradation of lung elastin is the most likely explanation for this observation. abstract_id: PUBMED:26335954 Inflammation and its genesis in cystic fibrosis. The host inflammatory response in cystic fibrosis (CF) lung disease has long been recognized as a central pathological feature and an important therapeutic target. Indeed, many believe that bronchiectasis results largely from the oxidative and proteolytic damage comprised within an exuberant airway inflammatory response that is dominated by neutrophils.
In this review, we address the longstanding argument of whether or not the inflammatory response is directly attributable to impairment of the cystic fibrosis transmembrane conductance regulator or only secondary to airway obstruction and chronic bacterial infection and challenge the importance of this distinction in the context of therapy. We also review the centrality of neutrophils in CF lung pathophysiology and highlight more recent data that suggest the importance of other cell types and signaling beyond NF-κB activation. We discuss how protease and redox imbalance are critical factors in CF airway inflammation and end by reviewing some of the more promising therapeutic approaches now under development. abstract_id: PUBMED:25814049 Inflammation in cystic fibrosis lung disease: Pathogenesis and therapy. Lung disease is the major cause of morbidity and mortality in patients with cystic fibrosis (CF). Although CF lung disease is primarily an infectious disorder, the associated inflammation is both intense and ineffective at clearing pathogens. Persistent high-intensity inflammation leads to permanent structural damage of the CF airways and impaired lung function that eventually results in respiratory failure and death. Several defective inflammatory responses have been linked to cystic fibrosis transmembrane conductance regulator (CFTR) deficiency including innate and acquired immunity dysregulation, cell membrane lipid abnormalities, various transcription factor signaling defects, as well as altered kinase and toll-like receptor responses. The inflammation of the CF lung is dominated by neutrophils that release oxidants and proteases, particularly elastase. Neutrophil elastase in the CF airway secretions precedes the appearance of bronchiectasis, and correlates with lung function deterioration and respiratory exacerbations. 
Anti-inflammatory therapies are therefore of particular interest for CF lung disease but must be carefully studied to avoid suppressing critical elements of the inflammatory response and thus worsening infection. This review examines the role of inflammation in the pathogenesis of CF lung disease, summarizes the results of past clinical trials and explores promising new anti-inflammatory options. abstract_id: PUBMED:10445614 Clinical and immunoregulatory effects of roxithromycin therapy for chronic respiratory tract infection. The clinical and immunoregulatory effects of long-term macrolide antibiotic therapy for patients with chronic lower respiratory tract infections (CLRTI) were investigated. Clinical parameters and neutrophil chemotactic mediators in the epithelial lining fluid (ELF) of CLRTI patients (n = 10) were examined before and after 3 months oral administration of roxithromycin (RXM). The in vitro effects of RXM were also examined on the release of these mediators from alveolar macrophages (AM) and neutrophils. Arterial oxygen tension (p<0.05), vital capacity (VC) (p<0.001), %VC (p<0.05) and forced expiratory volume in one second (p<0.01) were improved after RXM treatment, but airway bacteria were not eradicated. Among the mediators, the levels of interleukin (IL)-8, neutrophil elastase (NE) and leukotriene B4 (LTB4) were higher in ELF than in plasma of CLRTI patients and they decreased after RXM treatment (n = 7, p<0.05 for each). RXM concentrations were significantly increased in the bronchoalveolar lavage cells of the treated patients. In in vitro experiments, RXM showed inhibitory effects on IL-8 release from AM and neutrophils.
In conclusion, interleukin-8, neutrophil elastase and leukotriene B4 contribute to the neutrophilic inflammation in the airways of chronic lower respiratory tract infection patients and the clinical effects of roxithromycin may, in part, be attributable to the suppression of excess release of the chemotactic mediators from inflammatory cells. abstract_id: PUBMED:30281324 Elastase Exocytosis by Airway Neutrophils Is Associated with Early Lung Damage in Children with Cystic Fibrosis. Rationale: Neutrophils are recruited to the airways of individuals with cystic fibrosis (CF). In adolescents and adults with CF, airway neutrophils actively exocytose the primary granule protease elastase (NE), whose extracellular activity correlates with lung damage. During childhood, free extracellular NE activity is measurable only in a subset of patients, and the exocytic function of airway neutrophils is unknown. Objectives: To measure NE exocytosis by airway neutrophils in relation to free extracellular NE activity and lung damage in children with CF. Methods: We measured lung damage using chest computed tomography coupled with the Perth-Rotterdam Annotated Grid Morphometric Analysis for Cystic Fibrosis scoring system. Concomitantly, we phenotyped blood and BAL fluid leukocytes by flow and image cytometry, and measured free extracellular NE activity using spectrophotometric and Förster resonance energy transfer assays. Children with airway inflammation linked to aerodigestive disorder were enrolled as control subjects. Measurements And Main Results: Children with CF but not disease control children harbored BAL fluid neutrophils with high exocytosis of primary granules, before the detection of bronchiectasis. This measure of NE exocytosis correlated with lung damage (R = 0.55; P = 0.0008), whereas the molecular measure of free extracellular NE activity did not. 
This discrepancy may be caused by the inhibition of extracellular NE by BAL fluid antiproteases and its binding to leukocytes. Conclusions: NE exocytosis by airway neutrophils occurs in all children with CF, and its cellular measure correlates with early lung damage. These findings implicate live airway neutrophils in early CF pathogenesis, which should instruct biomarker development and antiinflammatory therapy in children with CF. abstract_id: PUBMED:21791628 AZD9668: pharmacological characterization of a novel oral inhibitor of neutrophil elastase. N-{[5-(methanesulfonyl)pyridin-2-yl]methyl}-6-methyl-5-(1-methyl-1H-pyrazol-5-yl)-2-oxo-1-[3-(trifluoromethyl)phenyl]-1,2-dihydropyridine-3-carboxamide (AZD9668) is a novel, oral inhibitor of neutrophil elastase (NE), an enzyme implicated in the signs, symptoms, and disease progression in NE-driven respiratory diseases such as bronchiectasis and chronic obstructive pulmonary disease via its role in the inflammatory process, mucus overproduction, and lung tissue damage. In vitro and in vivo experiments were done to evaluate the binding kinetics, potency, and selectivity of AZD9668, its effects in whole-blood and cell-based assays, and its efficacy in models of lung inflammation and damage. In contrast to earlier NE inhibitors, the interaction between AZD9668 and NE was rapidly reversible. AZD9668 was also highly selective for NE over other neutrophil-derived serine proteases. In cell-based assays, AZD9668 inhibited plasma NE activity in zymosan-stimulated whole blood. In isolated human polymorphonuclear cells, AZD9668 inhibited NE activity on the surface of stimulated cells and in the supernatant of primed, stimulated cells. AZD9668 showed good crossover potency to NE from other species. Oral administration of AZD9668 to mice or rats prevented human NE-induced lung injury, measured by lung hemorrhage, and an increase in matrix protein degradation products in bronchoalveolar lavage (BAL) fluid. 
In an acute smoke model, AZD9668 reduced the inflammatory response to cigarette smoke as indicated by a reduction in BAL neutrophils and interleukin-1β. Finally, AZD9668 prevented airspace enlargement and small airway wall remodeling in guinea pigs in response to chronic tobacco smoke exposure whether dosed therapeutically or prophylactically. In summary, AZD9668 has the potential to reduce lung inflammation and the associated structural and functional changes in human diseases. abstract_id: PUBMED:25135408 Coronary artery bypass grafting in a patient with chronic bird fancier's lung. Acute exacerbation is a devastating and fatal event of hypersensitivity pneumonitis. We report an 80-year-old man with chronic bird fancier's lung which acutely exacerbated after off-pump coronary artery bypass grafting. Our perioperative management included avoidance of inhalation of high concentration oxygen, administration of neutrophil elastase inhibitor, and long-term administration of antibiotics. Nevertheless, the patient experienced severe respiratory failure after influenza infection. High-resolution computed tomography (CT) showed new ground-glass opacities overlying fibrosis and traction bronchiectasis. We diagnosed acute exacerbation of hypersensitivity pneumonitis. Immediate steroid pulse therapy with empirical antibiotic administration was effective, and he recovered from critical respiratory failure. High-resolution CT was very helpful in the early detection of acute exacerbation of pneumonitis. abstract_id: PUBMED:25348906 Expression of soluble triggering receptor expressed on myeloid cells-1 in childhood CF and non-CF bronchiectasis. Background: Soluble triggering receptor expressed on myeloid cells-1 (sTREM-1) is demonstrating promise as an inflammatory biomarker of acute infection in various pulmonary conditions, including community acquired pneumonia, ventilator associated pneumonia and non-tuberculous mycobacterial infection.
Introduction: The expression of sTREM-1 has been poorly studied in all forms of bronchiectasis, both in the context of cystic fibrosis (CF) and non-cystic fibrosis bronchiectasis. Method: Induced sputum samples were collected for sTREM-1 determination in children with HIV-associated bronchiectasis and CF-bronchiectasis. The presence or absence of an exacerbation was noted at study entry. Lung function parameters (FEV1, FVC, FEV1/FVC, FEF(25-75)) were measured using the Viasys SpiroPro Jaeger Spirometer (Hoechberg, Germany). Result: A total of twenty-six children with HIV-associated bronchiectasis and seventeen with CF were included. With respect to sTREM-1, the levels were readily detected in both groups, but were significantly higher in children with HIV-associated bronchiectasis (1244.0 pg/ml (iqr 194.5; 3755.3 pg/ml) and 204.9 pg/ml (iqr 66.9; 653.6 pg/ml), P = 0.003). There was a positive correlation between sTREM-1 and IL-8 as well as sputum neutrophil elastase in the HIV-bronchiectasis group (r = 0.715 and r = 0.630, respectively), both P < 0.005. sTREM-1 was not further increased in subjects presenting with an acute pulmonary exacerbation in the HIV-associated bronchiectasis and in CF participants (P = 0.971 and P = 0.481), respectively. In the CF group, sTREM-1 strongly correlated with FVC% predicted and FEV1 % predicted (r = 0.950 and r = 0.954), both P < 0.005. Conclusion: The pulmonary innate immune functions are over-active in HIV-associated bronchiectasis, with readily detected sTREM-1 values, which were higher than those in CF. sTREM-1 does not correlate with markers of HIV-disease activity but does correlate with markers of neutrophilic inflammation. In CF sTREM-1 has a negative correlation with pulmonary function parameters.
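The r values quoted in the sTREM-1 abstract above (e.g. sTREM-1 vs. IL-8, r = 0.715) are Pearson correlation coefficients. A minimal stdlib sketch of that computation follows; the paired marker values are invented for illustration and are not study data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation: covariance of x and y divided by the
    product of their standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired sputum measurements (not study data)
strem1 = [120.0, 340.5, 88.2, 410.0, 205.3, 95.7]
il8 = [15.2, 48.9, 12.1, 60.3, 30.8, 14.4]
print(round(pearson_r(strem1, il8), 3))
```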
Answer: The study described in abstract PUBMED:24816969 investigated the role of polymorphisms in the promoter region of the neutrophil elastase (ELANE) gene as potential modulators of therapeutic response in children with idiopathic bronchiectasis. The study included 48 children who were diagnosed with idiopathic bronchiectasis and received therapy that included antibiotics, anti-inflammatory drugs, expectorants, and postural drainage. The response to therapy was evaluated by the change in FeNO levels before and after administration of therapy. The results showed that children with high-activity ELANE genotypes had higher initial FeNO levels and a significantly better response to therapy compared to children with low/intermediate-activity genotypes. This suggests that ELANE gene polymorphisms may indeed modulate the response to therapy in childhood bronchiectasis, with high-activity genotypes being associated with a better therapeutic response.
Instruction: Mayo risk score for primary biliary cirrhosis: a useful tool for the prediction of course after liver transplantation? Abstracts: abstract_id: PUBMED:18806733 Mayo risk score for primary biliary cirrhosis: a useful tool for the prediction of course after liver transplantation? Background: Survival after orthotopic liver transplantation (OLT) for primary biliary cirrhosis (PBC) is excellent. In order to define the optimal time point for OLT, the Mayo risk score (MRS) was developed and a score of 7.8 was identified for transplantation. However, in reality most recipients are in a bad condition with an MRS above 7.8. So far it is still unknown if a higher score is associated with more complications after OLT perioperatively and in a long-term follow-up. Therefore, this study was designed to investigate the association of the MRS with postoperative survival and complications. Material/methods: Between 1989 and 2006, 115 patients were transplanted for histologically proven PBC at the Charité Campus Virchow Clinic. In 98 of these patients, MRS data was available and retrospectively analyzed. Median age of 87 women and 11 men was 54 years (25 to 66 years). Results: The median follow-up after liver transplantation was 109 months (0.5-205 months). Actuarial patient survival after 5, 10 and 15 years was 90%, 88%, and 83%. Calculated survival by MRS without transplantation after 1, 5 and 7 years was 20%, 2% and 1% for these patients. Twelve patients (12%) died and histological recurrence of PBC was detected in 14 patients (14%). Seven patients underwent retransplantation (7%) and 58 patients developed an acute rejection episode (59%). Mean MRS across all patients was 9.54 +/- 1.35 and did not differ between patients with lethal course, retransplantations, PBC recurrence, rejection episodes and duration of hospital stay.
Classification of all patients into a low (<8.5), middle (8.5-10) and high MRS score (>10) did not show a significant difference in long-term survival. Univariate analysis for identifying the level of MRS as risk for death, PBC-recurrence, retransplantation, acute rejection episodes and hospital stay only showed a significantly increased risk for acute rejection episodes (1 episode: p = 0.04; 2 episodes: p = 0.01) for patients with an MRS above 8.5. Conclusions: The Mayo risk score is an approved mathematical model predicting survival in non-transplanted patients suffering from PBC. However, the score did not predict the course of our liver transplanted patients in a long-term follow-up. We could not demonstrate a reduced patient survival at a median MRS of 9.4 and about 10.0. Therefore, it is, from our point of view, questionable if the optimal time point for OLT is still 7.8. abstract_id: PUBMED:9222682 Primary biliary cirrhosis: Dutch application of the Mayo Model before and after orthotopic liver transplantation. Background/aims: A retrospective study of primary biliary cirrhosis (PBC) was performed to study the Original Mayo Model for predicting survival by a Dutch data-set of patients, presentation of disease progression; assessment of liver transplantation, prediction of post-transplantation survival; and the addition of two laboratory variables to the Original Mayo Model. Materials And Methods: Survival of 83 patients, 37 of whom underwent transplantation, was studied. Mean follow-up was 6.0 +/- 0.45 SEM years. Risk score at diagnosis, platelet count, and serum sodium were analyzed in a Cox model. Results: The Original Mayo Model estimated survival for low-, medium-, and high-risk groups accurately and it also presented disease progression. Baseline Mayo risk score in a Cox model had a regression coefficient of 1.01, indicating an excellent predictor (p < 0.0001).
Platelet count was a predictor of survival (p < 0.002), whereas serum sodium was not (p = 0.67). A new model combining the Original Mayo risk score and platelet count predicted survival in high-risk patients somewhat better than the Original Mayo Model. With both models, liver transplantation had a significant beneficial effect on survival (p < 0.001). The scores revealed no significant influence (p = 0.47) on overall post-transplantation survival. Conclusions: The Original Mayo Model remains the model of choice for prognostication in patients with PBC over 3-8 years and is a useful tool in the assessment of liver transplantation, but it is not an indicator of post-transplantation survival. Platelet count was shown to have additional prognostic value. A new model combining platelet count and the Original Mayo risk score predicted survival in high-risk groups slightly better than the Original Mayo Model. abstract_id: PUBMED:11084062 Confirmation of the role of the Mayo Risk Score as a predictor of resource utilization after orthotopic liver transplantation for primary biliary cirrhosis. Resource utilization is an important consideration when patients are selected for orthotopic liver transplantation (OLT). The Mayo Risk Score has been proposed to help predict the optimum time for OLT. We assessed the relation between the Mayo risk score, the Child-Pugh score, and resource utilization and outcome after OLT for primary biliary cirrhosis. The mean Mayo risk score was greater in patients who died than in the survivors (8.6 +/- 1.4 v 7.1 +/- 1.8; P < .05). There was a positive correlation between the Mayo risk score and the 4 resource variables studied (intraoperative blood requirements, time ventilated, and duration of intensive care unit and hospital stays). Patients with a Mayo risk score greater than 7.8 used almost twice the resources of patients with a risk score less than 7.8.
A positive correlation also existed between Child-Pugh score and duration of hospital stay. The mean Child-Pugh score in patients who died was greater than that in survivors (10.7 +/- 2.0 v 8.5 +/- 2.8, P = .03). This study confirms that the Mayo Risk Score is an important predictor of resource utilization and outcome after OLT. abstract_id: PUBMED:2664365 Application of the Mayo primary biliary cirrhosis survival model to Mayo liver transplant patients. Liver transplantation is considered lifesaving for selected patients with end-stage primary biliary cirrhosis (PBC). A mathematical model to predict survival in the patient with PBC who has not undergone transplantation would be valuable for improving selection of patients for and timing of transplantation and for providing control information for assessment of the efficacy of transplantation. The Cox regression method and data from 312 Mayo Clinic patients with PBC were used to develop a model based on age, total serum bilirubin, serum albumin, prothrombin time, and severity of edema. When cross-validated on an independent set of 106 Mayo patients, the model accurately predicted their survival. It was similar to two other published survival models in terms of risk measurement but had the advantage of not necessitating liver biopsy. The model was used to assess the efficacy of liver transplantation by comparing the Kaplan-Meier survival of 32 Mayo patients after transplantation with the average model prediction of survival without transplantation. Beyond 3 months after transplantation, Kaplan-Meier survival probabilities were significantly greater than control survival predicted by the model (P < 0.001). Examples of using the model for aiding in selection of patients for and timing of transplantation are provided. abstract_id: PUBMED:10915173 Adaptation of the Mayo primary biliary cirrhosis natural history model for application in liver transplant candidates.
The Mayo natural history model has been used widely as a tool to estimate prognosis in patients with primary biliary cirrhosis (PBC), particularly liver transplant candidates. We present an abbreviated model in which a tabular method is used to approximate the risk score, which may be incorporated in the minimal listing criteria for liver transplant candidates. Data used in the development and validation of the original Mayo model were derived from 418 patients with well-characterized PBC. To construct an abbreviated risk score in a format similar to that of the Child-Turcotte-Pugh score, 1 to 3 cut-off criteria were determined for each variable, namely age (0 points for <38, 1 for 38 to 62, and 2 for ≥63 years), bilirubin (0 points for <1, 1 for 1 to 1.7, 2 for 1.7 to 6.4, and 3 for >6.4 mg/dL), albumin (0 points for >4.1, 1 for 2.8 to 4.1, and 2 for <2.8 g/dL), prothrombin time (1 point for normal and 2 for prolonged) and edema (0 points for absent and 1 for present). The intervals between these criteria were chosen in a way to enable a meaningful classification of patients according to their risk for death. This score is highly correlated with the original risk score (r = 0.93; P < .01). The Kaplan-Meier estimate at 1 year was 90.6% in patients with a score of 6. The abbreviated risk score is a convenient method to quickly estimate the risk score in patients with PBC. An abbreviated score of 6 may be consistent with the current minimal listing criteria in liver transplant candidates.
The model was applied to a data base of patients from the New England Medical Center Hospitals (n = 141) and the Scott and White Clinic (n = 35) and found to predict their survival accurately. It was also shown to be accurate for Mayo primary biliary cirrhosis patients with very advanced disease (n = 30), those with less than a 33% chance of surviving 12 months. The analyses confirmed that the addition of histologic stage did not significantly improve the predictive power of the model (p > 0.10). We suggest that the Mayo model is a practical tool for clinical management and decision making. abstract_id: PUBMED:1874495 Accurate prediction of death by serial determination of galactose elimination capacity in primary biliary cirrhosis: a comparison with the Mayo model. We retrospectively analyzed the predictive accuracy of serial determinations of galactose elimination capacity in 61 patients with primary biliary cirrhosis. Death was predicted from the time that the regression line describing the decline in galactose elimination capacity vs. time intersected a value of 4 mg·min⁻¹·kg⁻¹. Thirty-one patients exhibited decreasing galactose elimination capacity; in 11 patients it remained stable and in 19 patients only one value was available. Among those patients with decreasing galactose elimination capacity, 10 died and three underwent liver transplantation; prediction of death was accurate to 7 +/- 19 months. This criterion incorrectly predicted death in two patients with portal-vein thrombosis; otherwise, it did better than or as well as the Mayo Clinic score. The latter was also tested on our patients and was found to adequately describe risk in yet another independent population of patients with primary biliary cirrhosis. Cox regression analysis selected only bilirubin and galactose elimination capacity, however, as independent predictors of death.
We submit that serial determination of galactose elimination capacity in patients with primary biliary cirrhosis may be a useful adjunct to optimize the timing of liver transplantation and to evaluate new pharmacological treatment modalities of this disease. abstract_id: PUBMED:8020881 Primary biliary cirrhosis: prediction of short-term survival based on repeated patient visits. The progression of primary biliary cirrhosis was studied in 312 patients who were seen at the Mayo Clinic between January 1974 and May 1984. Follow-up was extended to April 30, 1988, by which time 140 of the patients had died and 29 had undergone orthotopic liver transplantation. These patients generated 1,945 patient visits that enabled us to study the change in the prognostic variables of primary biliary cirrhosis (age, bilirubin value, albumin value, prothrombin time and edema) from the time of referral. Also, using this database and the Cox proportional-hazards regression model, we developed an updated model for primary biliary cirrhosis that can be used to predict short-term survival at any time in the course of the disease. This model uses the values of the prognostic variables measured at the latest patient visit. Comparison of predicted survival from the updated model and the natural history model of primary biliary cirrhosis showed that the updated model was superior to the original model for predicting short-term survival. This finding applied to both the Mayo Clinic patients and an independent set of 83 Dutch patients. The Mayo updated model is recommended for improving the accuracy of predictions of survival during the 2 years after a patient visit. abstract_id: PUBMED:33941746 A Comparison of Prognostic Scores (Mayo, UK-PBC, and GLOBE) in Primary Biliary Cholangitis. Introduction: Comparative data on scores that predict outcome in primary biliary cholangitis (PBC) are scarce.
We aimed to assess and compare the prognostic value of the Mayo Risk Score (MRS, 1989 and 1994), the UK-PBC score, and the GLOBE score in a large international cohort of patients with PBC. Methods: Ursodeoxycholic acid-treated patients from 7 centers participating in the GLOBAL PBC Study Group were included. The discriminatory performance of the scores was assessed with concordance statistics at yearly intervals up to 5 years. The Model for End-stage Liver Disease was included for comparison. Prediction accuracy was assessed by comparing predicted survival and actual survival in Kaplan-Meier analyses. Results: A total of 1,100 ursodeoxycholic acid-treated patients with PBC were included, with a mean (SD) age of 53.6 (12.0) years, of whom 1,003 (91%) were female. During a median follow-up of 7.6 (interquartile range 4.1-11.7) years, 42 patients underwent liver transplantation, and 127 patients died. At 1 year, the concordance statistic was 0.68 (95% confidence interval [CI] 0.64-0.72) for the Model for End-stage Liver Disease, 0.74 (95% CI 0.67-0.80) for the UK-PBC score, 0.76 (95% CI 0.72-0.81) for the MRS (1989 and 1994), and 0.80 (95% CI 0.76-0.84) for the GLOBE score. The GLOBE score showed superior discriminatory performance, but the differences were not statistically significant. For all scores, discriminatory performance increased in those with bilirubin >0.6 × ULN and advanced fibrosis estimated with Fibrosis-4. The predicted (median) minus observed 5-year transplant-free survival was +0.4% and +2.5% for the MRS (1989) and the GLOBE score, respectively. Discussion: All prognostic scores developed for PBC (GLOBE, UK-PBC, and MRS) demonstrated comparable discriminatory performance for liver transplantation or death as well as good prediction accuracy. abstract_id: PUBMED:9621867 Liver transplantation for primary biliary cirrhosis and primary sclerosing cholangitis: predicting outcomes with natural history models.
In patients with primary biliary cirrhosis (PBC) and primary sclerosing cholangitis (PSC), risk score models that reflect disease severity have been developed and can serve as an objective measurement to assess and evaluate the effect of the severity of liver disease on the outcome of liver transplantation. Thus, using the established Mayo risk scores for PBC and PSC, one not only can estimate survival for the individual patient but can measure disease activity as well. Indeed, several studies have suggested that the optimal timing of liver transplantation with use of the Mayo PBC model may be an important tool to improve survival, decrease morbidity, and decrease overall related costs. Likewise, studies in patients with PSC have yielded similar results. This review explores how prognostic mathematical survival models for PBC and PSC might be applied to individual patients in need of liver transplantation. The following question is addressed: How can the timing of liver transplantation be optimized to increase survival, decrease postoperative morbidity, and ultimately, decrease the overall resource utilization involved in this procedure? Answer: The Mayo risk score (MRS) for primary biliary cirrhosis (PBC) has been a subject of investigation in terms of its utility for predicting the course after liver transplantation (OLT). Several studies have explored the relationship between the MRS and outcomes post-OLT with varying conclusions. One study found that the MRS did not predict the course of liver-transplanted patients in a long-term follow-up. The mean MRS was 9.54±1.35 and did not differ significantly between patients with different outcomes such as lethal course, retransplantations, PBC recurrence, rejection episodes, and duration of hospital stay. 
This led to the conclusion that the MRS may not be a reliable predictor of postoperative survival and complications, and it was questioned whether the optimal time point for OLT is still an MRS of 7.8 (PUBMED:18806733). Another study confirmed the role of the MRS as a predictor of resource utilization after OLT for PBC. It found that patients with a higher MRS used significantly more resources than those with a lower score and that there was a positive correlation between the MRS and outcomes such as intraoperative blood requirements, time ventilated, and duration of intensive care unit and hospital stays (PUBMED:11084062). The Dutch application of the Original Mayo Model before and after OLT for PBC showed that the model accurately estimated survival for low-, medium-, and high-risk groups and presented disease progression. However, it was not an indicator of post-transplantation survival, although liver transplantation had a significant beneficial effect on survival (PUBMED:9222682). In summary, while the MRS is an approved mathematical model predicting survival in non-transplanted patients with PBC, its predictive value for post-OLT outcomes is less clear. Some studies suggest that it may not be a reliable tool for predicting the course after OLT, while others indicate its usefulness in predicting resource utilization post-OLT. Therefore, the utility of the MRS as a tool for predicting the course after liver transplantation for PBC remains a topic of debate and may require further investigation to clarify its role (PUBMED:18806733; PUBMED:11084062; PUBMED:9222682).
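As a side note on the abbreviated model of PUBMED:10915173 summarized above, its tabular point system lends itself to a direct translation into code. The sketch below is a minimal illustration, not a validated clinical tool: the function name `abbreviated_mayo_score` is hypothetical, and since the quoted cut-offs overlap at their boundaries (e.g. bilirubin "1 to 1.7" versus "1.7 to 6.4"), the inclusivity of each boundary is an assumption.

```python
def abbreviated_mayo_score(age, bilirubin, albumin, pt_prolonged, edema):
    """Approximate the abbreviated (tabular) Mayo risk score for PBC.

    Points follow the cut-offs quoted in the abstract; the function name
    and the inclusivity of each boundary are assumptions, not part of
    the published model.
    """
    score = 0
    # Age in years: 0 points for <38, 1 for 38-62, 2 for >=63
    if age < 38:
        score += 0
    elif age < 63:
        score += 1
    else:
        score += 2
    # Bilirubin in mg/dL: 0 for <1, 1 for 1-1.7, 2 for 1.7-6.4, 3 for >6.4
    if bilirubin < 1:
        score += 0
    elif bilirubin <= 1.7:
        score += 1
    elif bilirubin <= 6.4:
        score += 2
    else:
        score += 3
    # Albumin in g/dL: 0 for >4.1, 1 for 2.8-4.1, 2 for <2.8
    if albumin > 4.1:
        score += 0
    elif albumin >= 2.8:
        score += 1
    else:
        score += 2
    # Prothrombin time: 1 point if normal, 2 if prolonged
    score += 2 if pt_prolonged else 1
    # Edema: 0 points if absent, 1 if present
    score += 1 if edema else 0
    return score
```

For example, a 45-year-old with bilirubin 0.8 mg/dL, albumin 4.5 g/dL, a normal prothrombin time and no edema would score 2 points, well below the abbreviated listing threshold of 6 discussed above.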
Instruction: Do illness characteristics and familial risk differ between women with anorexia nervosa grouped on the basis of personality pathology? Abstracts: abstract_id: PUBMED:16336725 Do illness characteristics and familial risk differ between women with anorexia nervosa grouped on the basis of personality pathology? Background: In view of the potential utility of personality-based groupings in eating disorders (EDs), and the lack of studies using this approach in large samples of individuals with anorexia nervosa (AN), this study set out to examine (i) the nature of personality-based clusters of women with lifetime AN and (ii) if these clusters are associated with either clinical symptoms or aetiological variables. Method: The self-report Dimensional Assessment of Personality Pathology (DAPP) was completed by 153 women with a lifetime diagnosis of DSM-IV AN. A cluster analysis was used to identify personality-based subgroups. Clusters were then compared on clinical and aetiological variables. Results: Three personality-based clusters were identified, defined by broad, avoidant and compulsive types of personality pathology. Dimensions of low dissocial behaviour, high inhibition and high compulsivity were common to all clusters, while dimensions related to emotional dysregulation appeared more heterogeneous. Clinical symptoms were not related to personality profile with the exception of a trend towards more fasting behaviour in the broad group. The compulsive cluster with the narrowest range of extreme personality traits reported the highest familial risk of eating pathology. Conclusions: The three clusters identified in our AN sample were similar to those previously identified in broader eating disordered samples. Personality-based clusters did not correspond overall to clinical symptoms but aetiological differences supported their validity. 
Broader personality pathology indicative of emotional dysregulation, problems with identity and relationships, in addition to core traits, may increase vulnerability to AN in those with less familial risk. abstract_id: PUBMED:16901263 Personality pathology among individuals with a lifetime history of anorexia nervosa. Anorexia nervosa (AN) appears to be associated with certain personality features. The degree to which personality pathology persists after the acute phase of illness is unclear. This study aimed to investigate personality pathology in a large sample (n = 153) of women with histories of AN using the Dimensional Assessment of Personality Pathology (DAPP) and to evaluate the persistence of pathological personality features by comparing women with acute (n = 83) and remitted AN (n = 55). Women with AN demonstrated high levels of emotional dysregulation, social inhibition and compulsivity relative to age-matched healthy women (n = 124). Identity problems, intimacy problems and schizotypal features were highlighted as important. The same 14 pathological personality dimensions were significantly elevated in both ill and remitted participants, though remitted individuals' scores were reduced on 4 of the subscales. These results indicate that pathological personality features in AN are persistent traits, although some features may become exaggerated in the acute phase. abstract_id: PUBMED:1592839 The prevalence of personality disorders in 210 women with eating disorders. Background: The purpose of this study was to assess the prevalence, reliability, and predictive value of comorbid personality disorders in a large sample of 210 women seeking treatment for anorexia nervosa (N = 31), bulimia nervosa (N = 91), or mixed disorder (N = 88). Method: All subjects were interviewed using the Structured Interview for DSM-III Personality Disorders as part of a longitudinal outcome study of eating disorders currently underway at Massachusetts General Hospital. 
Results: Of the 210 subjects, 27% had at least one personality disorder; the most commonly observed was borderline personality disorder in 18 subjects (9%). The highest prevalence of personality disorders was found in the anorexia nervosa/bulimia nervosa group at 39%, followed by 22% in the anorexics and 21% in the bulimic sample. We found statistically significant differences regarding the distribution of personality disorders across eating disorder groups. The dramatic personality disorder cluster was differentially distributed across groups; this finding was accounted for by higher rates of borderline personality disorder in the bulimia nervosa and anorexia nervosa/bulimia nervosa groups than in the anorexia nervosa group. The anxious personality disorder cluster was differentially distributed across groups with higher rates in the anorexia nervosa and anorexia nervosa/bulimia nervosa samples. Those subjects with a comorbid personality disorder had a significantly slower recovery rate than those without a comorbid personality disorder. Conclusion: The prevalence of personality disorders is not high in treatment-seeking women with eating disorders compared with previously studied samples. The greatest frequency of comorbid personality disorders is in the anorexia nervosa/bulimia nervosa group; this subset also had longer duration of eating disorder illness and much greater comorbid Axis I psychopathology compared with the rest of the sample. Future studies should address whether personality disorders have predictive value in the long-term course and outcome of eating disorders. abstract_id: PUBMED:12323011 Personality profiles in patients with eating disorders. The present review focused on the personality profiles of patients with eating disorders. Studies using the Structured Clinical Interview for DSM-III-R Personality Disorder showed high rates of diagnostic co-occurrence between eating disorders and personality disorders. 
The most commonly observed were histrionic, obsessive-compulsive, avoidant, dependent and borderline personality disorders. Studies using Cloninger's personality theory suggested that high Harm Avoidance might be relevant to the pathology of anorexia nervosa, and high Novelty Seeking and Harm Avoidance to bulimia nervosa. Moreover, high Self-Directedness was suggested to be associated with favorable outcome in bulimia nervosa. The assessment of personality in a cross-sectional study, however, might be influenced by the various states of the illness. Therefore, a sophisticated longitudinal study will be required to advance this area of research. abstract_id: PUBMED:1291541 A psychoanalytic study of eating disorders: I. A developmental profile of 67 index cases. Much has been written in both the professional literature and lay press regarding the sociocultural basis for the current increase in eating disorders. The typical victims of these afflictions appear to be intelligent adolescent girls from middle and upper class families (Garfinkel and Garner, 1982). Although the general pressure for thinness, mastery, and achievement among women in our culture undoubtedly helps shape the course of illness, these social mores do not appear to be its root causes. The data presented here suggest strongly that the truly eating-disordered individual has a history of disturbances in early relationships, leading to a lack of security and pronounced difficulties in trusting others, and in simply being an authentic individual in the presence of others. As such, these individuals share many characteristics of others with severe personality disorders, such as inner emptiness, problems with identity, fear of abandonment, and difficulties in peer relationships (Clarkin et al., in press), but differ in the expression of these basic defects or in the solutions found to compensate pathologically for these core deficiencies.
Therefore, although contemporary cultural ideals and values may supply the shape of the disorder, these individuals would have fared badly in any social climate, although their presentation of illness may have differed (Bemporad et al., 1988). It is this profound underlying personality pathology that necessitates prolonged treatment for any hope of amelioration. Later reports will address this indication with a description of the results of therapy. abstract_id: PUBMED:26076785 MMPI for personality characteristics of patients with different diseases. In the field of psychosomatic medicine, the relationship between personality characteristics and diseases is considered an important issue. The aim of this article is to present group MMPI profiles obtained for patients with different chronic diseases and to discuss possible specific features of these different groups. We summarized results obtained by psychological testing of the following groups of patients: adult patients treated with chronic maintenance dialysis, patients with diabetic retinopathy, a generalized anxiety group, patients with panic attack syndrome, parents of children with rheumatoid arthritis, as well as adolescents with anorexia nervosa, cystic fibrosis, diabetes mellitus and leukemia. Two control groups comprised adults and adolescents, both without any health problems, selected randomly. The MMPI-201 was used as the psychometric test. The Statistic 10 package was used for statistical analysis. Our results show some typical personality characteristics of patients with chronic conditions. These findings could be helpful for clinicians in treatment planning and follow-up. In general, the MMPI helps us to obtain a global, factual picture from the self-assessment of the patient, explained in psycho-technical language. Group profiles could be used in clinical practice for planning treatment and estimating the prognosis of the illness.
abstract_id: PUBMED:33687655 Exploring associations between personality trait facets and emotional, psychological and social well-being in eating disorder patients. Purpose: Personality functioning is strongly linked to well-being in the general population. Yet, there is a lack of scientific knowledge about the pathways between personality trait facets and emotional, psychological and social well-being in ED patients. The general aim was to examine potential associations between maladaptive personality trait facets and the three main dimensions of well-being. Methods: Participants were 1187 female eating disorder patients who were referred for specialized treatment. Patients were diagnosed with anorexia nervosa (31.7%), bulimia nervosa (21.7%), binge eating disorder (11%) and other specified eating disorders (35.5%). The Personality Inventory for DSM-5 (PID-5) was used to measure 25 trait facets, and well-being was measured with the Mental Health Continuum Short Form (MHC-SF). Multiple hierarchical regression analyses were applied to examine potential associations between personality and well-being while controlling for background and illness characteristics. Results: Personality trait facets led to a statistically significant increase in the explained variance in emotional (38%), psychological (39%), and social well-being (26%) beyond the background and illness characteristics. The personality trait facets anhedonia and depression were strongly associated with all three well-being dimensions. Conclusion: Personality traits may play an essential role in the experience of well-being among patients with EDs. To promote overall mental health, it may be critical for clinicians to address relevant personality trait facets, such as anhedonia and depression, associated with well-being in treatment. Level of Evidence: Level V, cross-sectional descriptive study. abstract_id: PUBMED:8293022 Comorbidity of DSM-III-R eating disorders and personality disorders.
The purpose of this study was to assess the relationship of eating disorders to personality disorders. Two hundred subjects were independently administered the Structured Clinical Interview for DSM-III-R (SCID) and the Personality Disorder Examination (PDE) face-to-face by two experienced clinicians. One hundred forty-six also completed the Personality Diagnostic Questionnaire-Revised (PDQ-R). Rates of personality disorder among patients with and without eating disorders were determined by each of the three instruments. Comorbidity between bulimia nervosa and anorexia nervosa and a conservative estimate of individual Axis II disorders was examined. Eating disorders with and without personality disorders were compared on age at onset and two measures of illness severity. Results indicate that the association, in general, between personality disorders and eating disorders varies by diagnostic method. Bulimia nervosa, however, is associated with borderline personality disorder and anorexia nervosa with avoidant personality disorder. Eating disorders with personality disorders are characterized by chronicity and low levels of functioning compared with eating disorders without personality disorders. abstract_id: PUBMED:26261037 Eating disorders in older women. Eating disorders (EDs) are disturbances that seriously endanger the physical health and often the lives of sufferers and affect their psychosocial functioning. EDs are usually thought of as problems afflicting teenagers. However, the incidence in older women has increased in recent decades. These cases may represent either late-onset disease or, more likely, a continuation of a lifelong disorder. The DSM-5 classification differentiates 4 categories of eating disorder: anorexia nervosa, bulimia nervosa, binge-eating disorders and other specified feeding and eating disorders. The weight loss and malnutrition resulting from EDs have widespread negative consequences for physical, mental and social health. 
The main risk factors for developing long-term consequences are the degree of weight loss and the chronicity of the illness. Most of the cardiac, neurological, pulmonary, gastric, haematological and dermatological complications of EDs are reversible with weight restoration. EDs are serious illnesses and they should never be neglected or treated merely as a manifestation of the fashion for dieting or a woman's wish to achieve an imposed standard of the feminine figure. Additionally, EDs are associated with a high risk of morbidity and mortality. The literature concerning EDs in older, postmenopausal women is very limited. The main aim of this paper is to ascertain the epidemiology and prognosis of EDs in older women, and to review their diagnosis and management. abstract_id: PUBMED:16809976 Psychopathological features of patients with prolonged anorexia nervosa as assessed by the Minnesota Multiphasic Personality Inventory. Objective: The duration of illness is quite long in some anorexia nervosa (AN) patients. In the present study, we investigated the psychopathological features of patients with prolonged AN as assessed by the Minnesota Multiphasic Personality Inventory-1 (MMPI-1). Methods: Fifty-five AN patients completed the MMPI-1 on admission to Kyushu University Hospital from 1999 to 2002. The patients were divided into three groups on the basis of their illness duration: a short-term group, less than 5 years of illness duration (n=31); a middle-term group, from 5 to 10 years (n=14); and a prolonged group, 10 years or more (n=10). Results: The prolonged group scored significantly higher on the MMPI-1 scales of hysteria (Hy), low back pain (Lb) and family conflict than the short-term group. Discussion: AN patients whose illness duration was prolonged characteristically had more hysteria with family conflict. This should be considered in their treatment.
Answer: Yes, illness characteristics and familial risk do differ between women with anorexia nervosa (AN) when grouped based on personality pathology. A study that used the Dimensional Assessment of Personality Pathology (DAPP) to identify personality-based subgroups among 153 women with a lifetime diagnosis of AN found three personality-based clusters: broad, avoidant, and compulsive types of personality pathology. While dimensions of low dissocial behavior, high inhibition, and high compulsivity were common across all clusters, dimensions related to emotional dysregulation were more heterogeneous. Clinical symptoms did not correspond overall to personality profile, except for a trend towards more fasting behavior in the broad group. Notably, the compulsive cluster, which had the narrowest range of extreme personality traits, reported the highest familial risk of eating pathology. This suggests that broader personality pathology indicative of emotional dysregulation, problems with identity and relationships, in addition to core traits, may increase vulnerability to AN in those with less familial risk (PUBMED:16336725). Additionally, other studies have found that personality pathology is prevalent among individuals with AN and that certain personality features persist after the acute phase of the illness. For example, women with AN demonstrated high levels of emotional dysregulation, social inhibition, and compulsivity. Pathological personality features in AN are persistent traits, although some features may become exaggerated in the acute phase (PUBMED:16901263). Personality disorders were also found to be comorbid with AN, with the highest prevalence in the anorexia nervosa/bulimia nervosa group, and those with a comorbid personality disorder had a significantly slower recovery rate (PUBMED:1592839). 
In summary, women with AN exhibit different illness characteristics and familial risks when grouped according to personality pathology, and these differences have implications for understanding the vulnerability and treatment of AN.
Instruction: Are parents' knowledge and practice regarding immunization related to pediatrics' immunization compliance? Abstracts: abstract_id: PUBMED:24460878 Are parents' knowledge and practice regarding immunization related to pediatrics' immunization compliance? a mixed method study. Background: Immunization rate is one of the best public health outcome and service indicators of the last 100 years. Parental decisions regarding immunization are very important to improve immunization rate. The aim of this study was to evaluate the correlation between parental knowledge-practices (KP) and children's immunization completeness. Methods: A mixed method has been utilized in this study: a retrospective cohort study was used to evaluate immunization completeness; a prospective cross-sectional study was used to evaluate immunization KP of parents. 528 children born between 1 January 2003 and 31 June 2008 were randomly selected from five public health clinics in Mosul, Iraq. Immunization history of each child was collected retrospectively from their immunization record/card. Results: About half of the studied children (n = 286, 56.3%) were immunized with all vaccination doses; these children were considered as having had complete immunization. 66.1% of the parents were found to have adequate KP scores. A significant association of immunization completeness with total KP groups (p < 0.05) was found. Conclusions: Future efforts are required to improve immunization rate and parents' knowledge and practice. The study results reinforce recommendations for the periodic assessment of immunization rate and the use of educational programmes to improve the immunization rate, knowledge and practice. abstract_id: PUBMED:24485194 Factors underlying inadequate parents' awareness regarding pediatrics immunization: findings of cross-sectional study in Mosul- Iraq. Background: For the last 100 years, the immunization rate has been one of the best public health outcome and service indicators.
However, the immunization system is still imperfect; there are many countries that still have unvaccinated children. Parental decisions regarding immunization are very important to improve immunization rate. The aim of this study is to evaluate the association of parental knowledge-practice (KP) regarding immunization with family and immunization providers' factors. Methods: This is a prospective cross-sectional study design. Immunization knowledge and practices among 528 Iraqi parents were evaluated through a validated questionnaire. Familial data and immunization providers' characteristics were collected from parents through interview. Results: More than half of the respondents (66.1%) had adequate knowledge-practice scores. Significant associations were noted for knowledge-practice groups with father's education level, mother's education level, mother's age at delivery, number of preschool children, parents' gender, family income, provider types, and birth place (p < 0.05). Conclusion: Immunization campaigns and awareness are required to improve parents' knowledge and practice regarding immunization. The study results reinforce recommendations for use of educational programmes to improve the immunization knowledge and practice. abstract_id: PUBMED:34007240 Knowledge, Attitude, and Practices of Parents About Immunization of Infants and Its Associated Factors in Wadla Woreda, North East Ethiopia, 2019. Introduction: Improving infant immunization coverage and timeliness is a key health policy objective in many developing countries such as Ethiopia. Despite this, full immunization coverage in Ethiopia remains low with the concurrent significant burden of vaccine-preventable diseases among infants. A previously published study acknowledged that parental knowledge, attitude, and practice towards infant immunization are vital issues to improve coverage and influence uptake.
Objective: To assess parents' knowledge, attitude, practice, and its associated factors regarding immunization of infants at Wadla Woreda, North East Ethiopia, 2019. Material And Methods: A community-based cross-sectional study was employed among 418 parents from March to April 2019 in Wadla Woreda, North East Ethiopia. A stratified sampling technique was used to approach the study subjects. An interviewer-administered structured questionnaire was used. The collected data were cleaned, coded, and entered in EPI-Info 7.2 and transferred to SPSS version 25.0 for analysis. Binary logistic regression analyses were performed to identify variables having a significant association with parental knowledge, attitude, and practice. Variables with a p-value of ≤0.05 and AOR with 95% CI were declared as having a statistically significant association during multivariable logistic regression analysis. Results: In this study, 65.1%, 57.3%, and 55.3% of the parents had good knowledge, a favorable attitude, and good practice towards infant immunization, respectively. Parents' educational status (AOR=5.330), urban residency (AOR=2.788), favorable attitude (AOR=4.308) and got immunization service two-three times (AOR=3.227) and four-five times (AOR=2.254) were statistically associated with knowledge of parents. Parents who attend primary school (AOR=0.451) and secondary school (AOR=0.320), parents who were mothers of the child (AOR=3.813), and respondents who had good knowledge about infant immunization (AOR=4.592) were significantly associated with a favorable parental attitude. Infant immunization practice was significantly associated with parental education who attend primary school (AOR=2.513), secondary school (AOR=2.546) and higher education (AOR=11.988), parents who had good knowledge of infant immunization (AOR=4.206), and short waiting time (AOR=3.881).
Conclusion And Recommendation: Parental KAP towards infant immunization was found to be lower than most of the study findings quoted in this study. Improving the knowledge, attitude, and practice of parents about immunization and vaccine-preventable diseases was recommended by providing health education and health promotion interventions. abstract_id: PUBMED:23879083 Iraqi parents' views of barriers to childhood immunization. Deficiencies in knowledge about immunization among parents often lead to poor uptake or errors in immunization dosage and timing. The aims of this study were to determine Iraqi parents' views of barriers to immunization and beliefs about ways to promote immunization. A questionnaire survey was carried out among 528 Iraqi parents with children who had incomplete immunization status. The main barriers to immunization agreed by the parents were lack of vaccine availability (51.5% of parents) and parents' lack of education (42.4%), while 88.4% of parents thought that lack of funding was not an important barrier. More than 60% of the parents suggested promoting childhood immunization via the media, and 77.5% thought that an increase in funding would not remove barriers to childhood immunization. Better vaccine availability in public health clinics and improving parents' literacy might enhance immunization uptake in Iraq. abstract_id: PUBMED:20635642 Immunization status of children admitted to a tertiary-care hospital of north India: reasons for partial immunization or non-immunization. Reasons for the low coverage of immunization vary from logistic ones to those dependent on human behaviour. The study was planned to find out: (a) the immunization status of children admitted to a paediatric ward of a tertiary-care hospital in Delhi, India and (b) reasons for partial immunization and non-immunization. Parents of 325 consecutively-admitted children aged 12-60 months were interviewed using a semi-structured questionnaire.
A child who had missed any of the vaccines given under the national immunization programme till one year of age was classified as partially immunized, while those who had not received any vaccine up to 12 months of age or received only pulse polio vaccine were classified as non-immunized. Reasons for partial/non-immunization were recorded using open-ended questions. Of the 325 children (148 males, 177 females), 58 (17.84%) were completely immunized, 156 (48%) were partially immunized, and 111 (34.15%) were non-immunized. Mothers were the primary respondents in 84% of the cases. The immunization card was available with 31.3% of the patients. All 214 partially- or completely-immunized children received BCG, 207 received OPV/DPT1, 182 received OPV/DPT2, 180 received OPV/DPT3, and 115 received measles vaccines. Most (96%) received pulse polio immunization, including 98 of the 111 non-immunized children. The immunization status varied significantly (p < 0.05) with sex, education of parents, urban/rural background, route and place of delivery. On logistic regression, place of delivery [odds ratio (OR): 2.3, 95% confidence interval (CI) 1.3-4.1], maternal education (OR=6.94, 95% CI 3.1-15.1), and religion (OR=1.75, 95% CI 1.2-3.1) were significant (p < 0.05). The most common reasons for partial or non-immunization were: inadequate knowledge about immunization or subsequent dose (n=140, 52.4%); belief that vaccine has side-effects (n=77, 28.8%); lack of faith in immunization (n=58, 21.7%); or belief that oral polio vaccine is the only vaccine required (n=56, 20.9%). Most (82.5%) children admitted to a tertiary-care hospital were partially immunized or non-immunized. The immunization status needs to be improved by education, increasing awareness, and counselling of parents and caregivers regarding immunizations and associated misconceptions as observed in the study.
abstract_id: PUBMED:22053765 Factors influencing parental decision making when parents choose to deviate from the standard pediatric immunization schedule. Purpose: The purpose of this article is to present the results of a study focusing on the basis for parental decisions to refrain from the standard pediatric immunization schedule. Design And Methods: The study was based upon open-ended qualitative items that were subjected to content analysis to identify the prominent themes cited by parents. Results: The results of the study demonstrate the presence of a variety of continuing misperceptions regarding the risks represented by common pediatric immunizations. Practice Implications: The study demonstrates the need for intensive efforts designed to apprise families of the risks and benefits associated with pediatric immunization. abstract_id: PUBMED:10475675 Knowledge, attitude and practice survey on immunization service delivery in Guangxi and Gansu, China. Background: a knowledge, attitude and practice (KAP) survey on immunization service delivery among providers and parents was conducted in Guangxi and Gansu Provinces in China. Purpose: provide baseline information for a health education program. Methods: KAP surveys were conducted in 12 randomly selected counties from high and low quality immunization service delivery areas in Guangxi and Gansu Provinces. Caretakers of 2520 children born in 1995 and 637 immunization providers were selected by probability proportional to size sampling. Results: the level of immunization knowledge among parents was positively associated with attitude and practices of immunization. Immunization coverage was 89.3% in the high and 63.8% in the low service areas. Low coverage was associated (p < 0.01) with the number of immunization service sessions per year, fee for immunization services and health insurance schemes for immunization services.
Conclusions: immunization coverage can be improved by ensuring sufficient immunization service sessions per year, reducing the cost of immunization services and increasing the participation in health insurance schemes for immunization services. abstract_id: PUBMED:35363326 Assessment of parent knowledge and perception towards the importance of child immunization in Sudair region, Saudi Arabia. Objective: According to WHO, global coverage of immunization was 86% in 2019, which dropped to 83% in 2020. The objective of this study is to assess parental knowledge and identify their perception towards the importance of child immunization. This study has also tried to determine the possible influencing factors for parental decision-making towards child immunization. Patients And Methods: A community-based cross-sectional study was conducted in 2020 in Sudair region of Saudi Arabia. The study was done among parents residing in Sudair region, where the sample was randomly selected based on the household. The sample of 436 was taken using the population proportion formula. Data were collected and analyzed using SPSS version 18 (SPSS Inc., Chicago, IL, USA). Both descriptive and inferential statistics were used to draw the results. Results: In this study, three-fourths (77.1%) of respondents were female and the remaining were male (23%). Most respondents reported that their own knowledge had some form of influence on their idea of immunization, followed by the Ministry of Health, family members, medical staff, social media and others. A significant association was seen between age, marital status, occupational status and parents' knowledge on the importance of immunization (thinking that all child immunization was important). Association was also seen between parents' perception that immunization causes serious side effects and the age of the respondents.
Conclusions: Specific programmes and BCC (behaviour change communication) materials are needed to enhance the knowledge and perception of individuals regarding the need for complete immunization and the facts about vaccine-preventable diseases. abstract_id: PUBMED:25346471 Immunization knowledge and practice among Malaysian parents: a questionnaire development and pilot-testing. Background: Parents are the main decision makers for their children's vaccinations. This fact makes parents' immunization knowledge and practices predictor factors for immunization uptake and timeliness. The aim of this pilot study was to develop a reliable and valid instrument in the Malaysian language to measure immunization knowledge and practice (KP) of Malaysian parents. Methods: A cross-sectional prospective pilot survey was conducted among 88 Malaysian parents who attended public health facilities that provide vaccinations. Translated immunization KP questionnaires (Bahasa Melayu version) were used. Descriptive statistics were applied, face and content validity were assessed, and internal consistency, test-retest reliability, and construct validity were determined. Results: The mean ± standard deviation (SD) of the knowledge scores was 7.36 ± 2.29 and for practice scores was 7.13 ± 2.20. Good internal consistency was found for knowledge and practice items (Cronbach's alpha = 0.757 and 0.743 respectively); the test-retest reliability value was 0.740 (p = 0.014). A panel of three specialist pharmacists who are experts in this field judged the face and content validity of the final questionnaire. Parents with up-to-date immunized children had significantly better knowledge and practice scores than parents who did not (p < 0.001 and p = 0.001 respectively), suggesting good construct validity. A significant difference was found in knowledge and practice scores among parents' age (p = 0.006 and p = 0.029 respectively) and place of living (p = 0.037 and p = 0.043).
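The internal-consistency statistic the pilot study reports (Cronbach's alpha = 0.757 for the knowledge items) can be computed directly from per-item scores. A minimal sketch of the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of totals); the item scores below are hypothetical, not data from the study.

```python
# Illustrative sketch (hypothetical data): Cronbach's alpha, the
# internal-consistency statistic used in the questionnaire pilot.

def cronbach_alpha(items):
    """items: list of per-item score lists, one entry per respondent each.
    Returns alpha = k/(k-1) * (1 - sum(item variances)/variance of totals)."""
    k = len(items)
    n = len(items[0])

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(variance(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Five perfectly consistent items give alpha = 1.0
print(cronbach_alpha([[1, 2, 3, 4]] * 5))  # -> 1.0
```

A real analysis would use respondents' raw item responses; values near 0.7 or above, as in the pilot, are conventionally read as acceptable internal consistency.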
The parents' knowledge level was positively associated with their practice toward immunization (Spearman's rank correlation coefficient 0.310, p = 0.003). Conclusions: The pilot study concluded that the Bahasa Melayu version of the immunization KP questionnaire has good reliability and validity for measuring the knowledge and practices of Malaysian parents and therefore this version can be used in future research. abstract_id: PUBMED:36494999 Factors affecting childhood immunization: Thematic analysis of parents and healthcare workers' perceptions. Immunization against common childhood diseases is an important strategy as it is critical for reducing the global child morbidity and mortality. This review explores the perceptions of parents and HCWs toward childhood immunization. The PRISMA guideline was used to search and include the studies. Relevant electronic databases were systemically searched for the years ranging from 2000 to 2021 to identify studies reported in English. Themes were then identified using thematic analysis. A total of 44 studies met the review criteria and were summarized and categorized into 4 themes: barriers to immunization, parental knowledge, attitude and behavior (KAB), health system factors and HCWs' KAB. This review found that immunization decision-making is a complex process. Parental KAB leads to immunization decisions. HCWs were also noted to be the trusted sources of immunization information. Further research can be conducted on how to improve parents' perceptions of immunization and immunization practices. Answer: Yes, parents' knowledge and practice regarding immunization are related to pediatric immunization compliance. Studies have shown that there is a significant association between parental knowledge-practices (KP) and children's immunization completeness. 
For instance, a study conducted in Mosul, Iraq, found that about half of the studied children were immunized with all vaccination doses, and these children's parents had adequate KP scores. There was a significant association of immunization completeness with total KP groups (PUBMED:24460878). Another study in the same region confirmed that more than half of the respondents had adequate knowledge-practice scores, and significant associations were noted for knowledge-practice groups with various factors, including parents' education level and family income (PUBMED:24485194). In Ethiopia, a study revealed that good parental knowledge, attitude, and practice (KAP) towards infant immunization were associated with factors such as parents' educational status and urban residency. The study recommended improving the KAP of parents about immunization through health education and promotion interventions (PUBMED:34007240). Moreover, a study assessing Iraqi parents' views identified lack of vaccine availability and parents' lack of education as main barriers to immunization, suggesting that improving parents' literacy might enhance immunization uptake (PUBMED:23879083). Similarly, in India, the most common reasons for partial or non-immunization included inadequate knowledge about immunization or subsequent doses, indicating the need for education and awareness among parents (PUBMED:20635642). In summary, the evidence suggests that parents' knowledge and practice are indeed related to pediatric immunization compliance, and efforts to improve parental KAP could lead to higher immunization rates among children.
Instruction: Hypercortisolemic Depressed Women: Lean but Viscerally Obese? Abstracts: abstract_id: PUBMED:26138707 Hypercortisolemic Depressed Women: Lean but Viscerally Obese? Background: Activation of the hypothalamic-pituitary-adrenal (HPA) system in depressed patients has been related to visceral adiposity. In contrast, low HPA system activity is associated with increased body fat in the general population. Our study intended to clarify whether HPA system activity is related to body weight and composition in depressed inpatients. Methods: In a cohort of 51 female and 20 male depressed inpatients, we measured saliva cortisol (HPA system activity), body mass index (BMI), waist circumference as well as body composition as reflected by bioimpedance. Results: In female patients, cortisol in saliva was negatively associated with fat-to-muscle ratio and BMI. Conclusion: In depressed inpatients, especially women, there is evidence that activation of the HPA system is related to relatively low body weight and low body fat content. abstract_id: PUBMED:11914443 Hypercortisolemic depression is associated with increased intra-abdominal fat. Objective: Similar to patients with a metabolic syndrome, patients with major depression are at increased risk of developing cardiovascular disorders. Interestingly, both disorders share a specific endocrine syndrome that promotes the accumulation of visceral fat, which again is considered a marker of increased cardiovascular morbidity and mortality. Methods: Intra-abdominal fat was measured in 22 postmenopausal depressed women and 23 age-matched healthy women by computer tomography at the level of lumbar vertebrae 1 (L1) and 4 (L4). Saliva was taken in patients and control subjects at 08:00 hours over a period of 7 drug-free days for the measurement of free cortisol. In patients only we performed an oral glucose tolerance test. 
Results: Compared with control subjects, depressed patients with elevated free cortisol concentrations showed similar visceral fat depots at L1 (113.0 ± 41.6 vs. 94.3 ± 53.2 cm²). Hypercortisolemic depressed patients also showed greater fat depots in this area (74.5 ± 55.5 cm², p = .04) than the normocortisolemic patients. However, a comparison of all patients with control subjects revealed no difference in fat accumulation at either L1 or L4. Finally, glucose concentrations during the glucose tolerance test were higher in hypercortisolemic than in normocortisolemic patients, whereas their insulin levels showed only a tendency toward being increased. Conclusions: Hypercortisolemic depressed patients suffer from resistance to insulin and increased visceral fat. The fact that hypercortisolemia reverses depression-related fat loss, particularly in the visceral area, might partially explain why major depression can be considered a risk factor for cardiovascular disorders. abstract_id: PUBMED:17224244 Hypercortisolemic depression is associated with the metabolic syndrome in late-life. Introduction: Depression has been hypothesized to be associated with metabolic abnormalities which increase the risk of cardiovascular disease (CVD) and diabetes. Such a link could be due to increased HPA-axis activity. This study investigates the cross-sectional relationship between depression, urinary cortisol and metabolic syndrome in an older population. Methods: Data are from 867 participants of the InChianti Study, aged 65 years. Depressive symptoms were assessed using the CES-D scale; cortisol levels were determined in 24-h urine samples. Metabolic syndrome was defined as three or more of the following: abdominal obesity, high triglycerides, low HDL cholesterol, high blood pressure, and high fasting glucose. Results: Clinically relevant depressed mood (CES-D ≥ 20) was present in 20.6% of the sample, and 24.5% had the metabolic syndrome.
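The "three or more of five components" rule used to define metabolic syndrome in the InChianti analysis can be sketched as a simple classifier. The component list comes from the abstract; the numeric cutoffs below are illustrative ATP III-style thresholds, not necessarily the exact ones used in the study.

```python
# Sketch of a "3 of 5 components" metabolic syndrome classifier.
# Cutoffs are illustrative ATP III-style values (assumed, not from the study).

def has_metabolic_syndrome(waist_cm, triglycerides_mg_dl, hdl_mg_dl,
                           systolic_bp, diastolic_bp, glucose_mg_dl,
                           female=True):
    criteria = [
        waist_cm > (88 if female else 102),        # abdominal obesity
        triglycerides_mg_dl >= 150,                # high triglycerides
        hdl_mg_dl < (50 if female else 40),        # low HDL cholesterol
        systolic_bp >= 130 or diastolic_bp >= 85,  # high blood pressure
        glucose_mg_dl >= 110,                      # high fasting glucose
    ]
    return sum(criteria) >= 3

# Only 2 of 5 criteria met (waist, blood pressure) -> not classified
print(has_metabolic_syndrome(90, 120, 55, 135, 80, 95))  # -> False
```

The same counting structure applies whichever guideline's thresholds (ATP III, IDF, harmonized) a given study adopts.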
After adjustment for sociodemographics and health indicators, depression score (per SD increase: OR=1.20, 95% CI=1.02-1.41) and urinary cortisol level (per SD increase: OR=1.23, 95% CI=1.01-1.51) were significantly associated with presence of metabolic syndrome. There was, however, a significant interaction (p=0.003) between depressed mood and urinary cortisol in the probability of having metabolic syndrome. The odds of metabolic syndrome in persons with both depressed mood and urinary cortisol excretion in the highest tertile was 1.84 (95% CI=1.02-3.34) compared to persons with neither condition. Discussion: This study suggests a synergistic relationship between depression, cortisol and metabolic syndrome. Hypercortisolemic depression may constitute a specific risk group for the metabolic syndrome. abstract_id: PUBMED:24643468 Concentrations of chromium, selenium, and copper in the hair of viscerally obese adults are associated with insulin resistance. Visceral adiposity is linked to the development of insulin resistance, which is a condition that may contribute to metabolic abnormalities and cardiovascular disease. Various minerals play essential roles in different metabolic functions in the body. Thus, the relationships between mineral concentrations in the hair and insulin resistance were analyzed in 144 Korean adults (71 viscerally obese subjects and 73 normal control subjects) in this cross-sectional study. Visceral obesity was measured using a bioelectrical impedance analysis (BIA), and insulin resistance levels were assessed using the homeostasis model assessment insulin resistance (HOMA-IR) index. The viscerally obese group exhibited significantly higher levels of serum glucose (96.5 vs 91.0 mg/dL, P = 0.023), insulin concentration (4.78 vs 2.98 μIU/mL, P = 0.003), and the HOMA-IR index (1.18 vs 0.64, P = 0.003) compared with the normal control group. 
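The insulin-resistance index compared above (HOMA-IR) is conventionally computed from fasting glucose and fasting insulin. A minimal sketch of the standard formula; note that the study reports group means of the index, which need not equal the index of the group-mean inputs.

```python
# Standard HOMA-IR formula (conventional definition, not necessarily the
# exact computation pipeline used in the cited study).

def homa_ir(fasting_glucose_mg_dl, fasting_insulin_uiu_ml):
    """HOMA-IR = glucose (mg/dL) x insulin (uIU/mL) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_uiu_ml / 405.0

# Plugging in the viscerally obese group's mean inputs (glucose 96.5 mg/dL,
# insulin 4.78 uIU/mL) gives a value near the reported group mean of 1.18.
print(round(homa_ir(96.5, 4.78), 2))  # -> 1.14
```

If glucose is measured in mmol/L the conventional divisor is 22.5 instead of 405.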
After adjusting for age and sex, there was a positive correlation between copper levels in the hair and the HOMA-IR index in the viscerally obese group (r = 0.241, P = 0.046) whereas chromium and selenium levels in the hair were negatively correlated with the HOMA-IR index (r = -0.256, P = 0.034, and r = -0.251, P = 0.038, respectively). Thus, chromium and selenium levels in the hair of viscerally obese adults were inversely associated with insulin resistance, whereas copper levels in the hair were positively associated with insulin resistance. This suggests that the mineral status of viscerally obese adults might play a role in the development of insulin resistance. abstract_id: PUBMED:17371477 Viscerally obese women with normal body weight have greater brachial-ankle pulse wave velocity than nonviscerally obese women with excessive body weight. Objective: To confirm the association of visceral obesity and brachial-ankle pulse wave velocity (baPWV) and to compare metabolic indices and baPWV between individuals who have normal body weight but are viscerally obese and individuals with excessive body weight who are not viscerally obese. Patients And Measurements: We recruited a total of 150 women, aged 22 to 67 years. We assessed body composition, measured by computed tomography (CT), and divided the study population into four groups, based on visceral adipose tissue area (normal, normal body weight but viscerally obese, excessive body weight but not viscerally obese, and excessive body weight and viscerally obese). The baPWV was measured, using a volume plethysmographic instrument. Results: Despite lower levels of total body fat, the women who had a normal body weight but were viscerally obese had a higher plasma triglyceride level and baPWV measurement and greater subcutaneous fat area (SFA) and thigh SFA than the women with excessive body weight who were not viscerally obese. 
After adjustment for age, mean blood pressure (BP), body mass index (BMI), triglyceride levels, fasting insulin levels, and free fatty acid (FFA) levels, baPWV was independently correlated with abdominal visceral fat area, as measured by CT (P = 0.001). Conclusions: Mean baPWV was higher in women with normal body weight who were viscerally obese than in women who had excessive body weight but were not viscerally obese, and abdominal visceral fat was an independent factor for baPWV. These results suggest that early detection and intervention in viscerally obese individuals, even those within a normal BMI range, could be needed to prevent atherosclerosis and cardiovascular disease (CVD). abstract_id: PUBMED:36031556 The short and long-term impact of the healthy kids initiative on depressed mood in youth living with obesity. Purpose: The purpose of this study is to determine the effect of a free, comprehensive program with social support on depressed mood in youth (aged 11-17 years) living with obesity in a prospective, longitudinal, community-based study. Methods: Youth living with obesity (BMI > 30), residing in three cities in Saskatchewan, Canada, were eligible to participate in a 12-week program designed to facilitate behavioural change with five physical activity sessions a week, one dietary session a week, and one group cognitive behavioural session per week offered by health care professionals. Baseline data were collected on depressed mood (Centre for Epidemiological Studies-Depression 12 or CES-D 12), health-related quality of life (SF-12), and self-esteem (NLSCY) at baseline, 12 weeks, and 52 weeks. Results: Overall, 2598 youth started and 2292 (88.2%) completed the 12-week program, with 1703 (74.3%) of participants presenting for 52-week follow-up.
Depressed mood improved significantly over the course of the program: 79.2% of participants had depressed mood at the start of the program, and this decreased to 64.7% at the end of 12 weeks, an overall relative reduction of 18.4% (Cohen's d = 0.94). The mean CES-D 12 score at one year was significantly lower than the mean baseline score (17.90 compared to 21.11, p < .001). By depressed mood severity, 58.7% of participants had severe depressed mood at baseline, which reduced to 31.6% at 12 weeks, and 27.9% at 52 weeks. Age and maternal occupation were initially significantly associated with depressed mood at 52-week follow-up (unadjusted). However, after binary logistic regression was used to determine the independent risk factors for having depressed mood at 52-week follow-up, only older age (OR 1.88, 95% CI 1.52-2.34, p < .001) was significant. Conclusion: The Healthy Kids Initiative lends more evidence that healthy behavioural interventions can have a positive impact on depressed mood among youth living with obesity. abstract_id: PUBMED:29644388 The relation between obesity and depressed mood in a multi-ethnic population. The HELIUS study. Purpose: To examine the association between obesity and depressed mood in a large multi-ethnic population and check for consistency in this association across six ethnic groups. Methods: Data of 21,030 persons (18-70 years) were sourced from the HELIUS study. Cross-sectional relationships between obesity measures [body mass index (kg/m2) and waist circumference (cm)] and depressed mood (PHQ-9 score ≥ 10) were analysed. Consistency of associations was investigated across ethnic groups by interaction terms (ethnicity*obesity measures) in basic (age, sex, education) and fully (health behaviours and somatic health) adjusted models. Results: Obesity was prevalent in all ethnic groups, but varied substantially.
After sociodemographic adjustment, obesity measures were associated with increased odds of depressed mood but this was inconsistent across ethnic groups. Obesity (BMI ≥ 30 or highest waist circumference quartile) was strongly and significantly associated with depressed mood in the Dutch [Odds Ratio (OR) = 1.72; 95% Confidence intervals (CI) 1.24-2.40, and OR = 1.86; 95% CI 1.38-2.50], respectively, and African Surinamese (OR = 1.60; 95% CI 1.29-1.98 and OR = 1.59; 95% CI 1.27-2.00, respectively) but had a weaker, non-significant association in other ethnic groups (South-Asian Surinamese, Ghanaian, Moroccan, Turkish groups). Adjustment for health behaviours and somatic health had limited effect on this pattern. Conclusion: Obesity was associated with a higher risk of depressed mood. However, ethnic differences were found: the obesity-depressed mood association was strong in the Dutch and African Surinamese populations, but not in other ethnic groups. Future studies should explore whether differential normative values or pathophysiology across ethnic groups explain why the obesity-depression association is inconsistent across ethnic groups. abstract_id: PUBMED:11707533 Effects of the FABP2 A54T mutation on triglyceride metabolism of viscerally obese men. Objective: Viscerally obese individuals are frequently characterized by a proatherogenic condition. A missense mutation (A54T) in the fatty acid binding protein type 2 (FABP2) gene has been associated with insulin resistance and obesity. This study examined the effect of this mutation on lipoprotein levels in viscerally obese hyperinsulinemic condition. Research Methods And Procedures: A total of 217 men were assigned to one of two groups based on their FABP2 A54T polymorphism. Results: The two genotypic groups showed no difference in either physiological characteristics or lipoprotein/lipid profile, before or after statistical adjustment for age. 
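The HELIUS findings above are reported as odds ratios with 95% confidence intervals. A sketch of how an unadjusted OR and its Wald confidence interval are derived from a 2x2 table; the counts below are made up for illustration (the abstract reports only adjusted, model-based estimates).

```python
# Unadjusted odds ratio with a Wald 95% CI from a 2x2 table.
# Counts are hypothetical, purely for illustration.
import math

def odds_ratio_ci(exposed_cases, exposed_noncases,
                  unexposed_cases, unexposed_noncases):
    a, b = exposed_cases, exposed_noncases
    c, d = unexposed_cases, unexposed_noncases
    or_ = (a * d) / (b * c)                       # cross-product ratio
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical: 40/160 depressed among obese vs 25/175 among non-obese
or_, lo, hi = odds_ratio_ci(40, 160, 25, 175)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # -> OR = 1.75 (95% CI 1.02-3.01)
```

Adjusted ORs such as those in the abstract come from a logistic regression with covariates, but the exponentiated-coefficient-with-Wald-CI logic is the same.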
From this initial sample, 50 men agreed to have their postprandial lipid response assessed, and 10 T54/A54 heterozygotes were then individually matched for visceral adipose tissue accumulation and fasting plasma triglyceride (TG) levels with 10 A54/A54 homozygotes. High-density lipoprotein (HDL)-TG levels were significantly increased in the fasting state as well as 4 hours after the test meal (p = 0.04 and p = 0.0008, respectively) in men bearing the A54T mutation. In addition, the area under the curve of postprandial HDL-TG levels was also significantly higher among T54/A54 heterozygotes than among A54/A54 homozygotes (p = 0.04). Interestingly, fasting TG concentrations in large TG-rich lipoproteins (large-TRL; S(f) > 400) were correlated with HDL-TG levels at 4 (r = 0.74, p = 0.01) and 8 hours (r = 0.73, p = 0.01) after the test meal in T54/A54 heterozygotes only. Discussion: The FABP2 A54T missense mutation may contribute to the TG enrichment of HDL in the postprandial state that, in turn, may alter the risk of atherosclerotic vascular disease.
Adipose tissue depot density and volume were quantified by computed tomography; traditional CVD risk factors were quantified. Results: Higher SAT density was correlated with a higher VAT/SAT ratio in men (r = 0.17; P < .0001) but not in women (r = 0.04; P ≥ .05). More adverse levels of CVD risk factors were observed in the high SAT density/high VAT/SAT ratio group than in the referent group (low density/low ratio). For example, women had an increased risk of diabetes (odds ratio [OR], 6.7; 95% confidence interval [CI], 2.6-17.6; P = .0001) and hypertension (OR, 1.6; 95% CI, 1.1-2.4; P = .009). Additional adjustment for SAT volume generally strengthened these associations (diabetes OR, 10.8; 95% CI, 4.1-29.0; hypertension OR, 2.5; 95% CI, 1.7-3.7; all P < .0001). These trends were similar but generally weaker in men. Conclusion: High fat density, an indirect marker of fat quality, is associated with the propensity to store fat viscerally vs subcutaneously and is jointly characterized by an increased burden of CVD risk factors.
Results: We studied 15, 9, and 11 participants with visceral obesity, general obesity, and lean body type, respectively. The generally obese group were all women and 2/3 African American, whereas the visceral obesity and lean groups were predominantly white and men who have sex with men. Markers of systemic inflammation and sCD14 were higher in general obesity compared with lean. sCD14 was positively correlated with VAT, but not SAT. Bacterial diversity was significantly reduced in participants with visceral and general obesity and composition of intestinal microbiota was significantly different from lean body types. Bacterial alpha diversity was negatively correlated with VAT area, waist/hip ratio, and sCD14, but not with SAT area. Conclusions: In this exploratory study, obesity in general was associated with dysbiotic intestinal microbiota. The relationships of VAT to bacterial diversity and sCD14 suggest that dysbiosis in viscerally obese PLWH could be associated with heightened inflammatory state. Answer: The relationship between hypercortisolemia and body composition in depressed women appears to be complex. According to the studies provided, hypercortisolemic depressed women may not necessarily be lean but can exhibit increased visceral adiposity, which is a specific type of obesity characterized by fat accumulation around the abdominal organs. One study found that in female depressed inpatients, cortisol in saliva (indicative of HPA system activity) was negatively associated with fat-to-muscle ratio and BMI, suggesting that activation of the HPA system is related to relatively low body weight and low body fat content (PUBMED:26138707). However, another study reported that hypercortisolemic depressed patients showed greater fat depots in the visceral area compared to normocortisolemic patients, indicating that hypercortisolemia reverses depression-related fat loss, particularly in the visceral area (PUBMED:11914443). 
Furthermore, hypercortisolemic depression has been associated with the metabolic syndrome in late-life, suggesting a synergistic relationship between depression, cortisol, and metabolic syndrome, where hypercortisolemic depression may constitute a specific risk group for the metabolic syndrome (PUBMED:17224244). Additionally, viscerally obese women with normal body weight have been found to have greater brachial-ankle pulse wave velocity than nonviscerally obese women with excessive body weight, indicating that visceral obesity can be present even in individuals with a normal BMI and can be associated with increased cardiovascular risk (PUBMED:17371477). In summary, while hypercortisolemic depressed women may have a lower BMI and fat-to-muscle ratio, they can still be viscerally obese, which is associated with increased cardiovascular risk and metabolic syndrome. This suggests that hypercortisolemia in depressed women may lead to a specific pattern of fat distribution that is not necessarily reflected in overall leanness but rather in the accumulation of visceral fat.
Instruction: Does postoperative delirium limit the use of patient-controlled analgesia in older surgical patients? Abstracts: abstract_id: PUBMED:19672166 Does postoperative delirium limit the use of patient-controlled analgesia in older surgical patients? Background: Postoperative pain is an independent predictor of postoperative delirium. Whether postoperative delirium limits patient-controlled analgesia (PCA) use has not been determined. Methods: The authors conducted a nested cohort study in older patients undergoing noncardiac surgery and used PCA for postoperative analgesia. Delirium was measured by using the Confusion Assessment Method. The authors computed a structural equation model to determine the effects of pain and opioid consumption on delirium status and the effect of delirium on opioid use. Results: Of 335 patients, 108 (32.2%) developed delirium on postoperative day (POD) 1, and 120 (35.8%) on POD 2. Postoperative delirium did not limit the use of PCA. Patients with postoperative delirium used more PCA in a 24-h period (POD 2) compared to those without delirium (mean dose of hydromorphone +/- SE adjusted for covariates was 2.24 +/- 0.71 mg vs. 1.25 +/- 0.67 mg, P = 0.02). Despite more opioid use, patients with delirium reported higher Visual Analogue Scale scores than those without delirium (POD 1: mean visual analog scale +/- SE at rest 4.2 +/- 0.23 vs. 3.3 +/- 0.22, P = 0.0051; POD 2: 3.3 +/- 0.23 vs. 2.5 +/- 0.19, P = 0.004). Path coefficients from the structural equation model revealed that pain and opioid use affect delirium status, but delirium does not affect subsequent opioid dose. Conclusions: Postoperative delirium did not limit PCA use. Despite more opioid use, visual analog scale scores were higher in patients with delirium. Future studies on delirium should consider the role of pain and pain management as potential etiologic factors.
abstract_id: PUBMED:20382014 Intravenous patient-controlled analgesia and management of pain in post-surgical elderly with cancer. Postoperative pain is often underestimated in elderly patients, based on considerations of the limited function of kidney and liver with advancing age or presumed high threshold of pain sensation. Achieving adequate pain management for the older patient is complicated by comorbid diseases, increased risk of adverse drug reactions, and physician factors such as inadequate training and reluctance to prescribe opioid medications. Anticholinergic load has been related to impaired cognitive and physical function. Older patients are more likely than younger patients to have impaired physical status. Cognitive problems frequently observed may make evaluation of pain difficult. Patient-controlled analgesia (PCA) permits analgesic titration according to the patient's own needs and greatly reduces the risk of overdose because of increased individual sensitivity to analgesic drugs. The success of such a technique depends on close evaluation of all preoperative and intraoperative factors that can cause or contribute to acute delirium. In the present paper we review the literature on this pivotal field. abstract_id: PUBMED:34498474 Intravenous patient-controlled analgesia does not increase the risk of postoperative delirium compared to patient-controlled epidural analgesia: a propensity score-matched retrospective cohort study. Background: It is unclear whether the doses of opioids and the routes of administration used for postoperative analgesic management are associated with delirium. We aimed to compare the incidence of postoperative delirium (POD) between intravenous patient-controlled analgesia (IVPCA) and patient-controlled epidural analgesia (PCEA) in patients who underwent postoperative analgesic management using opioids. Methods: We retrospectively investigated surgical patients (n=3,324) who received patient-controlled analgesia (PCA). 
Morphine was used for IVPCA, and fentanyl and ropivacaine were used for PCEA. The patients' background characteristics, perioperative management, presence of POD, and postoperative analgesia technique after IVPCA (n=1,184) or PCEA (n=2,140) were assessed. We divided the patients into IVPCA and PCEA groups and compared the incidence of POD by propensity score matching. We used the independent t-test for comparisons between the groups, and P<0.05 was considered statistically significant. Results: POD was noted in a total of 125 patients (3.8%); 55 patients (4.6%) with IVPCA and 70 patients (3.3%) with PCEA (P=0.046). There was no statistically significant difference in cumulative opioid usage up to postoperative day 2 (in mg) between patients with and without POD (POD 62.7±39.8 vs. non-POD 48.9±50.3, P=0.10). After propensity score matching, 1,156 patients with similar baseline characteristics were selected. POD was noted in 22 of 578 patients (3.8%) in the IVPCA group and 30 of 578 patients (5.2%) in the PCEA group, with no difference between the two groups (P=0.256). On the other hand, opioid usage was higher in the IVPCA group than in the PCEA group (P<0.001). Conclusions: There was no difference in the incidence of POD between morphine IVPCA and fentanyl PCEA when the patient characteristics were matched using propensity score matching. POD occurs regardless of the route and dose of opioid administration.
Design: Systematic narrative review of randomised controlled trials reported following the PRISMA checklist. Methods: A systematic search was conducted of the literature published from 1990 to October 2022 in Medline, CINAHL and Ageline and of the grey literature in Google Scholar. Randomised controlled trials were retrieved that assessed the effectiveness of preoperative delirium prevention interventions for older surgical patients at intermediate to high risk of delirium. Data were extracted using a data extraction tool, and results were tabulated. Studies were assessed for bias using the Cochrane Collaboration Risk of Bias tool. Results: Twenty-one studies met the selection criteria including N = 5096 participants. Two studies tested cognitive training, two studies tested fascia iliaca compartment block and one study assessed femoral nerve block. Ten studies tested prophylactic medications including methylprednisolone. Five studies investigated geriatric assessment and management. One study assessed transcutaneous electrical acupoint stimulation. In the two studies testing fascia iliaca compartment block, there was a reduction in postoperative delirium for orthopaedic patients. Methylprednisolone reduced postoperative delirium in orthopaedic patients and in those undergoing gastrointestinal surgery. Results of all other interventions on the occurrence of postoperative delirium and additional outcomes including the severity and duration of delirium were inconclusive. Conclusions: Despite the promising results for fascia iliaca compartment block and methylprednisolone, there is limited knowledge regarding evidence-based delirium prevention interventions. Most studies had small sample sizes indicating that the current evidence is exploratory. There is an urgent need for the funding and conduct of trials to test preventative interventions for older surgical patients at intermediate to high risk of developing delirium. 
abstract_id: PUBMED:21444180 Patient-controlled analgesia in patients with advanced cancer. Should patients be in control? Patient-controlled analgesia (PCA) has been incorporated into the management of chronic pain in cancer patients despite limited evidence of safety and efficacy. Potential benefits of PCA include decreased delay in the administration of opioids from the time requested, rapidity and ease of dose titration, and adaptability to the variable analgesic dosing needs, as well as diurnal changes in patients. PCA may be beneficial for the initial titration of opioids but has the potential to either induce or exacerbate delirium in cancer patients. Clinicians need to closely monitor for symptoms of delirium in advanced cancer patients. The following case presentation highlights the complication of delirium in a cancer patient who was prescribed PCA. Patients with advanced cancer are at increased risk for delirium, which is often difficult to predict. abstract_id: PUBMED:10691230 Comparison of intravenous or epidural patient-controlled analgesia in the elderly after major abdominal surgery. Background: Patient-controlled analgesia (PCA) with intravenous morphine and patient-controlled epidural analgesia (PCEA), using an opioid either alone or in combination with a local anesthetic, are two major advances in the management of pain after major surgery. However, these techniques have been evaluated poorly in elderly people. This prospective, randomized study compared the effectiveness on postoperative pain and safety of PCEA and PCA after major abdominal surgery in the elderly patient. Methods: Seventy patients older than 70 yr of age and undergoing major abdominal surgery were assigned randomly to receive either combined epidural analgesia and general anesthesia followed by postoperative PCEA, using a mixture of 0.125% bupivacaine and sufentanil (PCEA group), or general anesthesia followed by PCA with intravenous morphine (PCA group). 
Pain intensity was tested three times daily using a visual analog scale. Postoperative evaluation included mental status, cardiorespiratory and gastrointestinal functions, and patient satisfaction scores. Results: Pain relief was better at rest (P = 0.001) and after coughing (P = 0.002) in the PCEA group during the 5 postoperative days. Satisfaction scores were better in the PCEA group. Although incidence of delirium was comparable in the PCA and PCEA groups (24% vs. 26%, respectively), mental status was improved on the fourth and fifth postoperative days in the PCEA group. The PCEA group recovered bowel function more quickly than did the PCA group. Cardiopulmonary complications were similar in the two groups. Conclusion: After major abdominal surgery in the elderly patient, patient-controlled analgesia, regardless of the route (epidural or parenteral), is effective. The epidural route using local anesthetics and an opioid provides better pain relief and improves mental status and bowel activity. abstract_id: PUBMED:25031814 Intravenous Patient-controlled Analgesia Has a Positive Effect on the Prognosis of Delirium in Patients Undergoing Orthopedic Surgery. Background: Postoperative delirium is relatively common. However, the relationship between intravenous patient-controlled analgesia (IV-PCA) and delirium has not been thoroughly investigated. The aim of this study was to evaluate the effects of IV-PCA on the prognosis of postoperative delirium in patients undergoing orthopedic surgery. Methods: Medical records of 129 patients with postoperative delirium were reviewed. Patients were divided into two groups according to whether they used IV-PCA with fentanyl and ketorolac. The IV-PCA group consisted of 73 patients who were managed with IV-PCA; the NO-PCA group consisted of 56 patients who were managed without PCA. 
Results: Incidences of multiple psychiatric consultations and prolonged delirium were significantly lower in patients using IV-PCA with fentanyl and ketorolac than in those without PCA. Conclusions: We recommend the use of IV-PCA for pain control and management of delirium in patients with postoperative delirium. abstract_id: PUBMED:34853109 Exploring patients' experiences of analgesia after major lower limb amputation: a qualitative study. Objectives: To explore patient experiences, understanding and perceptions of analgesia following major lower limb amputation. Design: Qualitative interview study, conducted as part of a randomised controlled feasibility trial. Setting: Participants were recruited from two general hospitals in South Wales. Participants: Interview participants were patients enrolled in PLACEMENT (Perineural Local Anaesthetic Catheter aftEr Major lowEr limb amputatioN Trial): a randomised controlled feasibility trial comparing the use of perineural catheter (PNC) versus standard care for postoperative pain relief following major lower limb amputation. PLACEMENT participants who completed 5-day postoperative follow-up, were able and willing to participate in a face-to-face interview, and had consented to be contacted, were eligible to take part in the qualitative study. A total of 20 interviews were conducted with 14 participants: 10 male and 4 female. Methods: Semi-structured, face-to-face interviews were conducted with participants over two time points: (1) up to 1 month and (2) at least 6 months following amputation. Interviews were audio-recorded, transcribed verbatim and analysed using a framework approach. Results: Interviews revealed unanticipated benefits of PNC usage for postoperative pain relief. Participants valued the localised and continuous nature of this mode of analgesia in comparison to opioids. 
Concerns about opioid dependence and side effects of pain relief medication were raised by participants in both treatment groups, with some reporting trying to limit their intake of analgesics. Conclusions: Findings suggest routine placement of a PNC following major lower limb amputation could reduce postoperative pain, particularly for patient groups at risk of postoperative delirium. This method of analgesic delivery also has the potential to reduce preoperative anxiety, alleviate the burden of pain management and minimise opioid use. Future research could further examine the comparison between patient-controlled analgesia and continuous analgesia in relation to patient anxiety and satisfaction with pain management. Trial Registration Number: ISRCTN: 85710690; EudraCT: 2016-003544-37. abstract_id: PUBMED:26890834 Frailty and cognitive impairment: Unique challenges in the older emergency surgical patient. Introduction: Older patients (>65 years of age) admitted as general surgical emergencies increasingly require improved recognition of their specific needs relative to younger patients. Two such needs are frailty and cognitive impairment. These are evolving research areas that the emergency surgeon increasingly requires knowledge of to improve short- and long-term patient outcomes. Methods: This paper reviews the evidence for frailty and cognitive impairment in the acute surgical setting by defining frailty and cognitive impairment, introducing methods of diagnosis, discussing the influence on prognosis and proposing strategies to improve older patient outcomes. Results: Frailty is present in 25% of the older surgical population. Using frailty-scoring tools, frailty was associated with a significantly longer hospital stay and higher mortality at 30 and 90 days after admission to an acute surgical unit.
Cognitive impairment is present in a high number of older acute surgical patients (approximately 70%), whilst acute onset cognitive impairment, termed delirium, is documented in 18%. However, patients with delirium had significantly longer hospital stays and higher in-hospital mortality than those with cognitive impairment. Conclusions: Improved knowledge of frailty and delirium by the emergency surgeon allows the specialised needs of older surgical patients to be taken into account. Early recognition, and consideration of minimally invasive surgery or radiological intervention alongside potentially transferable successful elective interventions such as comprehensive geriatric assessment, may help to improve short- and long-term patient outcomes in this vulnerable population. abstract_id: PUBMED:33192244 The Effect and Optimal Dosage of Dexmedetomidine Plus Sufentanil for Postoperative Analgesia in Elderly Patients With Postoperative Delirium and Early Postoperative Cognitive Dysfunction: A Single-Center, Prospective, Randomized, Double-Blind, Controlled Trial. Background: Postoperative delirium (POD) and postoperative cognitive dysfunction (POCD) are common complications after major surgery among elderly patients. Dexmedetomidine (DEX) is less frequently explored for its effects in patients with postoperative neurocognitive disorders. This study investigated the effect and optimal dosage of DEX for patient-controlled analgesia (PCA) on POD and early POCD after major surgery among elderly patients. Methods: Patients in four groups received continuous infusion of DEX 0, 100, 200, and 400 μg with sufentanil 150 μg for PCA immediately after surgery. POD and POCD were assessed on postoperative days 1, 2, 3, and 7 by using the Confusion Assessment Method (CAM) and Mini-Mental State Examination (MMSE) scales. 
Furthermore, the incidence of POD and POCD in all four groups over the 7 postoperative days, classified by high-risk factors (age, education, surgical site, and surgical category), sedation level, postoperative pain intensity, and side effects were assessed. Results: The overall incidence rates of POD and early POCD 7 days after surgery were lower in the DEX 200 μg and 400 μg groups than in the DEX 0 μg and 100 μg groups (P < 0.05). Compared with DEX 200 μg, DEX 400 μg reduced early POCD in patients who underwent open surgery (P < 0.05). There were no intergroup differences in the postoperative sedation level, pain intensity, and side effects. Conclusion: The continuous infusion of DEX 200 μg or DEX 400 μg in PCA significantly decreased the incidence of POD and early POCD after major surgery without increasing any side effects. Compared with DEX 200 μg, DEX 400 μg was preferred for reducing early POCD in patients who underwent open surgery. Answer: Postoperative delirium does not limit the use of patient-controlled analgesia (PCA) in older surgical patients. In fact, a study found that patients with postoperative delirium used more PCA in a 24-hour period on postoperative day 2 compared to those without delirium. Despite using more opioids, these patients reported higher pain scores, suggesting that pain and opioid use affect delirium status, but delirium does not affect subsequent opioid dose (PUBMED:19672166). Additionally, intravenous PCA does not increase the risk of postoperative delirium compared to patient-controlled epidural analgesia, and the incidence of delirium occurs regardless of the route and dose of opioid administration (PUBMED:34498474). Furthermore, the use of IV-PCA with fentanyl and ketorolac has been shown to have a positive effect on the prognosis of delirium in patients undergoing orthopedic surgery (PUBMED:25031814).
However, it is important to note that achieving adequate pain management in older patients is complicated by factors such as comorbid diseases, increased risk of adverse drug reactions, and cognitive problems that may make the evaluation of pain difficult (PUBMED:20382014). Clinicians need to closely monitor for symptoms of delirium in advanced cancer patients using PCA, as PCA has the potential to either induce or exacerbate delirium (PUBMED:21444180). In summary, while postoperative delirium does not limit the use of PCA in older surgical patients, careful consideration and monitoring are required to manage pain effectively and minimize the risk of exacerbating delirium.
Instruction: Are there HLA combinations typical supporting for or making resistant against aggressive and/or chronic periodontitis? Abstracts: abstract_id: PUBMED:12941076 Are there HLA combinations typical supporting for or making resistant against aggressive and/or chronic periodontitis? Objective And Background: Human leukocyte antigens (HLA)/alleles have been considered as risk factors for periodontal disease. However, data from HLA associations are not consistent. Diversity of HLA antigen combinations and en bloc inherited HLA alleles (haplotypes), as known in systemic diseases, can be variable factors in disease association. Therefore, the aim of this study was to investigate the incidence of HLA homozygosities, heterozygosities and estimated haplotypes in German Caucasian groups with generalized aggressive (N = 50) and chronic (N = 102) periodontitis in comparison to control probands without periodontitis (N = 102). Methods: HLA-A, -B, -Cw, -DRB1, -DRB3/4/5, -DQB1 typing was carried out using both serologic (microlymphocytotoxicity test) and genomic (PCR-SSP: PCR with sequence specific primers) techniques. Frequencies of all homozygosities, heterozygosities and haplotypes were determined in all patients and controls. Results: In both patient groups, associations to HLA homozygosities and heterozygosities were found. Most striking was the significantly lower frequency of HLA-DRBblank* homozygosity (non-DRB3*/DRB4*/DRB5*) in chronic periodontitis (p < 0.05), whereas HLA-DRB1*15 : DRB5*(DR51) : DQB1*06 showed a slightly higher homozygosity rate in all patients. As the combination HLA-A*02,A*03 was significantly decreased in aggressive periodontitis (p < 0.05), HLA-A*01,A*03 heterozygosity was significantly lowered in chronic periodontitis (p < 0.05).
Among others, the known positive associations for HLA-A*68/69 (A28) and HLA-DRB1*04 were confirmed by the haplotypes HLA-A*68/69 : Cw*07 : B*18 in aggressive periodontitis (p < 0.05) and HLA-Cw*08 : B*14 : DRB1*04 in chronic periodontitis (p < 0.05). Conclusion: The present study elucidates the variety of HLA associations and therefore the difficulty of assigning single HLA markers to periodontal disease. Susceptibility/resistance of both aggressive and chronic periodontitis may rather be influenced by particular HLA marker combinations. Associated HLA haplotypes may be of further importance for unknown gene loci representing a part of the genetic background for periodontitis. The different associations in aggressive and chronic periodontitis indicate different susceptibility/resistance factors for both diseases. abstract_id: PUBMED:12485327 Gender differences in HLA phenotype frequencies found in German patients with generalized aggressive periodontitis and chronic periodontitis. HLA antigens have been considered as risk factors for periodontitis. Differences in prevalence and in the extent of attachment loss between males and females have suggested that gender-dependent HLA deviations could play a role in individual predisposition to periodontitis. The aim of the present study was therefore to investigate the incidence of gender-dependent HLA associations in 50 patients with generalized aggressive periodontitis (AP) and 102 patients with chronic periodontitis (CP) in comparison to 102 probands without any attachment loss caused by periodontitis. HLA typing was carried out using a microlymphocytotoxic test and a polymerase chain reaction with sequence-specific primers (PCR-SSP). Female AP patients showed an increase in the frequency of HLA-A*68/69 and a decrease in the frequency of DRBblank* (non-DRB3/4/5*) and DQB1*05-positive probands. Only in female CP patients was HLA-DQB1*0303 absent, whereas HLA-DQB1*06 homozygosity increased significantly.
With regard to the (AP + CP) periodontitis group as a whole, the increased frequency of HLA-DQB1*06 homozygosity in females was similar to the findings obtained in the AP group. Evidently, gender is a confounding variable, which should be considered in further studies of HLA and periodontitis. abstract_id: PUBMED:24738589 Treatment of aggressive periodontitis. Despite etiological differences between aggressive and chronic periodontitis, the treatment concept for aggressive periodontitis is largely similar to that for chronic periodontitis. The goal of treatment is to create a clinical condition that is conducive to retaining as many teeth as possible for as long as possible. When a diagnosis has been made and risk factors have been identified, active treatment is commenced. The initial phase of active treatment consists of mechanical debridement, either alone or supplemented with antimicrobial drugs. Scaling and root planing has been shown to be effective in improving clinical indices, but does not always guarantee long-term stability. Antimicrobials can play a significant role in controlling aggressive periodontitis. Few studies have been published on this subject for localized aggressive periodontitis, but generalized aggressive periodontitis has been subject to more scrutiny. Studies have demonstrated that systemic antibiotics as an adjuvant to scaling and root planing are more effective in controlling disease compared with scaling and root planing alone or with supplemental application of local antibiotics or antiseptics. It has also become apparent that antibiotics ought to be administered with, or just after, mechanical debridement. Several studies have shown that regimens of amoxicillin combined with metronidazole or regimens of clindamycin are the most effective and are preferable to regimens containing doxycycline. Azithromycin has been shown to be a valid alternative to the regimen of amoxicillin plus metronidazole. 
A limited number of studies have been published on surgical treatment in patients with aggressive periodontitis, but the studies available show that the effect can be comparable with the effect on patients with chronic periodontitis, provided that proper oral hygiene is maintained, a strict maintenance program is followed and modifiable risk factors are controlled. Both access surgery and regenerative techniques have shown good results in patients with aggressive periodontitis. Once good periodontal health has been obtained, patients must be enrolled in a strict maintenance program that is directed toward controlling risk factors for disease recurrence and tooth loss. The most significant risk factors are noncompliance with regular maintenance care, smoking, high gingival bleeding index and poor plaque control. There is no evidence to suggest that daily use of antiseptic agents should be part of the supportive periodontal therapy for aggressive periodontitis. abstract_id: PUBMED:7913961 Unique intronic variations of HLA-DQ beta gene in early-onset periodontitis. Human leukocyte antigen (HLA) class II beta chain plays an important role in the recognition of foreign antigens in immune reactions. Different forms of immune reaction may be concerned with initiation and progression of infectious diseases such as periodontitis. In this study we examined the frequency of HLA class II serotype and the variation of HLA class II beta gene in periodontitis patients. HLA serotypic frequencies in 70 Japanese patients with periodontitis and 26 individuals with periodontal health were examined. No HLA serotype specific to any type of periodontitis was observed. In order to detect differences among some HLA serotypes, restriction fragment length polymorphism (RFLP) analysis was undertaken with cDNA probes for HLA-DR beta and HLA-DQ beta genes in 20 subjects (15 patients and 5 healthy individuals). 
Atypical BamHI and EcoRI restriction sites were found in the HLA-DQ beta gene from 3 patients with early-onset periodontitis. In addition to these 20 subjects, an additional 80 subjects (40 patients and 40 healthy individuals) were screened for the atypical BamHI restriction site using the polymerase chain reaction method. It was detected in 7 patients with early-onset periodontitis, 1 patient with adult periodontitis, and 3 healthy subjects. No clinical differences except age were found between patients with this gene variation and other patients. Interestingly, all 3 healthy subjects with this gene variation were from subjects whose family members developed early-onset periodontitis with the gene variation. Atypical BamHI and EcoRI restriction sites and 41-nt repeated sequence were found in the intron before the third exon of HLA-DQB gene. These results suggest that these intronic gene variations may be useful as gene markers for a subpopulation of early-onset periodontitis and might affect immune reactions such as antigen recognition. abstract_id: PUBMED:12108119 Localized juvenile periodontitis or localized aggressive periodontitis. In general, the new term "localized aggressive periodontitis" replaces what was previously known as "localized juvenile periodontitis." Similarly, "generalized aggressive periodontitis" replaces what used to be called "generalized juvenile periodontitis." Aggressive periodontitis is a specific type of periodontitis with clearly identifiable clinical and laboratory findings, making it sufficiently different from chronic periodontitis and warranting separate classification. Not all the characteristics must be present to assign a diagnosis or classify the disease. The diagnosis may be based on clinical, radiographic, and historical data, and laboratory testing may not be essential for assigning a diagnosis. 
abstract_id: PUBMED:12296785 HLA-A, B, Cw, DRB1, DRB3/4/5, DQB1 in German patients suffering from rapidly progressive periodontitis (RPP) and adult periodontitis (AP). Background/aim: There is growing indication that differences in host response determine susceptibility and resistance to periodontal disease. Particularly, the effect of histocompatibility antigens (HLA) on early onset periodontitis (EOP) has been studied. As most of the results are not conclusive and to date no report has been published on German patients, the aim of this study was to investigate the distribution of HLA alleles in a group of 50 German RPP patients and 102 German AP patients and to compare them to 102 control probands without periodontitis. Methods: Diagnosis was established according to standardised clinical criteria. HLA typing was performed using serologic and molecular biologic (PCR-SSP) techniques. Results: Compared to the controls, RPP patients had a significantly higher frequency of HLA-DRB1*13 and a significantly lower frequency of HLA-DRBblank*(non-DRB3/4/5). AP patients showed a significantly increased occurrence of HLA-B*14 and -Cw*08 as well as a significantly decreased frequency of HLA-A*03. In both patient groups HLA-A*11 and -A*29 had an increased frequency and HLA-A*31 and -A*30/31 were decreased. These differences were statistically significant in the whole patient group (RPP + AP). Conclusions: Based on modern DNA techniques the present study shows an association of HLA to both RPP and AP. Certain HLA alleles seem to be associated with susceptibility or resistance to periodontitis in general. However, before this knowledge can be used for differential diagnosis or prognosis, further investigations are necessary. abstract_id: PUBMED:24872623 Aggressive periodontitis: A clinico-hematological appraisal. Background: Human leukocyte antigens (HLA) have been considered a candidate of genetic risk markers for aggressive periodontitis (AP).
AP has also been associated with polymorphonuclear leukocyte (PMN) dysfunction. The role of monocyte subsets in AP has also not been completely explored. Therefore, the present study was undertaken to assess, in AP subjects, the possible association between defective PMN adhesion and β2-integrin expression; defective neutrophil migration and actin polymerization level; the expression of ABO blood group and HLA antigen; and the percentage of CD14+ CD16+ monocytes and CD45RA monocytes. All these parameters have been compared with the subjects of chronic periodontitis (CP) and healthy controls. Materials And Methods: A total of 45 subjects of the age group 20-50 years, free from any known systemic disease, were divided into three groups - Group I - periodontally healthy control (n = 15), Group II - CP (n = 15) and Group III - AP (n = 15). Peripheral blood samples were collected. ABO grouping and HLA typing were performed. β2-integrin expression, actin polymerization level and percentage of CD14+ CD16+ monocytes and CD45RA monocytes were estimated by fluorescence-activated cell sorter analysis. Results: Most of the subjects of AP belonged to the blood group AB, and an increased frequency of HLA-A30, CW1 and DR1 (P < 0.1) and B44 and DQ2 (P < 0.05) was also observed in this group. In the AP group, both average values (β2-integrin and actin level) were significantly less than those of normal subjects (P < 0.001). The mean percentage of CD14+ CD16+ monocytes was found to be maximum in CP, followed by AP, and then in healthy subjects, while the mean percentage of CD45RA was maximum in AP, followed by CP, and then in healthy subjects. Conclusions: With the present state of knowledge from this study, a definite association of ABO blood groups and HLA phenotypes with periodontal diseases is yet to be established. Leukocytic functional defects were found in AP subjects.
A statistically significant percentage of CD14+ CD16+ and CD45RA monocytes was found in AP subjects as compared with the normal control and CP groups. abstract_id: PUBMED:27456238 Effect of patient age awareness on diagnostic agreement of chronic or aggressive periodontitis between clinicians; a pilot study. Background: The aim of this pilot study was to test whether diagnostic agreement of aggressive and chronic periodontitis amongst Board Certified Periodontists is influenced by knowledge of a patient's age. In 1999, at the International World Workshop, age was removed as a diagnostic criterion for aggressive periodontitis. The impact of this change on the diagnostic reliability amongst clinicians has not yet been assessed. Methods: Nine periodontal case reports were twice presented to sixteen board certified periodontists, once with age withheld and again with patient age provided. Participants were instructed to choose a diagnosis of Chronic Periodontitis or Aggressive Periodontitis. Diagnostic agreement was calculated using the Fleiss Kappa test. Results: Including the patients' age in case report information increased diagnostic agreement (the kappa statistic) from 0.49 (moderate agreement) to 0.61 (substantial agreement). Conclusion: These results suggest that knowledge of a patient's age influenced clinical diagnosis when distinguishing between aggressive periodontitis and chronic periodontitis, which may in turn impact treatment decision-making. abstract_id: PUBMED:26319714 Association of levels of antibodies against citrullinated cyclic peptides and citrullinated α-enolase in chronic and aggressive periodontitis as a risk factor of Rheumatoid arthritis: a case control study. Background: Periodontal disease could be a risk factor for rheumatoid arthritis (RA).
It is assumed that the bacterial strain Porphyromonas gingivalis mediates citrullination of host peptides and thereby the generation of RA-associated autoantibodies in genetically predisposed individuals. For that reason non-RA individuals who suffered from generalized aggressive (GAgP, N = 51) and generalized chronic periodontitis (GChP, N = 50) were investigated regarding the occurrence of antibodies against citrullinated cyclic peptides (anti-CCP) and citrullinated α-enolase peptide-1 (anti-CEP-1) in comparison to non-RA non-periodontitis controls (N = 89). Furthermore, putative associations of infections with five periodontopathic bacteria or expression of certain human leucocyte antigens (HLA) with these autoantibodies were investigated. Methods: Detection of anti-CCP and anti-CEP-1 in plasma samples was conducted with an enzyme-linked immunosorbent assay. Subgingival plaque specimens were taken from the deepest pocket of each quadrant and pooled. For detection of DNA of five periodontopathic bacteria, PCR with sequence-specific oligonucleotides was carried out. Low-resolution HLA typing was carried out with PCR with sequence-specific primers. Differences between patients and controls were assessed using the chi-square test with Yates correction or Fisher's exact test if the expected number n in one group was <5. Results: Two patients with GAgP (3.9%), no patient with GChP and two controls (2.2%, pFisher = 0.662) were positive for anti-CEP-1, whereas no study participant was anti-CCP positive. Individuals with P. gingivalis were slightly more often anti-CEP-1 positive in comparison to individuals without P. gingivalis (3.2 vs. 1.1%, pFisher = 0.366). Carriers of HLA-DQB1*06 or the HLA combination DRB1*13; DRB3*; DQB1*06 were slightly more anti-CEP-1 positive (6.1 and 4.3%) than non-carriers (0.7 and 0%, pFisher 0.053).
Conclusions: GAgP and GChP and the presence of periodontopathic bacteria are not associated with an increased risk for occurrence of anti-CCP and anti-CEP-1 autoantibodies. The putative relationship between periodontitis and RA should be investigated in further studies. abstract_id: PUBMED:21976840 Myeloperoxidase staining in the diagnosis of aggressive periodontitis. Aims: To evaluate the neutrophil myeloperoxidase (MPO) staining procedure as a reliable, affordable and easily available diagnostic assay for aggressive periodontitis. Materials And Methods: Fifteen subjects were recruited in the study, wherein five each were diagnosed with aggressive periodontitis and chronic periodontitis, respectively, and five were periodontally healthy. Three millilitres (ml) of venous blood was collected using Vacutainers containing ethylene diamine tetra acetate (EDTA) and subjected to the MPO staining procedure. The histological picture was evaluated using a visual analogue scale (VAS). Results: MPO-stained specimens of all the patients showed positive MPO staining of the neutrophils. The intensity of MPO granule staining was greater in aggressive periodontitis specimens than in the chronic periodontitis patient specimens and healthy subject specimens. The staining characteristics were comparable for chronic periodontitis patients and healthy subjects. Conclusions: This study shows that there is a potential and probable place for MPO staining as an economical, relatively convenient and easily available assay in the diagnosis of aggressive periodontitis. Answer: Yes, there are HLA combinations that have been associated with either susceptibility or resistance to aggressive and chronic periodontitis. Studies have found associations of certain HLA homozygosities, heterozygosities, and haplotypes with both aggressive and chronic periodontitis.
For example, the significantly lower frequency of HLA-DRBblank* homozygosity (non-DRB3*/DRB4*/DRB5*) was found in chronic periodontitis, while HLA-DRB1*15 : DRB5*(DR51) : DQB1*06 showed a slightly higher homozygosity rate in all patients with periodontitis. The combination HLA-A*02,A*03 was significantly decreased in aggressive periodontitis, and HLA-A*01,A*03 heterozygosity was significantly lowered in chronic periodontitis. Additionally, haplotypes such as HLA-A*68/69 : Cw*07 : B*18 in aggressive periodontitis and HLA-Cw*08 : B*14 : DRB1*04 in chronic periodontitis were associated with these conditions (PUBMED:12941076). Gender differences in HLA phenotype frequencies have also been observed, with female patients with aggressive periodontitis showing an increase in the frequency of HLA-A*68/69 and a decrease in the frequency of DRBblank* (non-DRB3/4/5*) and DQB1*05-positive probands. In female patients with chronic periodontitis, HLA-DQB1*0303 was absent, whereas HLA-DQB1*06 homozygosity increased significantly (PUBMED:12485327). Other studies have found an increased frequency of HLA-DRB1*13 and a decreased frequency of HLA-DRBblank*(non-DRB3/4/5) in rapidly progressive periodontitis patients, and an increased occurrence of HLA-B*14 and -Cw*08 as well as a decreased frequency of HLA-A*03 in adult periodontitis patients. HLA-A*11 and -A*29 had an increased frequency, while HLA-A*31 and -A*30/31 were decreased in both patient groups (PUBMED:12296785).
Instruction: Recanalization after intravenous thrombolysis: does a recanalization time window exist? Abstracts: abstract_id: PUBMED:17452580 Recanalization after intravenous thrombolysis: does a recanalization time window exist? Background: To evaluate the time course of major vessel recanalization under IV thrombolysis in relation to functional outcome in acute ischemic stroke. Methods: A total of 99 patients with an acute anterior circulation vessel occlusion who underwent IV thrombolysis were included. All patients had a standardized admission and follow-up procedure. Color-coded duplex sonography was performed on admission, 30 minutes after thrombolysis, and at 6 and 24 hours after onset of symptoms. Recanalization was classified as complete, partial, and absent. Functional outcome was rated with the modified Rankin Scale on day 30. Results: Complete recanalization occurred significantly more frequently in patients with multiple branch occlusions compared to those with mainstem occlusion (OR 5.33; 95% CI, 2.18 to 13.05; p &lt; 0.0001) and was associated with lower NIH Stroke Scale (NIHSS) scores (p &lt; 0.001). Not the specific time point of recanalization at 6 or 24 hours after stroke onset, but recanalization per se within 24 hours (OR 7.8; 95% CI 2.2 to 28.2; p = 0.002) was significantly associated with a favorable outcome. Multivariate analysis revealed recanalization at any time within 24 hours and NIHSS scores on days 1 and 7 together explaining 75% of the functional outcome variance 30 days after stroke. Conclusions: Complete recanalization up to 24 hours after stroke onset is significantly associated with the short-term clinical course and functional outcome 30 days after acute stroke. abstract_id: PUBMED:26574535 Time window for recanalization in basilar artery occlusion: Speculative synthesis. Basilar artery occlusion (BAO) is one of the most devastating forms of stroke and few patients have good outcomes without recanalization. 
Most centers apply recanalization therapies for BAO up to 12-24 hours after symptom onset, which is a substantially longer time window than the 4.5 hours used in anterior circulation stroke. In this speculative synthesis, we discuss recent advances in BAO treatment in order to understand why and under which circumstances longer symptom duration might not necrotize the brainstem and turn therapeutic attempts futile. We raise the possibility that distinct features of the posterior circulation, e.g., a highly developed, persistent collateral arterial network, reverse filling of the distal basilar artery, and delicate plasma flow siding the clot, might sustain brittle patency of brainstem perforators in the face of stepwise growth of the thrombus. Meanwhile, the tissue clock characterizing the rapid necrosis of a typical anterior circulation penumbra will not start. During this perilous time period, recanalization at any point would salvage the brainstem from eventual necrosis caused by imminent reinforcement and further building up of the clot. abstract_id: PUBMED:29033392 Mechanical recanalization in ischemic anterior circulation stroke within an 8-hour time window: a real-world experience. Purpose: We aimed to assess the safety and effectiveness of mechanical recanalization in patients with ischemic stroke in the anterior circulation within 8 h of symptom onset or with unknown onset time. We compared time intervals <6 h vs. 6-8 h/unknown onset time, as only limited data are available for a time window beyond 6 h. Methods: Our cohort included 110 consecutive patients (44 males; mean age, 73.0±11.5 years) with ischemic stroke in the anterior circulation due to the acute occlusion of a large intracranial artery who underwent mechanical recanalization within an 8-hour time window or with unknown onset time.
All patients underwent unenhanced computed tomography (CT) of the brain, CT angiography of the cervical and intracranial arteries and digital subtraction angiography. Perfusion CT was performed in patients beyond a 6-hour time window/with unknown onset time. We collected the following data: baseline characteristics, presence of risk factors, neurologic deficit at the time of treatment, time to therapy, recanalization rate, and 3-month clinical outcome. Successful recanalization was defined as a Thrombolysis in Cerebral Infarction score of 2b/3 and good clinical outcome as a modified Rankin Scale value of 0-2 points. Results: Successful recanalization was achieved in 82 patients (74.5%): in 61 patients treated within 6 h (73.5%), 7 patients treated within 6-8 h (63.6%), and 13 patients with unknown onset time (81.3%). Good 3-month clinical outcome was achieved in 61 patients (55.5%): in 46 patients treated within 6 h (55.4%), 5 patients treated within 6-8 h (45.5%), and 10 patients with unknown onset time (62.5%). Recanalization success and clinical outcome were not significantly different between patients treated at different time windows. Conclusion: Our data confirm the safety and effectiveness of mechanical recanalization performed in carefully selected patients with ischemic stroke in the anterior circulation within 8 h of stroke onset or with unknown onset time in everyday practice. abstract_id: PUBMED:21660098 Pharmacological and non-pharmacological recanalization strategies in acute ischemic stroke. According to the guidelines of the European Stroke Organization (ESO) and the American Stroke Association (ASA), acute stroke patients should be managed at stroke units that include well organized pre- and in-hospital care. In ischemic stroke, the restoration of blood flow, accomplished by fibrinolytic therapy, has to occur within a limited time window.
Newer generation thrombolytic agents (alteplase, pro-urokinase, reteplase, tenecteplase, desmoteplase) have shorter half-lives and are more fibrin-specific. Only alteplase has Food and Drug Administration (FDA) approval for the treatment of acute stroke (1996). The National Institute of Neurological Disorders and Stroke (NINDS) trial proved that alteplase was effective in all subtypes of ischemic strokes within the first 3 h. In the European Cooperative Acute Stroke Study III trial, intravenous (IV) alteplase therapy was found to be safe and effective (with some restrictions) if applied within the first 3-4.5 h. In middle cerebral artery (MCA) occlusion additional transcranial Doppler insonication may improve the breakdown of the blood clot. According to the ESO and ASA guidelines, intra-arterial (IA) thrombolysis is an option for recanalization within 6 h of MCA occlusion. Further trials on IA therapy are needed, as previous studies have involved relatively small numbers of patients (compared to IV trials) and the optimal IA dose of alteplase has not been determined (20-30 mg is used most commonly in 2 h). Patients undergoing combined (IV + IA) thrombolysis had significantly better outcomes than the placebo group or the IV therapy alone in the NINDS trial (Interventional Management of Stroke trials). If thrombolysis fails or it is contraindicated, mechanical devices [e.g., mechanical embolus removal in cerebral ischemia (MERCI), approved in 2004] might be used to remove the occluding clot. Stenting can also be an option in case of acute internal carotid artery occlusion in the future. An intra-aortic balloon was used to increase the collateral blood flow in the Safety and Efficacy of NeuroFlo(™) Technology in Ischemic Stroke trial (results are under evaluation). Currently, there is no approved effective neuroprotective drug.
abstract_id: PUBMED:25692110 Impacts of rapid recanalization and collateral circulation on clinical outcome after intraarterial thrombolysis. Background And Purpose: Rapid recanalization might improve clinical outcomes after intraarterial thrombolysis (IAT) for acute ischemic stroke patients with collateral circulation. We determined whether rapid recanalization and collateral circulation affect clinical outcomes after IAT. Methods: We retrospectively evaluated the clinical and radiological data of 134 consecutive patients who underwent IAT for intracranial artery occlusion. The interval from symptom onset to recanalization after IAT (onset-to-recanalization time) as an estimate of the probability of good clinical outcome (modified Rankin scale 0-2) was calculated in patients with collateral circulation in the ischemic hemisphere, which was rated poor (0/1 American Society of Interventional and Therapeutic Neuroradiology criteria) or good (2-4). Changes in National Institutes of Health Stroke Scale (NIHSS) score before and after IAT and modified Rankin Scale scores 3 months after discharge were compared with respect to onset-to-recanalization time. Results: In patients with good collateral circulation, the estimated onset-to-recanalization time for a 0.5 probability of a good clinical outcome was 347 minutes; with poor collateral circulation, it was 172 minutes for a 0.2 probability of good clinical outcome. Outcome analyses according to onset-to-recanalization time showed patients recanalized <6 hours had lower NIHSS scores (<4.5, 4.5-6, >6 hours of onset-to-recanalization time, and non-recanalization: 5.1, 6.9, 11.9, and 19.8, respectively) at discharge and higher percentages of good clinical outcome (69%, 66.7%, 21.9%, and 0%, respectively) 3 months after IAT. Conclusions: The time window to expect a high probability of a good clinical outcome after IAT is highly dependent on the collateral circulation.
abstract_id: PUBMED:33325765 Evolution of the stroke paradigm: A review of delayed recanalization. While the time window for reperfusion after ischemic stroke continues to increase, many patients are not candidates for reperfusion under current guidelines that allow for reperfusion within 24 h after last known well time; however, many case studies report favorable outcomes beyond 24 h after symptom onset for both spontaneous and medically induced recanalization. Furthermore, modern imaging allows for identification of penumbra at extended time points, and reperfusion risk factors and complications are becoming better understood. Taken together, continued urgency exists to better understand the pathophysiologic mechanisms and ideal setting of delayed recanalization beyond 24 h after onset of ischemia. abstract_id: PUBMED:34906857 Diverse thrombus composition in thrombectomy stroke patients with longer time to recanalization. Background And Purpose: Delayed time to recanalization is associated with reduced recanalization success of mechanical thrombectomy (MT) and thrombolysis in acute ischemic stroke (AIS). The reasons for this are unclear. We hypothesized that alterations in thrombus structure and composition could be responsible for this. Methods: Retrieved thrombi from AIS patients who underwent MT less than 8 h from symptom onset to groin puncture (SOGP) were evaluated. Patients were divided into early (≤4 h) vs delayed (>4 h) groups based on SOGP timing. Thrombi were histologically analysed using Martius Scarlett Blue and immunohistochemistry staining for von Willebrand Factor (vWF), anti-citrullinated H3 (H3Cit; NETs [neutrophil extracellular traps] marker). We used inferential statistics, including the t-test and an artificial neural network (ANN), to interpret the data. Results: A total of 137 thrombi were collected.
The overall average percentage of red blood cell (RBC), white blood cell (WBC), platelet, fibrin, H3Cit, and vWF components in thrombi was 45.83%, 3.58%, 22.23%, 28.27%, 19.97% and 16.23% respectively. The delayed group had higher WBC (p = 0.02), fibrin (p = 0.02), H3Cit (p = 0.04) and vWF (p = 0.03) thrombus fractions compared to the early group. Based on the ANN model, the most important factors for predicting the number of passes required for successful recanalization are the fibrin and RBC contents of the thrombus, followed by the vWF and H3Cit contents. Conclusions: Longer time to recanalization was associated with increased WBC, fibrin, H3Cit and vWF fractions of thrombi, reflecting possible in situ maturation of thrombus components. Increased fibrin, NETs and vWF composition may reduce the likelihood of revascularization by altering thrombus mechanical properties. abstract_id: PUBMED:33210039 Frequency of thrombolytic targets in stroke patients presenting in an extended time window. Objective: The objective of this study was to determine the proportion of stroke patients presenting in an extended time window who have a thrombolytic treatment target. Background: Patients presenting up to 24 h after stroke onset have been found to have penumbral tissue on multimodal imaging. Stroke patients presenting in this extended time window without a large vessel occlusion (LVO) may benefit from reperfusion therapy using thrombolysis. Methods: Patients seen at our institutions from 2011 through 2015 were reviewed to identify those who presented >4 h and <24 h from last seen normal (LSN) and did not receive acute treatment. Magnetic resonance imaging (MRI) scans were used to dichotomize patients using a diffusion-perfusion mismatch ratio of 1.2. Results: During the study period, 3469 patients were evaluated by our stroke service, with 893 seen 4-24 h from LSN who were not treated.
MRI was performed with diffusion and perfusion imaging in 439 patients, of whom 26 were excluded due to hemorrhage and 37 were excluded due to LVO. This left 376 patients who potentially could have been treated with thrombolysis in an extended time window and were included in the analysis. Of these, 156 (42%) demonstrated a mismatch ratio >1.2. Patients with a mismatch presented earlier (P = 0.012), were more likely to be female (P = 0.03), and had higher National Institutes of Health Stroke Scale scores (P < 0.001). Conclusions: Almost half of the patients presenting 4-24 h from LSN had a target for thrombolysis in our study. Multimodal imaging may be able to expand the population of treatable stroke patients given the results of recent clinical trials. abstract_id: PUBMED:35518516 Time Window for Acute Stroke Management: A Cross-Sectional Study Among Community Healthcare Practitioners in Primary Care. Introduction: Time-to-treatment window is critical for managing acute ischaemic stroke. The community healthcare practitioners (CHPs) who deliver frontline care in the health system play an important role in stroke prevention and treatment. Methods: A multi-stage sampling design was adopted in Guangdong province, China. A total of 997 CHPs who participated in the survey were divided into two groups (the awareness group vs the unawareness group) according to their knowledge of the time window for stroke management. Logistic regression analysis was performed to explore factors associated with the awareness of "time window". Results: Overall, less than half (49.1%) of CHPs were aware of the time window for stroke management. The proportion of CHPs who were able to recognise stroke symptoms was higher in the awareness group (42.7%) than in the unawareness group (38.8%).
Most CHPs (82.9%) in the awareness group knew about the effectiveness of intravenous thrombolysis in treating acute cerebral infarction, whereas this was perceived by less than half (43.6%) of CHPs in the unawareness group. Factors associated with knowledge of the time window for stroke management included participation in cerebrovascular disease management training (adjusted odds ratio [aOR]=4.203, 95% CI: 1.707-10.348, p=0.002), awareness of the time frame for CT initiation (aOR=5.214, 95% CI: 1.803-15.078, p=0.002) and for urokinase thrombolysis administration (aOR=11.927, 95% CI: 4.393-32.382, p<0.001), accurate perceptions about the target for blood pressure lowering (aOR=4.181, 95% CI: 1.713-10.207, p=0.002) and blood glucose control (aOR=2.446, 95% CI: 1.019-5.869, p=0.045), and familiarity with prehospital stroke management principles (aOR=3.593, 95% CI: 1.383-9.332, p=0.009). Conclusion: The CHPs need to enhance their ability to address acute ischaemic stroke onset promptly to provide effective treatment within the beneficial "time window". This may help improve the stroke chain of survival with better multidisciplinary decision support systems that enable optimal stroke care delivery. abstract_id: PUBMED:23149243 Recanalization of occluded large arteries with broadened therapeutic window for acute cerebral infarction. Objective: To investigate the safety and efficacy of recanalization with a broadened therapeutic window for acute occlusion of large cerebral arteries. Methods: We assessed 38 patients who underwent the hyper-selective intra-arterial administration of low-dose urokinase, along with mechanical thrombus disruption or mechanical thrombus disruption recanalization (34 stents in 33 patients) of occluded vessels, 20 with onset time-to-treatment (OTT) >6 h (observation group; mean OTT, 20.10±15.67 h) and 18 with OTT ≤6 h (control group).
NIHSS scores, mRS scores (≤2) at 3 months, recanalization rates, severe cerebral infarctions on CT, and symptomatic hemorrhagic conversions after surgery were compared. Results: Postoperative recanalization rates were 100% in both groups, and other results were equivalent. Conclusions: Recanalization at longer OTT was safe and effective with acute occlusions of large cerebral arteries. Time to recanalization could be safely prolonged for up to 20 h in these patients. Answer: Yes, a recanalization time window does exist after intravenous thrombolysis in the context of acute ischemic stroke. The time course of major vessel recanalization under intravenous thrombolysis is associated with functional outcomes. A study including 99 patients with acute anterior circulation vessel occlusion who underwent intravenous thrombolysis found that complete recanalization up to 24 hours after stroke onset was significantly associated with the short-term clinical course and functional outcome 30 days after acute stroke (PUBMED:17452580). This suggests that there is a 24-hour window within which recanalization can significantly impact patient outcomes. In the case of basilar artery occlusion (BAO), which is a particularly devastating form of stroke, most centers apply recanalization therapies up to 12-24 hours after symptom onset, which is a longer time window than the 4.5 hours typically used for anterior circulation stroke (PUBMED:26574535). This extended window for BAO is due to the distinct features of the posterior circulation, such as a highly developed collateral arterial network, which might sustain brainstem perfusion despite thrombus growth. For ischemic anterior circulation stroke, mechanical recanalization within an 8-hour time window has been shown to be safe and effective, with no significant difference in recanalization success or clinical outcome between patients treated at different time windows within this period (PUBMED:29033392). 
The impact of rapid recanalization and collateral circulation on clinical outcomes after intraarterial thrombolysis (IAT) has also been studied, indicating that the time window for a high probability of a good clinical outcome after IAT is highly dependent on the collateral circulation (PUBMED:25692110). Overall, while the optimal time window for recanalization may vary depending on the type of occlusion and individual patient factors, there is a consensus that earlier recanalization is generally associated with better outcomes, and efforts should be made to achieve recanalization within the identified therapeutic windows.
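The associations quoted above are reported as adjusted odds ratios with 95% confidence intervals (e.g., aOR = 4.203, 95% CI: 1.707-10.348). As a minimal, hedged sketch of where such figures come from, the snippet below computes a crude (unadjusted) odds ratio and a Wald 95% CI from a 2×2 table; the counts are invented for illustration and are not taken from any of the cited studies, and adjusted ORs would additionally require a regression model.

```python
import math


def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi


# Hypothetical counts: 40 of 50 trained CHPs vs 30 of 60 untrained CHPs
# knew the treatment time window (illustrative numbers only).
or_, lo, hi = odds_ratio_ci(40, 10, 30, 30)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

An OR above 1 whose CI excludes 1 (here roughly 4.0, CI ≈ 1.7-9.4) is read the same way as the aORs in the abstract: the factor is associated with higher odds of the outcome.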
Instruction: Are gender differences important for the clinical effects of antidepressants? Abstracts: abstract_id: PUBMED:12944340 Are gender differences important for the clinical effects of antidepressants? Objective: Gender differences in antidepressant treatment response, side effects, dropout rates, and plasma concentrations were examined in patients with major and predominantly melancholic depression. Method: The study included a subgroup of 292 inpatients (96 men, 196 women) from three Danish double-blind, randomized, controlled trials. All patients completed a 5-week treatment period and fulfilled the DSM-III or DSM-III-R criteria for major depression. Clomipramine (150 mg/day) was the reference treatment, and comparable treatments were citalopram (40 mg/day), paroxetine (30 mg/day), and moclobemide (400 mg/day). Assessments were performed by using the 17-item Hamilton Depression Rating Scale and the Udvalg for Kliniske Undersøgelser Side Effect Rating Scale. In a subgroup of 110 patients, weekly measurements of clomipramine plasma concentrations were obtained. Nonparametric statistical tests and multiple linear and logistic regression models were used for statistical evaluations. Results: Both genders had similar remission rates (Hamilton depression scale score <8) when treated with clomipramine and had significantly higher remission rates with clomipramine than with the comparable treatments. The plasma concentrations of clomipramine were significantly higher for female than for male patients. No gender differences were found in posttreatment Hamilton depression scale scores, nor did the therapeutic effects of treatment depend on gender. Rates of dropout and side effects were similar for men and women. No relationship between plasma concentrations, gender, and therapeutic outcome was found.
Conclusions: In a group of patients with major and predominantly melancholic depression, differentiation according to gender was not important in treatment with common antidepressants. Women appeared to have higher plasma concentrations of tricyclic antidepressants than men. The consequences of this difference for clinical effects are unclear. Gender-specific recommendations for dosing of tricyclic antidepressants may be considered. abstract_id: PUBMED:31994531 Bipolar affective disorder: gender differences in the course and treatment Bipolar affective disorder (BAD) according to the latest international classifications (ICD-10, DSM-5) is recognized as a distinct disorder, characterized by episodes of mania (or hypomania), hyperthymic personality and/or a genetic predisposition to BAD, in contrast to recurrent depressive disorder (RD). This literature review presents the history of the development and classification of manic-depressive disorder (MDD) and the difficulties of differential diagnosis between BAD and RD. It considers the clinical features of the course of BAD in men and women as well as gender-specific therapy. Gender differences in the clinical picture, course and treatment of BAD are discussed. Attention is drawn to the differences in effectiveness and tolerability of antidepressants in men and women. It is concluded that identifying gender particularities is important for optimizing treatment and prevention.
Recent Findings: Ketamine, an NMDA receptor antagonist, increases BDNF release and synaptic connectivity, opposing the deficits caused by chronic stress and depression. Efforts are focused on the development of novel rapid agents that produce similar synaptic and rapid antidepressant actions, but without the side effects of ketamine. The impact of gender on the response to ketamine and other rapid-acting antidepressants is in early stages of investigation. Summary: The discovery that ketamine produces rapid therapeutic actions for depression and suicidal ideation represents a major breakthrough and much needed alternative to currently available medications. However, novel fast acting agents with fewer side effects are needed, as well as elucidation of the efficacy of these rapid-acting antidepressants for depression in women. abstract_id: PUBMED:37173796 Sex and gender differences in drug treatment: experiences from the knowledge database Janusmed Sex and Gender. Background: Evidence from clinical research indicates that men and women can differ in response to drug treatment. The knowledge database Janusmed Sex and Gender was developed to illuminate potential sex and gender differences in drug therapy and, therefore, achieve a better patient safety. The database contains non-commercial evidence-based information on drug substances regarding sex and gender aspects in patient treatment. Here, we describe our experiences and reflections from collecting, analyzing, and evaluating the evidence. Janusmed Sex And Gender: Substances have been systematically reviewed and classified in a standardized manner. The classification considers clinically relevant sex and gender differences based on available evidence. Mainly biological sex differences are assessed except for gender differences regarding adverse effects and compliance. Of the 400 substances included in the database, clinically relevant sex differences were found for 20%. 
Sex-divided data were missing for 22% and no clinically relevant differences were found for more than half of the substances (52%). We noted that pivotal clinical studies often lack sex analyses of efficacy and adverse effects, and post-hoc analyses are performed instead. Furthermore, most pharmacokinetic analyses use weight correction, but medicines are often prescribed in standard doses. In addition, few studies have sex differences as a primary outcome and some pharmacokinetic analyses are unpublished, which may complicate the classification of evidence. Conclusions: Our work underlines the need for sex and gender analyses and sex-divided data in drug treatment, to increase knowledge of these aspects and contribute to more individualized patient treatment. abstract_id: PUBMED:21374019 The impact of gender on antidepressants. There is a large body of literature debating whether and how gender affects the metabolism, side-effect profile, and efficacy of antidepressants. Gender differences in antidepressant pharmacokinetics and efficacy profiles have been attributed to not only anatomic and physiological differences between the sexes, but also behavioral factors, comorbid disorders, and gender-specific conditions, such as pregnancy and menopause. Despite the large body of research on this topic, few definitive conclusions regarding effects of gender on antidepressant treatment exist, and much of this research is incomplete, contradictory, or not fully used to optimize the administration of antidepressants and the response to treatment. This chapter will review the latest research on gender-specific effects of antidepressant treatment, focusing on the overall, gender-related differences in efficacy, metabolism, and side-effect profile of antidepressants, and how these differences can be used to better optimize treatment of depression in a clinical setting.
abstract_id: PUBMED:32899299 Sex and Gender Differences in the Effects of Novel Psychoactive Substances. Sex and gender deeply affect the subjective effects and pharmaco-toxicological responses to drugs. Men are more likely than women to use almost all types of illicit drugs and to present to emergency departments for serious or fatal intoxications. However, women are just as likely as men to develop substance use disorders, and may be more susceptible to craving and relapse. Clinical and preclinical studies have shown important differences between males and females after administration of "classic" drugs of abuse (e.g., Δ9-tetrahydrocannabinol (THC), morphine, cocaine). This scenario has become enormously complicated in the last decade with the overbearing appearance of the new psychoactive substances (NPS) that have emerged as alternatives to regulated drugs. To date, more than 900 NPS have been identified, and can be catalogued in different pharmacological categories including synthetic cannabinoids, synthetic stimulants (cathinones and amphetamine-like), hallucinogenic phenethylamines, synthetic opioids (fentanyls and non-fentanyls), new benzodiazepines and dissociative anesthetics (i.e., methoxetamine and phencyclidine-derivatives). This work collects the little knowledge reached so far on the effects of NPS in male and female animal and human subjects, highlighting how much sex and gender differences in the effects of NPS has yet to be studied and understood. abstract_id: PUBMED:34970207 Efficacy of Safinamide and Gender Differences During Routine Clinical Practice. Background: There is increasing evidence of gender differences in the epidemiology and clinical manifestation of both motor and non-motor symptoms of Parkinson's disease (PD). Nevertheless, few data are available on gender differences in the response to antiparkinsonian drugs. 
Safinamide is a multimodal drug with positive effects on motor and non-motor fluctuations that might improve patients' care and quality of life. Objective: To analyze gender differences on clinical effects of safinamide in PD patients treated in real-life conditions during the SYNAPSES trial. Methods: SYNAPSES was a multinational, multicenter, observational study. At baseline, patients with PD diagnosis received safinamide as an add-on to levodopa and were followed up for 12 months, with visits performed every 4 months. A new statistical analysis was performed to describe the efficacy of safinamide in men and women on motor complications, motor symptoms, and adverse events. Results: Six hundred and sixteen (38%) out of 1,610 patients enrolled in the SYNAPSES study were women and 994 (62%) men. Safinamide improved motor symptoms and motor complications (fluctuations and dyskinesia) in both genders, with a good safety profile and without requiring any change in the concomitant dopaminergic therapy. Clinically significant improvements, according to the criteria developed by Shulman et al., were seen in 46% of male and female patients for the UPDRS motor score and 43.5% of men vs. 39.1% of women for the UPDRS total score. Conclusions: Safinamide was effective in improving motor fluctuations and dyskinesia and proved to be safe in both male and female patients with PD. Further prospective studies, specifically addressing potential gender differences in response to PD therapies, are needed to develop tailored management strategies. abstract_id: PUBMED:34262956 Gender Differences in Atherosclerotic Vascular Disease: From Lipids to Clinical Outcomes. Cardiovascular diseases (CVDs) are one of the main reasons of death and morbidity in the world. Both women and men have high rates of cardiovascular morbidity and mortality, although gender-related differences in mortality and morbidity rates are observed in different age groups of the population. 
Within the large spectrum of cardiovascular disease, ischemic heart disease (IHD), heart failure (HF), systemic hypertension, and valvular heart disease are particularly common in the population. CVDs caused by atherosclerosis are the most frequent, which is why they attract particular attention. Atherosclerotic processes associated with lipid and other metabolic changes develop over a long latent period, and the clinical manifestation is often an acute vascular catastrophe that can lead to disability and death. Sex-associated differences are observed in the clinical course and manifestations, suggesting that gender influences processes related to atherosclerosis. Atherosclerotic cardiovascular disease (ACD) includes the two most dangerous clinical manifestations: IHD and cerebrovascular disease (mainly ischemic stroke). Other less common clinical manifestations of atherosclerosis include aortic atherosclerosis and peripheral vascular disease. Gender-related differences have also been identified for these diseases. The present review discusses the effects of gender and age on atherosclerotic processes, disease development, and clinical manifestations. The metabolic basis for the development of atherosclerosis appears to be related to sex hormones, making this topic relevant to physicians of many specialties. abstract_id: PUBMED:34007693 An Exploratory Survey of Incorporation of Gender- and Sex-related Differences in the PharmD Curricula. Introduction: Clinical presentation and treatment in many disease states vary due to sex- and gender-differences. Sex-related pharmacokinetic differences are particularly important for pharmacists.
The Accreditation Council for Pharmacy Education (ACPE) currently has no standard for the inclusion of gender- and sex-related differences in the didactic PharmD curriculum, but encourages advanced pharmacy practice experiences (APPEs) to include diverse populations related to gender. The purpose of this survey is to explore faculty incorporation of gender and sex differences within the PharmD didactic curriculum in preparation for a nation-wide survey. Methods: A survey was created to determine how many clinical topics incorporated gender- and sex-related differences and to what extent this information was discussed in the classroom. The survey link was emailed to pharmacotherapeutics and pharmacokinetics faculty at Midwestern University Chicago College of Pharmacy, University of Illinois at Chicago, Roosevelt University, Rosalind Franklin University, and Chicago State University. Chi square analyses were performed to examine relationships across participant responses. Results: A total of 56 faculty members participated in the survey, resulting in a 20% response rate. Of these, 30 (54.5%) faculty indicated that they discussed gender- and sex-related differences in the subject area in which they teach. Approximately 33% of respondents found gender- and sex-related differences very clinically important. Gender- and sex- related differences were taught in a variety of subject areas, including cardiology, diabetes, and chronic obstructive pulmonary disease (COPD). Conclusion: With no current standard, faculty members independently choose to include gender- and sex-related differences in their lecture topics and the extent of the discussion. Faculty should be aware of this lack of standardization and that they are independently responsible for including gender- and sex-related differences in their particular topics. 
Because the surveyed faculty find gender- and sex-related differences clinically important and literature suggests differences in medications depending on gender and sex, further research is planned to provide insight on a national level. abstract_id: PUBMED:36699713 Social Perceptions of Gender Differences and the Subjective Significance of the Gender Inequality Issue. Background: Gender inequality continues to reproduce itself in hidden and ambivalent forms and leads to invisible barriers in women's careers and lives. The authors were interested in how social perceptions of gender differences would relate to the maintenance of gender inequality in various spheres of life. Objective: The purpose of the presented research was to study social perceptions of gender differences in relation to the subjective significance of the gender inequality issue. Design: The study was conducted via an online survey throughout February-September of 2019. The sample included 106 people aged 18 to 68 (M = 30.2, σ = 10.5), 49% of respondents were women. The authors have developed and tested a questionnaire assessing the adherence to ideas regarding evident gender differences in various spheres of life. The reliability of all scales of the questionnaire has been tested. Respondents also completed a questionnaire identifying their perceptions of gender inequality and shared their life experience with respect to this phenomenon in the form of free description. Results: The following two latent factors reflecting different aspects of gender perceptions have been identified: "Career Inequality" and "Differences in Social Spheres". 
Indicators of the subjective significance of gender inequality (which include gender awareness, frequency of gender inequality witnessing, personal experience of gender discrimination and the emotional significance of this experience) were positively correlated with perceptions of career inequalities (these support ideas regarding gender differences when it comes to opportunities for professional realization) and negatively correlated with perceptions of differences within social spheres (these support ideas regarding the existence of essential gender differences within the family, politics and everyday life). Conclusion: Articulation of personal experiences of gender inequality is associated with social perceptions of the absence of essential gender differences in various social domains (egalitarianism) and sensitivity to gender inequality with regards to career opportunities. Answer: Gender differences have been a topic of interest in the clinical effects of antidepressants, with various studies examining whether men and women respond differently to these medications. According to the study with PUBMED:12944340, in a group of patients with major and predominantly melancholic depression, gender differentiation was not found to be important in treatment with common antidepressants such as clomipramine, citalopram, paroxetine, and moclobemide. Both genders had similar remission rates, and no gender differences were found in posttreatment scores or in the rates of dropout and side effects. However, it was noted that women had higher plasma concentrations of tricyclic antidepressants than men, suggesting that gender-specific recommendations for dosing may be considered. Other literature reviews and studies have also discussed gender differences in the clinical picture, course, and treatment of bipolar affective disorder (BAD), with attention drawn to differences in the effectiveness and tolerability of antidepressants between men and women (PUBMED:31994531). 
The impact of gender on the response to rapid-acting antidepressants like ketamine is still in the early stages of investigation, but it is recognized that gender differences in the pathophysiology and treatment of major depressive disorder (MDD) may exist (PUBMED:30034992). The knowledge database Janusmed Sex and Gender highlights that there are clinically relevant sex differences in drug treatment for about 20% of the 400 substances reviewed, emphasizing the need for sex and gender analyses in drug treatment to contribute to more individualized patient treatment (PUBMED:37173796). The literature also suggests that gender affects the metabolism, side-effect profile, and efficacy of antidepressants, although research on this topic is often incomplete or contradictory (PUBMED:21374019). In summary, while some studies suggest that gender differences may not be significant in the clinical effects of antidepressants for certain conditions, other research indicates that gender can influence the effectiveness and tolerability of these medications. The evidence is mixed, and further research is needed to fully understand the impact of gender on antidepressant treatment outcomes.
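Comparisons like the "similar remission rates" between men and women reported in PUBMED:12944340 are often made with a two-proportion test. The trials themselves used nonparametric tests and regression models; the z-test below stands in as the simplest comparable procedure. The remission counts are illustrative assumptions, deliberately chosen to be equal, not data from the trials.

```python
import math


def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test comparing two proportions x1/n1 and x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value


# Hypothetical: 48 of 96 men and 98 of 196 women in remission
# (both 50%, mirroring "similar remission rates").
z, p = two_proportion_ztest(48, 96, 98, 196)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Identical proportions give z = 0 and p = 1; a small p-value would instead indicate a gender difference in remission rates.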
Instruction: Limited nasal septoplasty by ophthalmologists during endonasal dacryocystorhinostomy: is it safe? Abstracts: abstract_id: PUBMED:19617788 Limited nasal septoplasty by ophthalmologists during endonasal dacryocystorhinostomy: is it safe? Purpose: Significant deviated nasal septum (DNS) may preclude the performance of endonasal dacryocystorhinostomy (DCR). The purpose of this study was to investigate the outcomes of limited nasal septoplasty performed by trained ophthalmologists to tackle significant DNS during endonasal DCR. Methods: In this retrospective interventional case series, the records of all patients requiring limited nasal septoplasty for significant DNS during endonasal DCR at 2 tertiary ophthalmic centers in Hong Kong between January 2006 and December 2007 were reviewed. Surgical outcomes, demographic factors, and intraoperative and postoperative details were recorded and analyzed. Results: A total of 25 septoplasties were performed in 25 patients (28 DCRs in total). After the nasal septoplasty, all nasal cavities had better exposure of the surgical site, allowing completion of the endonasal DCR. At the latest follow-up, the anatomical and functional success rates of the endonasal DCR were both 96.4% (27/28). In one patient, septoplasty was complicated by asymptomatic mucosal adhesion between the lateral nasal wall and the nasal septum. Conclusions: In the hands of trained ophthalmic lacrimal surgeons, limited nasal septoplasty is an effective and safe procedure during endonasal DCR, allowing better exposure of the surgical field in patients with significant DNS. abstract_id: PUBMED:30456026 Is Septoplasty Necessary When Using the Endoscopic Endonasal Transsphenoidal Approach for a Deviated Nasal Septum? Objective This article describes the role played by the endoscopic endonasal transsphenoidal approach (EETSA) to the sphenoidal process of the septal cartilage of a deviated nasal septum. Design Case series with chart review.
Setting Tertiary referral center. Participants Between 2009 and 2016, 177 patients with skull base tumors who underwent EETSA were included. Main Outcome Measures In 8 cases, the conventional two nostrils-four hands technique was employed (group A). In 16 cases, we placed a right-side conventional nasoseptal flap and a left-side modified nasoseptal rescue flap (group B), and in 153 cases, bilateral modified nasoseptal rescue flaps (group C). The number of cases requiring septoplasty and changes in nasal cavity area differences reflecting septal deviation were measured. Results Septoplasty during EETSA was performed in two cases: one from group B and one from group C. There was no significant difference in the proportion of cases requiring septoplasty among the three groups (p = 0.127). Between the pre- and postoperative nasal cavities, the cross-sectional area difference at the level of the anterior end of the middle turbinate decreased significantly (p = 0.045). Also, the angle of deviation at the level of the ostiomeatal unit decreased significantly after EETSA (p < 0.001). Conclusion Separation of a deviated complex surrounding the sphenoidal process of the septal cartilage is the key to relieving a deviated nasal septum. EETSA combined with the two nostrils-four hands technique allows posterior septectomy (including removal of this deviated complex) to be performed. Thus, EETSA may commence without preceding septoplasty even in cases with severe nasal septum deviations. abstract_id: PUBMED:31996042 Outcomes of Endonasal Septoplasty and Extracorporeal Septal Reconstruction in Anterocaudal Septal Deviation. Objective: The aim of this study was to compare the functional and aesthetic outcomes of endonasal septoplasty (ES) and extracorporeal septal reconstruction (ESR) in anterocaudal septal deviation.
Methods: In this study, patients (n = 46) who underwent nasal septoplasty for anterocaudal septal deviation between February 2015 and August 2017 were analyzed; 23 patients underwent ES and the others (n = 23) underwent ESR. Allocation to ES or ESR was randomized using a random number table. Nasal Obstruction Symptom Evaluation (NOSE) scores, total nasal resistance (TNR), aesthetic visual analog scale (VAS), nasal anatomical angles, and the incidence of complications were used to assess the patients in the 2 groups. Results: The NOSE scores, TNR, aesthetic VAS, tip deviation angle (TDA), nasolabial angle (NLA), and nasofrontal angle (NFA) in the ESR group improved significantly from baseline to 1 year postoperatively, whereas in the ES group all postoperative outcomes except aesthetic VAS, NLA, and NFA improved from preoperative values. Except for TDA, the objective and subjective postoperative results of the ESR group were better than those of the ES group. The incidence of complications was not significantly different between the 2 groups. Conclusion: Our study compared the nasal functional and aesthetic outcomes of 2 septoplasty techniques in a randomized controlled trial. The ESR technique is more effective than the ES technique in correcting functional and aesthetic disorders caused by anterocaudal septal deviation. abstract_id: PUBMED:34976754 Pediatric nasal septoplasty outcomes. Background: Corrective nasal surgery has historically been avoided in the pediatric population out of concerns surrounding the potential disruption of nasal growth centers. There is a paucity of data on the rate of complications or revision surgery following septoplasty in this population. As such, the purpose of this study is to review the long-term outcomes of a large cohort of children who underwent nasal septoplasty and to compare outcomes of septoplasty patients under the age of 14 to those 14 years and older.
Methods: A retrospective review was performed on all patients who received nasal septoplasty at our tertiary care pediatric referral center between October 2009 and September 2016. All patients who underwent septoplasty for a deviated nasal septum and were 0-18 years of age at the time of surgery were included in this analysis. Outcomes were compared between patients under the age of 14 to those 14 years and older. Demographic, surgical, and follow-up data were collected including complications and the need for revision surgery. Results: A total of 194 pediatric patients were identified as meeting inclusion criteria for the study. Mean age for the total cohort was 14.6 years (0-18 years), with a mean of 15.9 years in the older group and 10.6 years in the younger group. Revision septoplasty was performed more frequently in the younger group. However, no significant difference in the rate of complications was seen between the two groups. Conclusions: To the best of our knowledge, this is the largest retrospective study examining outcomes following septoplasty in pediatric patients. We also specifically examine outcomes of very young septoplasty patients, a population for which limited evidence exists. Further retrospective studies are needed to validate the use of nasal septoplasty in the pediatric population. abstract_id: PUBMED:32170417 Short- and mid-term results of limited approach septoplasty. Purpose: Limited approach septoplasty (LAS) follows the principle of respecting nasal structures and aims to minimize complications while ensuring proper nasal respiratory function. LAS is only applicable to selected cases of septal deviation. 
The present study aimed to: (1) compare short- and mid-term complications in two consecutive series of patients with the same type of septal deviation treated with LAS or classical septoplasty; and (2) examine postoperative respiratory function with active anterior rhinomanometry in the two series, and in a group of healthy, non-surgical volunteers. Methods: The study concerned two groups of 20 consecutive patients who underwent LAS or classical septoplasty for deviation in Cottle's areas 4/5, and a control group of 11 healthy adult volunteers with no sinonasal disorders. Results: The mean operating time did not differ significantly between the two groups. Three patients in each group developed minor complications. In a sitting position, the mean total nasal inspiratory resistance was 0.018 and 0.019 Pa ml/s (p = 0.46) in the LAS and classical septoplasty groups, respectively, and the total expiratory resistance was 0.019 and 0.018 Pa ml/s (p = 0.30). In a supine position, the mean total nasal inspiratory resistance was 0.017 and 0.021 Pa ml/s (p = 0.05), and the total expiratory resistance was 0.017 and 0.019 Pa ml/s (p = 0.14) in the LAS and classical septoplasty groups, respectively. Conclusion: In selected cases, LAS achieved much the same short- and mid-term results as classical septoplasty. The mini-invasive approach is certainly more respectful of the nasal mucosa, requiring a less extensive detachment, with consequent faster healing times and less tissue damage. abstract_id: PUBMED:36141351 Endonasal Septoplasty Using a Septal Cartilaginous Batten Graft for Managing Caudal Septal Deviation. Caudal nasal septal deviation is an important condition altering nasal obstruction and cosmetic appearance and many surgical techniques have been published on how to correct caudal septal deviation, as successful management of caudal septal deviation is challenging. 
The goal of our study was to explore the effect of endonasal septoplasty using a septal cartilaginous batten graft for managing caudal septal deviation. We tested 26 participants with caudal septal deviation who received endonasal septoplasty using a septal cartilaginous batten graft from 1 April 2019 to 29 June 2022, and followed them up for at least 6 months. Nasal Obstruction Symptom Evaluation (NOSE) Scale and visual analog scale (VAS) scores were recorded at baseline, 1 month, and 6 months after surgery. Valid samples were analyzed by repeated measures ANOVA and paired sample t-test. Average participant age was 36.15 ± 11.02 years. The preoperative, 1-month postoperative, and 6-month postoperative NOSE scores decreased significantly (75.38 ± 15.62, 13.85 ± 7.79, and 14.04 ± 9.90; p < 0.001), while preoperative, 1-month postoperative, and 6-month postoperative VAS (convex/concave side) also improved (7.50 ± 0.81/3.38 ± 0.94, 2.27 ± 0.53/1.54 ± 0.58, and 2.31 ± 0.55/1.58 ± 0.58; p < 0.001). Our results showed that endonasal septoplasty using a septal cartilaginous batten graft had good surgical outcomes without an open scar or severe complications. abstract_id: PUBMED:34130761 Effectiveness of modified cutting and suture technique for endonasal caudal septoplasty in correcting nasal obstruction and preventing nasal tip projection loss. Purpose: Caudal septoplasty is a difficult procedure. The cutting and suture technique is suitable for caudal septoplasty, but a batten graft is always necessary and bears the risk of nasal tip projection loss. We established a modified cutting and suture technique (MCAST), without using a batten graft, and investigated its effectiveness in correcting nasal obstruction and preventing nasal tip projection loss. Methods: We retrospectively reviewed the medical records of 22 patients who underwent caudal septoplasty using MCAST.
Subjective assessment by Nasal Obstruction Symptom Evaluation (NOSE) score and objective assessment by computed tomography (CT) were performed before and after the surgery. For evaluating nasal tip projection, we asked patients about their awareness of external nasal deformity. Additionally, the nasal tip projection was measured by CT and compared before and after surgery. Results: The median preoperative NOSE score decreased significantly after MCAST (P < 0.001). On CT, the ratio of the area of the convex side to that of the concave side in the anterior portion of the nasal cavity increased significantly after MCAST (P < 0.001). All patients were unaware of external nasal deformity. There were no significant differences in the mean preoperative and postoperative nasal tip height and nasolabial angle. The mean supra tip height was significantly greater postoperatively than preoperatively (P = 0.02). Conclusions: The MCAST was useful for correcting nasal obstruction with caudal septal deviation. There was no postoperative loss of nasal tip projection. The MCAST can be suitable for correcting C-shaped caudal deviations without dislocating the caudal septum from the anterior nasal septum. abstract_id: PUBMED:33474919 Endoscopic septoplasty as a stage of endonasal dacryocystorhinostomy Endoscopic endonasal dacryocystorhinostomy (EN-DCR) is a modern and effective method for the surgical treatment of distal lacrimal pathway obstruction. Unsuccessful outcomes and dacryocystitis relapses may be associated with concomitant rhinological pathology. Deviation of the nasal septum in the surgical area can affect the operation and contribute to stenosis. Additional correction of the nasal septum is necessary to obtain free surgical approach to the lacrimal sac. Endoscopic septoplasty provides high-quality surgical approach to the lateral wall of the nasal cavity. Objective: To evaluate simultaneous endoscopic septoplasty when performing EN-DCR.
Methods: A retrospective analysis was conducted of the clinical data of patients operated on at the ENT department of Pavlov First Saint Petersburg State Medical University between January 2014 and December 2019. Results: 232 EN-DCR were performed in 212 patients (21 men, 191 women). The average age at the time of the operation was 54.3 years (range 19-87). Of the 212 patients, 192 underwent unilateral surgery and 20 underwent bilateral EN-DCR. Simultaneous endoscopic septoplasty was required in 36 (15.5%) cases. Conclusion: Performing simultaneous endoscopic septoplasty provides comfortable surgical access and contributes to the final successful outcome of EN-DCR. abstract_id: PUBMED:28673725 Traction suture modification to tongue-in-groove caudal septoplasty. Caudal septal deviation leads to unfavorable esthetic as well as functional effects on the nasal airway. A modification to the tongue-in-groove (TIG) technique to correct these caudal septal deformities is described. With placement of a temporary suspension suture to the caudal septum, manual traction is applied, assuring that the caudal septum remains in the midline position while it is being secured with multiple through-and-through, trans-columellar and trans-septal sutures. From 2003 to 2016, 148 patients underwent endonasal septoplasty using this modified technique, with excellent functional and cosmetic outcomes and a revision rate of 1.4%. This modified TIG technique replaces the periosteal suture that secures the caudal septum to the midline nasal crest in the original TIG technique. This simplifies the procedure and minimizes the risk of securing the caudal septum off-midline when used in endonasal septoplasty. abstract_id: PUBMED:24493469 Comparison of L-strut preservation in endonasal and endoscopic septoplasty: a cadaveric study. Background: Preservation of an adequate cartilaginous L-strut to prevent complications of septoplasty has been long recognized as critical.
However, no previous study has examined the dimensions of the L-strut that remain after septoplasty. We hypothesized that differences in exposure and visualization between endoscopic and endonasal techniques would result in differences in preserved L-strut dimensions. We designed this study to determine L-strut dimensions after performance of septoplasty with endonasal and endoscopic technique. Methods: We performed a cadaveric study with 24 heads randomly assigned to undergo endonasal vs endoscopic septoplasty by senior resident surgeons (postgraduate year 4 [PGY-4] and PGY-5). Removal of the skin-soft tissue envelope and mucoperichondrium was performed after septoplasty to permit direct measurement of the L-strut. Minimum and maximum widths were recorded for the caudal and dorsal segments; a single measurement was recorded for the width at the anterior septal angle. Statistical analysis was carried out using the 2-tailed distribution Student t test. Results: There was no significant difference in caudal or anterior septal width between endonasal and endoscopic techniques. There was a statistically significant difference in dorsal segment width for both minimum and maximum values, with endoscopic technique resulting in a narrower dorsal segment than endonasal technique (mean minimum value of 10.8 mm vs 13.2 mm, respectively, p = 0.03; and mean maximum value of 12.6 mm vs 16 mm, respectively, p = 0.01). There was significant variation in resident surgeon performance, with the performance of 1 resident surgeon accounting for the difference in minimum dorsal width. Conclusion: Differences in exposure and visualization between endoscopic and endonasal septoplasty techniques may result in differences in preserved L-strut dimensions. Care should be taken with endoscopic technique to prevent overly aggressive resection of septal cartilage, particularly with learners of this technique. 
Answer: Yes, limited nasal septoplasty performed by trained ophthalmologists during endonasal dacryocystorhinostomy (DCR) is considered safe. A study investigating the outcomes of limited nasal septoplasty for significant deviated nasal septum (DNS) during endonasal DCR at two tertiary ophthalmic centers in Hong Kong found that after nasal septoplasty, all nasal cavities had better exposure of the surgical site, allowing the completion of the endonasal DCR. The anatomical and functional success rates of the endonasal DCR were both high at 96.4%. Only one patient experienced a complication, which was an asymptomatic mucosal adhesion between the lateral nasal wall and the nasal septum. The study concluded that in the hands of trained ophthalmic lacrimal surgeons, limited nasal septoplasty is an effective and safe procedure during endonasal DCR, allowing better exposure of the surgical field in patients with significant DNS (PUBMED:19617788).
Instruction: Is nonoperative management of adhesive intestinal obstruction applicable to children in a resource-poor country? Abstracts: abstract_id: PUBMED:20431212 Is nonoperative management of adhesive intestinal obstruction applicable to children in a resource-poor country? Background: Nonoperative management of adhesive intestinal obstruction gives good results in adults but there are scant studies on its outcome in children. This study reports outcomes and experiences with nonoperative and operative management of adhesive intestinal obstruction in children in a resource-poor country. Patients And Methods: This is a retrospective analysis of the records of children managed for adhesive intestinal obstruction at the University of Benin Teaching Hospital between January 2002 and December 2008. Results: Adhesive intestinal obstruction accounted for 21 (8.8%) of 238 children managed for intestinal obstruction. They were aged between 7 weeks and 16 years (mean 3 +/- 6.4 years), comprising 13 males and eight females (ratio 1.6:1). Prior laparotomy for gangrenous/perforated intussusception (seven, 33.3%), perforated appendix (five, 23.8%), perforated volvulus (three, 14.3%), penetrating abdominal trauma (two, 9.5%) and perforated typhoid (two, 9.5%) were the major aetiologies. Adhesive obstruction occurred between 6 weeks and 7 years after the index laparotomies. All 21 children had initial nonoperative management without success, owing to the lack of total parenteral nutrition and monitoring facilities. Outcomes were encouraging for open adhesiolysis performed at 26-48 h in six (28.6%) children with a poor response to nonoperative management, at 11-13 days in 12 (57.1%) who responded minimally, and at 2-5 weeks in three (14.3%) who had a relapse of symptoms. Exploration of the 21 adhesive obstructions confirmed small bowel obstruction due to solitary bands (two, 9.5%), multiple bands/adhesions (13, 61.9%) and encasement, including one bowel gangrene (six, 28.6%).
Postoperatively, the only child who had a recurrence during 1-6 years of follow-up did well after a repeat adhesiolysis. Conclusion: Nonoperative management was unsuccessful in this setting. Open adhesiolysis may be adopted in children to prevent avoidable morbidities and mortalities in settings with limited resources. abstract_id: PUBMED:30627335 Cost effectiveness analysis of duration of nonoperative management for adhesive bowel obstruction in a developing country. Background: Adhesive bowel obstruction (ABO) costs billions of dollars in developed countries. The cost is unknown in developing countries. This depends on the type of management and the duration of hospital stay. Nonoperative management (NOM) of uncomplicated obstruction is safe for up to 10 days. While it remains cost effective, the most efficient duration of nonoperative management must retain its advantages over operative management. Aim: To describe the cost effectiveness of various durations of nonoperative management of adhesive obstruction in a developing country. Method: Over a 2-year period, patients who had uncomplicated adhesive obstruction were observed on a trial of nonoperative management. Length of hospital stay and success rate were combined as surrogates for cost effectiveness analysis of 2 to 5 days and ≥7 days of nonoperative management. Results: 41 patients (24 (58.5%) females) were eligible. Mean age was 38.4 ± 14.7 (range 18-80) years. 31 (75.6%) were first-time admissions. The most common previous abdominal operations were for appendiceal and obstetric and gynecologic pathologies. Median duration of nonoperative management (dNOM) was 4 days, and median LOS was 9 days. Nonoperative management was successful in 53.7% (22 patients). The total estimated direct hospital cost of the 41 adhesive bowel obstructions was $133,279. Total personnel charges were $112,142. Mean costs of operative and nonoperative management were $4,914 and $1,814 respectively (p &lt;0.0001). Most successful nonoperative management occurred within 5 days.
Four days of nonoperative management had the highest cost utility. Conclusion: From this study, in the absence of indications for immediate surgical intervention, 4 days of nonoperative management is the most cost-effective course, after which surgical intervention may be considered if there is no improvement. abstract_id: PUBMED:29507603 Nonoperative management of appendiceal phlegmon or abscess in children less than 3 years of age. Background: In children less than 3 years of age, there is little experience in the nonoperative management of appendiceal phlegmon or abscess (APA), especially in APA with an appendicolith. The purposes of this study were to evaluate the effects of an appendicolith and the success rate of nonoperative management for APA in these young children. Methods: Children younger than 3 years of age with APA who underwent attempted initial nonoperative treatment between January 2008 and December 2016 were reviewed. Based on the presence or absence of an appendicolith on admission ultrasonography examination or computed tomography scan, children were divided into two groups: appendicolith group and no appendicolith group. Results: There were 50 children who met the study criteria. Among 50 children, three children failed to respond to nonoperative treatment because of aggravated intestinal obstruction or recurrent appendicitis within 30 days of admission. The overall success rate for nonoperative management of APA was 94% (47/50) in children younger than 3 years old. The rate of diarrhea and CRP levels were higher in the appendicolith group than in the no appendicolith group (P &lt; 0.05). However, the success rate and hospital length of stay for nonoperative treatment in the appendicolith group and the no appendicolith group were similar, without statistically significant differences. Conclusion: APA with or without an appendicolith can be managed nonoperatively without immediate appendectomy in children less than 3 years old.
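The cost-effectiveness analysis above combines mean cost with success rate. As a rough, hedged illustration of that arithmetic (the per-success ratio below is our own simplification, not the study's published method, and the operative arm is assumed here to be uniformly successful), the reported means can be converted to a cost per successfully treated patient:

```python
# Illustrative sketch: cost per successfully treated patient, using the
# summary figures reported in the abstract (mean nonoperative cost $1,814
# with 53.7% success; mean operative cost $4,914). The per-success division
# is a simplification for illustration only.

def cost_per_success(mean_cost: float, success_rate: float) -> float:
    """Expected cost per successfully treated patient."""
    if not 0 < success_rate <= 1:
        raise ValueError("success_rate must be in (0, 1]")
    return mean_cost / success_rate

nonop = cost_per_success(1814, 0.537)  # nonoperative arm
op = cost_per_success(4914, 1.0)       # operative arm (assumed all successful)

print(f"nonoperative: ${nonop:,.0f} per success")
print(f"operative:    ${op:,.0f} per success")
```

Even under this crude assumption, the nonoperative arm remains cheaper per success, which is consistent with the abstract's conclusion favoring an initial trial of nonoperative management.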
abstract_id: PUBMED:34872712 Nonoperative management without nasogastric tube decompression for adhesive small bowel obstruction. Background: Although nasogastric tube (NGT) decompression is widely used in nonoperative management for adhesive small bowel obstruction (SBO), robust evidence is lacking to support this routine practice. Methods: Patients who received nonoperative management with a diagnosis of adhesive SBO were retrospectively reviewed. Those who received NGT or long-tube decompression at admission were categorized into the NGT group, while those who initially had no NGT placement were categorized into the non-NGT group. The incidence of vomiting after admission, pneumonia after admission, and the need for surgery were compared. Results: Among 288 patients, 148 (51.3%) had non-NGT conservative treatment. There were no significant differences in the incidence of vomiting (NGT vs non-NGT: 12.9% vs 18.9%, p = 0.16), pneumonia (1.4% vs 0%, p = 0.235), or need for surgery (12.9% vs 7.4%, p = 0.126). Conclusions: While NGT decompression is a standard of care for adhesive SBO, selective NGT insertion for patients with persistent nausea or vomiting can become an option. abstract_id: PUBMED:34893310 Variability in the management of adhesive small bowel obstruction in children. Background: This study assessed inter-hospital variability in operative-vs-nonoperative management of pediatric adhesive small bowel obstruction (ASBO). Methods: A multi-institutional retrospective study was performed examining patients 1-21 years-of-age presenting with ASBO from 2010 to 2019 utilizing the Pediatric Health Information System. Multivariable mixed-effects logistic regression was performed assessing inter-hospital variability in operative-vs-nonoperative management of ASBO. Results: Among 6410 pediatric ASBO admissions identified at 46 hospitals, 3,239 (50.5%) underwent surgery during that admission. 
The hospital-specific rate of surgery ranged from 35.3% (95%CI: 28.5-42.6%) to 74.7% (66.3-81.6%) in the unadjusted model (p &lt; 0.001), and from 35.1% (26.3-45.1%) to 73.9% (66.7-79.9%) in the adjusted model (p &lt; 0.001). Factors associated with operative management for ASBO included admission to a surgical service (OR 2.8 [95%CI: 2.4-3.2], p &lt; 0.001), congenital intestinal and/or rotational anomaly (OR 2.5 [2.1-3.1], p &lt; 0.001), diagnostic workup including advanced abdominal imaging (OR 1.7 [1.5-1.9], p &lt; 0.001), non-emergent admission status (OR 1.5 [1.3-1.8], p &lt; 0.001), and increasing number of complex chronic comorbidities (OR 1.3 [1.2-1.4], p &lt; 0.001). Factors associated with nonoperative management for ASBO included increased hospital-specific annual ASBO volume (OR 0.98 [95%CI: 0.97-0.99], p = 0.002), older age (OR 0.97 [0.96-0.98], p &lt; 0.001), public insurance (OR 0.87 [0.78-0.96], p = 0.008), and presence of coinciding non-intestinal congenital anomalies, neurologic/neuromuscular disease, and/or medical technology dependence (OR 0.57 [95%CI: 0.47-0.68], p &lt; 0.001). Conclusions: Rates of surgical intervention for ASBO vary significantly across tertiary children's hospitals in the United States. The variability was independent of patient and hospital characteristics and is likely due to practice variation. Level Of Evidence: III. abstract_id: PUBMED:31708206 Hospital-level factors associated with nonoperative management in common pediatric surgical procedures. Purpose: Our purpose was to examine patient- and hospital-level factors associated with nonoperative management in common pediatric surgical diagnoses. Methods: Using the 2012 Kid's Inpatient Database (KID), we identified patients &lt;20 years old diagnosed with cholecystitis (CHOL), bowel obstruction (BO), perforated appendicitis (PA), or spontaneous pneumothorax (SPTX). Logistic regression models were used to identify factors associated with nonoperative management. 
Results: Of 36,026 admissions for the diagnoses of interest, 7472 (20.7%) were managed nonoperatively. SPTX had the highest incidence of nonoperative management (55.9%; n = 394), while PA had the lowest incidence (9.2%; n = 1641). Utilization of operative management varied significantly between hospitals. Patients diagnosed with BO (OR 0.41; 95% CI 0.30-0.56) and SPTX (OR 0.28; 95% CI 0.14-0.56) had decreased odds of operative management when treated at an urban, teaching hospital compared to a rural hospital. Patients with PA had increased odds of operative management when treated at an urban, teaching hospital (OR 2.42; 95% CI 1.78-3.30). Hospital-level factors associated with decreased odds of nonoperative management included urban, nonteaching status (OR 0.54; 95% CI 0.31-0.91) and location in the South (OR 0.53; 95% CI 0.34-0.83) and West (OR 0.47; 95% CI 0.30-0.74). Conclusions: Despite representing more than 20% of pediatric surgical care for several conditions, nonoperative management is an understudied aspect of care with significant variation that warrants further research. Level Of Evidence: III. abstract_id: PUBMED:33511902 Adhesive Small Bowel Obstruction and the six w's: Who, How, Why, When, What, and Where to diagnose and operate? Background And Aims: Approximately 75% of patients admitted with small bowel obstruction have intra-abdominal adhesions as their cause (adhesive small bowel obstruction). Up to 70% of adhesive small bowel obstruction cases, in the absence of strangulation and bowel ischemia, can be successfully treated with conservative management. However, emerging evidence shows that surgery performed early during the first episode of adhesive small bowel obstruction is highly effective. The objective of this narrative review is to summarize the current evidence on adhesive small bowel obstruction management strategies.
Materials And Methods: A review of the literature published over the last 20 years was performed to assess who, how, why, when, what, and where to diagnose and operate on patients with adhesive small bowel obstruction. Results: Adequate patient selection through physical examination and computed tomography is the key factor of the entire management strategy, as failure to detect patients with strangulated adhesive small bowel obstruction and bowel ischemia is associated with significant morbidity and mortality. The indication for surgical exploration is usually defined as a failure to pass contrast into the ascending colon within 8-24 h. However, operative management with early adhesiolysis, defined as operative intervention on either the calendar day of admission or the calendar day after admission, has recently been shown to be associated with an overall long-term survival benefit compared to conservative management. Regarding the surgical technique, laparoscopy should be used only in selected patients with an anticipated single obstructing band, and there should be a low threshold for conversion to an open procedure in cases of high risk of bowel injuries. Conclusion: Although most adhesive small bowel obstruction patients without suspicion of bowel strangulation or gangrene are currently managed nonoperatively, the long-term outcomes following this approach need to be analyzed in a more exhaustive way, as surgery performed early during the first episode of adhesive small bowel obstruction has been shown to be highly effective, with a lower rate of recurrence. abstract_id: PUBMED:33349571 Comparing outcomes of nonoperative treatment for adhesive small bowel obstruction with and without antibiotics. Introduction: Some clinicians administer antibiotics in adhesive SBO treatment to prevent bacterial translocation without evidence confirming reduced sepsis and mortality.
We aimed to evaluate the effectiveness of preventive antibiotic administration in nonoperative treatment of adhesive small bowel obstruction (SBO) in a retrospective study. Methods: Using a Japanese national inpatient database, we identified 114,786 eligible patients with adhesive SBO and divided patients into a group who did not receive intravenous antibiotics in the initial 2 consecutive days after admission (control group, n = 71,666) and a group who received intravenous antibiotics ≥2 days after admission (antibiotic group, n = 43,120). To compare the in-hospital mortality, occurrence of sepsis, septic shock, Clostridioides difficile colitis, length of stay, and total costs between the two groups, we performed instrumental variable analyses to adjust for measured and unmeasured confounding factors. Results: Overall, in-hospital mortality was 2.2%, and the occurrence of sepsis was 0.8%. In the instrumental variable analyses, no significant differences were found for in-hospital mortality, occurrence of sepsis, septic shock, Clostridioides difficile colitis, or total hospitalization costs. The antibiotic group showed a longer length of stay than the control group (coefficient, 1.9 days; 95% confidence interval, 0.6-3.2). Conclusions: In this large nationwide cohort of patients with adhesive SBO, we found no benefit regarding preventive antibiotic administration in nonoperative treatment; however, antibiotic administration was associated with a longer hospital stay. These results did not support routine administration of antibiotics at admission to prevent bacterial translocation. abstract_id: PUBMED:36898878 The Volume of Intestinal Decompression can Predict the Necessity of Surgical Intervention for Adhesive Small Bowel Obstruction. Background: There is no standard timing for switching to surgical management for children with adhesive small bowel obstruction (ASBO) who initially receive conservative treatment. 
We hypothesized that an increased gastrointestinal drainage volume may indicate the need for surgical intervention. Methods: The study population included 150 episodes in patients less than 20 years of age who received treatment for ASBO in our department from January 2008 to August 2019. Patients were divided into two groups: the successful conservative treatment group (CT) and the eventual surgical treatment group (ST). Following the analysis of all episodes (Study 1), we limited our analysis to only first ASBO episodes (Study 2). We retrospectively reviewed their medical records. Results: There were statistically significant differences in the volume on the 2nd day in both Study 1 (9.1 ml/kg vs. 18.7 ml/kg; p &lt; 0.01) and Study 2 (8.1 ml/kg vs. 19.7 ml/kg; p &lt; 0.01). The cut-off value was the same for both Study 1 and Study 2 (11.7 ml/kg). Conclusions: The gastrointestinal drainage volume on the 2nd day in ST was significantly larger than that in CT. Accordingly, we considered that the drainage volume may predict eventual surgical intervention for children with ASBO who initially receive conservative treatment. Level Of Evidence: Level IV. abstract_id: PUBMED:27022449 Adhesive small bowel adhesions obstruction: Evolutions in diagnosis, management and prevention. Intra-abdominal adhesions following abdominal surgery represent a major unsolved problem. They are the leading cause of small bowel obstruction. Diagnosis is based on clinical evaluation, water-soluble contrast follow-through and computed tomography scan. For patients presenting no signs of strangulation, peritonitis or severe intestinal impairment there is good evidence to support non-operative management. Open surgery is the preferred method for the surgical treatment of adhesive small bowel obstruction, in case of suspected strangulation or after failed conservative management, but laparoscopy is gaining widespread acceptance especially in selected groups of patients.
"Good" surgical technique and anti-adhesive barriers are the main current concepts of adhesion prevention. We discuss current knowledge in modern diagnosis and evolving strategies for management and prevention that are leading to stratified care for patients. Answer: Nonoperative management of adhesive intestinal obstruction in children in a resource-poor country has been explored, but with varying outcomes. A study conducted at the University of Benin Teaching Hospital between January 2002 and December 2008 found that nonoperative management was unsuccessful in all 21 children with adhesive intestinal obstruction due to the lack of total parenteral nutrition and monitoring facilities. These children eventually required open adhesiolysis, which yielded encouraging results, suggesting that open adhesiolysis may be a more suitable option in such settings to prevent morbidities and mortalities (PUBMED:20431212). Another study evaluating the cost-effectiveness of nonoperative management durations for adhesive bowel obstruction in a developing country found that a 4-day trial of nonoperative management was the most cost-effective, after which surgical intervention could be considered if there was no improvement (PUBMED:30627335). In contrast, a study on nonoperative management of appendiceal phlegmon or abscess in children less than 3 years old reported a high success rate of 94% for nonoperative management, indicating that it can be a viable option even in young children (PUBMED:29507603). However, variability in the management of adhesive small bowel obstruction in children across different hospitals has been noted, with significant differences in the rates of surgical intervention, suggesting that practice variation rather than patient or hospital characteristics may influence the decision to operate (PUBMED:34893310). 
In summary, while nonoperative management of adhesive intestinal obstruction in children can be applicable in resource-poor countries, its success may be limited by the availability of essential supportive care such as total parenteral nutrition and monitoring facilities. A short trial of nonoperative management may be cost-effective, but if facilities are inadequate or there is no improvement, surgical intervention may be necessary. Variability in management practices also indicates that there is no one-size-fits-all approach, and decisions should be tailored to the individual patient and the resources available (PUBMED:20431212; PUBMED:30627335; PUBMED:29507603; PUBMED:34893310).
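Several of the abstracts above summarize associations as odds ratios with 95% confidence intervals from logistic regression. For a single unadjusted 2x2 table (unlike the multivariable models used in those studies), the odds ratio and its Wald interval reduce to simple arithmetic; the sketch below uses made-up counts purely to show the standard formula:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI for a 2x2 table with
    nonzero cells:
        a = exposed cases,    b = exposed non-cases,
        c = unexposed cases,  d = unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts (not from any abstract above): 40 of 100 patients
# seen by a specialist were tested, versus 10 of 100 seen elsewhere.
or_, lo, hi = odds_ratio_ci(40, 60, 10, 90)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A confidence interval that excludes 1 (as here) corresponds to a statistically significant association at the 5% level, which is how the intervals quoted in the abstracts should be read.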
Instruction: Underuse of 24-hour urine collection among children with incident urinary stones: a quality-of-care concern? Abstracts: abstract_id: PUBMED:24958480 Underuse of 24-hour urine collection among children with incident urinary stones: a quality-of-care concern? Objective: To determine the prevalence of 24-hour urine collection among first-time pediatric stone formers and investigate factors associated with its use. Materials And Methods: Using data from the MarketScan Commercial Claims and Encounters Database (2002-2006), children presenting to the emergency department with an incident episode of upper urinary tract stone disease were identified through a diagnosis code-based algorithm. The number of patients performing a 24-hour urine collection within 6 months of this encounter was calculated. Temporal trends in the prevalence of use were assessed. We fitted multivariate logistic regression models to examine factors associated with testing. Results: In total, 1848 children presented with an incident upper tract stone, of which 12.0% submitted a 24-hour urine collection. This percentage remained stable over the study period. Testing was more common among younger patients (P &lt;.001) and those who visited urologists (P &lt;.001) or nephrologists (P &lt;.001). The odds of testing were nearly 4-fold or 7-fold higher if the patient saw a urologist (odds ratio, 3.99; 95% confidence interval, 2.83-5.62) or a nephrologist (odds ratio, 7.00; 95% confidence interval, 3.95-12.41), respectively. Conclusion: Despite rates of stone recurrence, 24-hour urine collection appears to be underused among children. Efforts to increase its use are therefore likely to benefit pediatric patients with urinary stone disease. abstract_id: PUBMED:25765427 Re: Underuse of 24-hour urine collection among children with incident urinary stones: a quality-of-care concern? N/A abstract_id: PUBMED:24018242 Prevalence of 24-hour urine collection in high risk stone formers. 
Purpose: Secondary prevention has an important role in urinary stone disease. The core of secondary prevention is the identification of modifiable risk factors by a 24-hour urine collection, which then directs selective medical therapy. While this decreases the recurrence rate, little is known about the frequency with which 24-hour urine collections are obtained. Materials And Methods: Using medical claims from 2002 to 2006 we identified adults with incident urinary stone episodes. With appropriate diagnosis codes we determined those at high risk for recurrence. Of these patients we determined the proportion in whom a 24-hour urine collection was done within 6 months of diagnosis. Finally, we fitted regression models to measure associations between patient and provider level factors, and obtaining a 24-hour urine collection. Results: We identified 28,836 patients at high risk for recurrence. The prevalence of 24-hour urine testing increased from 7.0% in 2003 to 7.9% in 2006 (p = 0.011), although the overall prevalence was exceedingly low at 7.4%. Multivariable regression revealed that region of residence and level of comorbid illness were independently associated with 24-hour urine collection, as was the type of physician who performed the followup. For instance, the odds of metabolic evaluation were 2.9 times higher when a patient was seen by a nephrologist (OR 2.92, 95% CI 2.32-3.67), and more than threefold higher when seen by a urologist (OR 3.87, 95% CI 3.48-4.30). Conclusions: Obtaining 24-hour urine collections in stone formers at high risk is uncommon, raising a quality of care concern. abstract_id: PUBMED:34086154 Improving Compliance with 24-H Urine Collections: Understanding Inadequacies in the Collection Process and Risk Factors for Poor Compliance. Purpose Of Review: The purpose of this review paper is to describe the 24-h urine collection in terms of its utility, collection process, and common problems with its acquisition. 
Recent Findings: Although 24-h urine collections are standard of care for high-risk stone formers, several nuances in test acquisition including inaccurate urine collections 50% of the time and poor patient compliance limit its potential utility. Compliance in obtaining 24-h urine collections has been shown to be improved in patients who have not undergone surgical treatment of urinary calculi, patients with metabolic stone disease or family history of stone disease, Caucasian ethnicity, and in those with more sedentary occupations. Studies show conflicting data of compliance regarding patient age and gender. Physicians must understand the difficulties regarding 24-h urine collections including patient compliance, variability between collections, and complexities with interpretation to best utilize this tool in guiding clinical management for the treatment of nephrolithiasis. abstract_id: PUBMED:25134833 Paediatrics: A 24 h urine collection is essential in nephrolithiasis evaluation. N/A abstract_id: PUBMED:24398534 Carefully conducted preanalytic and postanalytic procedures for urine samples. Often neglected in urolithiasis treatment Particularly in the urological consultation sessions collection of 24 h urine samples is required to evaluate the metabolic risk factors for urolith formation. To ensure a yield from urinalysis of sufficiently high quality and to minimize the number of potential error sources, correctly performed preanalytical procedures are required. If certain basic quality criteria for the collection procedures are not being followed false data can be determined and the clinical interpretation based on that analysis will result in a wrong rating of the true metabolic status of patients. This article provides practical, feasible and proven solutions in an encouraging, motivating and clear way to ensure that at least the most important steps for a correct urine collection are being taken. 
The preanalytical requirements for the determination of other urinary parameters which are not in the direct focus of urolithiasis treatment will not be covered by this article. abstract_id: PUBMED:28705575 Can a Simplified 12-Hour Nighttime Urine Collection Predict Urinary Stone Risk? Objective: To determine if there is correlation between nighttime 12-hour and traditional 24-hour urine collection in regard to chemistry values and the supersaturations of calcium oxalate, calcium phosphate, and uric acid for the metabolic evaluation of nephrolithiasis. Materials And Methods: Ninety-five patients were prospectively enrolled from 2013 to 2015. Patients &gt;18 years of age who presented to a tertiary stone clinic and who would normally be counseled for 24-hour urine collection were eligible for the study. Participants completed 24-hour urine collections twice, with each divided into 2 separate 12-hour collections. Day-time collection began after the first morning void and continued for 12 hours. The night collection proceeded for the next 12 hours through the first morning void. Results: Forty-nine 24-hour samples from 35 patients met inclusion criteria and were included in the analysis. Overall, there was strong correlation between the night 12-hour and the 24-hour urine collections, with R2 ranging from 0.76 for pH to 0.96 for citrate. In our analysis of variability, the nighttime 12-hour collection differed from the 24-hour collection by 30% in 1-9 patients (2.0%-18.4%), depending on the individual chemistry value. Diagnosis of underlying metabolic abnormalities was concordant in 92% of patients. Conclusion: A 12-hour nighttime collection has strong correlation with 24-hour urine collection. As such, simplifying the metabolic evaluation to a 12-hour overnight collection may be feasible, improving compliance and decreasing patient burden. abstract_id: PUBMED:22967013 24-hour urine collection in the metabolic evaluation of stone formers: is one study adequate?
Unlabelled: Purpose: To determine whether one vs two 24-hour urine collections is optimal in the metabolic evaluation of nephrolithiasis. Methods: We retrospectively reviewed all 24-hour urine collections performed at our tertiary stone clinic from July 1997 to February 2012. We identified patients with two 24-hour urine collections performed ≤10 days apart. Samples were analyzed by an outside laboratory for the standard urinary parameters. For each parameter, pairwise t tests were performed and Pearson correlation coefficients were calculated to compare samples 1 and 2. In addition, the number of cases that changed from normal to abnormal or vice versa was also evaluated for each parameter, and the Kappa statistic was calculated. Results: A total of 813 subjects submitted two 24-hour urine collections ≤10 days apart. Mean age was 53.2 years, and mean body mass index was 28.8 kg/m². Based on creatinine 24/kg, subset analysis was performed for all properly collected samples (n=236). Using the pairwise t test, 24-hour urine volume (P=0.0365) and phosphorus (P=0.0387) showed a statistically significant difference between samples 1 and 2. None of the other urinary parameters demonstrated a statistically significant difference when means were compared (pairwise t test, P>0.05; range 0.061-0.9983). Pearson correlation demonstrated a high degree of correlation between the two 24-hour urines for all variables (r=0.66-0.95, each P<0.0001). Depending on the urinary parameter assessed, 5.5% to 44.9% of patients changed from normalcy to abnormality, or vice versa. Conclusions: A single 24-hour urine collection may have changed clinical decision making in up to 45% of patients. Therefore, we recommend two collections to optimize the diagnostic yield and appropriately target stone prevention strategies.
abstract_id: PUBMED:17679383 Risk of lithiasis and urolithiasis in children with unspecific inflammatory bowel disease Unlabelled: Urolithiasis is a disease with a complex, often systemic, and not fully elucidated etiopathological mechanism resulting in concrement crystallization. The aim of the study was to evaluate the frequency of crystallization risk states and urolithiasis in children with IBD. Material And Methods: The study was conducted on 35 children aged between 12 and 18 with confirmed ulcerative colitis (25 children) and Crohn disease (10 children). In all children, regular urine examination and urine culture were performed, as well as estimation of the concrement crystallization risk index in a twenty-four hour urine collection. Urinary tract ultrasound and evaluation of erythrocytes in urine sediment under a phase contrast microscope were also performed. The control group consisted of 20 children without digestive tract complaints. Results: In 13 (37%) of the examined children, urolithiasis occurred in the family history. General urine examination revealed erythrocyturia of various degrees in 27 (77%) of the examined children, and evaluation under a phase contrast microscope indicated its extraglomerular origin. Ion-creatinine analysis of the twenty-four hour urine collection revealed a risk of calcium oxalate concrement crystallization in 29 (82.2%) children and of oxalate concrement crystallization in 6 (17%) children. Ultrasound examination revealed single or multiple concrements in the urinary system in 4 (11.4%) children. Analysis of the twenty-four hour urine collection indicated decreased magnesium ions in 27 (77%) of the examined children. In the control group, a risk of phosphate-ammonium concrement crystallization occurred in only 2 (5.7%) children, significantly lower than in the examined group (p < 0.01).
Conclusions: Nonspecific intestinal inflammations influence the occurrence of crystallization risk states as well as fully symptomatic urolithiasis. abstract_id: PUBMED:36545539 Specialist Care, Metabolic Testing, and Testing Completeness Among U.S. Veterans with Urinary Stone Disease. Purpose: Recent observational studies reporting a lack of benefit from 24-hour urine testing for urinary stone disease (USD) prevention assumed testing included all components recommended by clinical guidelines. We sought to assess the completeness of 24-hour urine testing in the VA population. Materials And Methods: From the VHA Corporate Data Warehouse (2012-2019), we identified patients with USD (n=198,621) and determined those who saw a urologist and/or nephrologist, and received 24-hour urine testing within 12 months of their index USD encounter. Through Logical Observation Identifiers Names and Codes, we evaluated each collection's completeness, defined as including all of urine volume, calcium, oxalate, citrate, uric acid, and creatinine. We then fit a multilevel logistic regression model with random effects for VHA facility to evaluate factors associated with specialist follow-up, testing, and testing completeness. Results: Specialist follow-up occurred in 54.3% and was stable over time. Testing occurred in 8.4%, declining from 9.3% in 2012 to 7.2% in 2019. Of tests performed, 54.6% were complete (43.7% increasing to 62.7% from 2012-2019). In adjusted analysis, there was high between-facility variation in specialist follow-up (median OR 2.0; 95% CI 1.7-2.0), testing (median OR 2.2, 95% CI 1.9-2.4), and testing completeness (median OR, 6.0, 95% CI 4.5-7.3). Individual facilities contributed 52% (intraclass correlation coefficient, 0.52; 95% CI, 0.44-0.57) towards the observed variation in testing completeness. Conclusions: Approximately 1 in 12 U.S. Veterans with USD receive metabolic testing and half of these tests are complete.
Addressing facility level variation in testing completeness may improve USD care. Answer: The underuse of 24-hour urine collection among children with incident urinary stones is indeed a quality-of-care concern. A study using data from the MarketScan Commercial Claims and Encounters Database (2002-2006) found that only 12.0% of children presenting with an incident upper tract stone submitted a 24-hour urine collection within 6 months of the encounter, and this percentage remained stable over the study period (PUBMED:24958480). Despite the known rates of stone recurrence, the underuse of this diagnostic tool suggests that efforts to increase its use could benefit pediatric patients with urinary stone disease. The likelihood of undergoing a 24-hour urine collection was significantly higher if the patient saw a urologist or nephrologist, with odds nearly 4-fold or 7-fold higher, respectively (PUBMED:24958480). The underuse is not limited to pediatric patients; a separate study identified that obtaining 24-hour urine collections in adult stone formers at high risk is also uncommon, with an overall prevalence of only 7.4% (PUBMED:24018242). This raises similar concerns about the quality of care for adult patients. Furthermore, the utility of 24-hour urine collections is often limited by inaccurate urine collections and poor patient compliance. Improving compliance has been shown to be possible in certain patient demographics, such as those who have not undergone surgical treatment, patients with metabolic stone disease or family history of stone disease, Caucasian ethnicity, and those with more sedentary occupations (PUBMED:34086154). In summary, the underuse of 24-hour urine collection among children with incident urinary stones is a quality-of-care concern that extends to adult patients as well. It highlights the need for increased awareness and efforts to improve the use and accuracy of this diagnostic tool in the management of urinary stone disease.
Instruction: Can the Bio-Transfix pin fail during initial ACL graft insertion? Abstracts: abstract_id: PUBMED:20001934 Can the Bio-Transfix pin fail during initial ACL graft insertion? Background: The Bio-Transfix pin is a biodegradable device used for femoral tunnel anterior cruciate ligament (ACL) graft fixation. Recent clinical studies have suggested the possibility of the pin's postoperative failure. Methods: This investigation evaluates the initial strength of several Bio-Transfix pin ACL fixations in a simulated femoral tunnel model. The forces generated by five surgeons during simulated ACL graft tensioning were also measured. Results: Average strengths of the pins ranged from 1075 to 2160 N for 10 and 8 mm tunnels, respectively, whereas the maximum surgeon-generated force was 535 N. Conclusions: These results imply that initial fracture of the pin itself is unlikely; however, failure of the supporting bone or a decrease in pin strength due to biodegradation could account for early loss of the fixation. abstract_id: PUBMED:34627401 Graft tunnel integration occurs early in the tibial tunnel compared with the femoral tunnel after anterior cruciate ligament reconstruction with preserved insertion hamstring tendon graft. Background: Preservation of hamstring tendon insertion at the time of anterior cruciate ligament (ACL) reconstruction is a well-known technique; however, its effect on graft integration is not well studied. The present study was conducted to study the graft integration inside the tibial and femoral tunnels, respectively, after ACL reconstruction using hamstring tendon graft with preserved insertion. Methods: Twenty-five professional athletes who underwent ACL reconstruction using hamstring tendon graft with preserved tibial insertion were enrolled in the study. Functional outcomes were checked at final follow-up using the Lysholm score and Tegner activity scale.
Magnetic resonance imaging (MRI) was done at 8 months and 14 months follow-up to study the graft tunnel integration of the ACL graft at both the tibial and femoral tunnels. Results: The mean fibrous interzone (FI) score (tibial tunnel) decreased from 2.61 (1-5) at 8 months to 2.04 (1-4) at 14 months follow-up (p = 0.02). The mean FI score (femoral side) decreased from 3.04 (2-5) at 8 months to 2.57 (2-4) at 14 months (p = 0.02). Conclusions: Graft integration occurs early in the tibial tunnel as compared with the femoral tunnel with preserved insertion hamstring tendon autograft. Trial registration CTRI/2019/07/020320 [registered on 22/07/2019]; http://www.ctri.nic.in/Clinicaltrials/pdf_generate.php?trialid=33884&EncHid=&modid=&compid=%27,%2733884det%27. abstract_id: PUBMED:22570836 ACL Reconstruction with Autologous Hamstring Tendon: Comparison of Short Term Clinical Results between Rigid-fix and PINN-ACL Cross Pin. Purpose: To compare the short term clinical results of anterior cruciate ligament (ACL) reconstruction with autologous hamstring tendon between Rigid-fix and PINN-ACL Cross Pin for femoral side fixation. Materials And Methods: 127 patients who underwent arthroscopic ACL reconstruction using autologous hamstring tendon and had been followed up for more than one year were enrolled in the present study. Rigid-fix was used in 71 cases (group 1), and PINN-ACL Cross Pin was used in 56 cases (group 2). Clinical and radiological results, operation time, and perioperative complications were compared between the two groups. Results: The International Knee Documentation Committee subjective score and Lysholm score were 94 and 95 in group 1 and 87 and 91 in group 2, respectively, with no statistical difference (p=0.892, p=0.833). However, a significant difference was observed in the one-leg hop test between the two groups (p=0.032).
Five cases in group 1 and 40 cases in group 2 were found to be associated with perioperative complications, a statistically significant difference (p<0.0001). Conclusions: There was no difference in outcomes between PINN-ACL Cross Pin and Rigid-fix as femoral graft fixation for ACL reconstruction with hamstring tendon. However, PINN-ACL Cross Pin led to complications and longer operation times. Hence, its instrumentation needs further improvement to minimize complications. abstract_id: PUBMED:28951643 ACL reconstruction using femoral Rigid-fix and tibial Bio-intrafix devices. Objectives: The objective of this study is to report the clinical and functional outcomes of hamstring graft ACL reconstruction fixed with femoral Rigid-fix and tibial Bio-intrafix devices. Methods: In a prospective study, the clinical (Lysholm score) and functional outcomes (International Knee Documentation Committee, IKDC) of 44 patients who underwent autologous hamstring graft ACL reconstruction using femoral Rigid-fix and tibial Bio-intrafix devices were evaluated at the end of one year. Joint laxity was assessed with the KT-1000 arthrometer (MEDmetric, San Diego, CA). Results: None of the patients complained of instability, joint swelling or severe pain in the postoperative period. The IKDC score improved from 66.62 ± 5.36 to 92.36 ± 5.30 (P < 0.001). Lysholm scores in the preoperative and follow up period were 68.28 ± 5.54 and 93.87 ± 4.75 respectively; the improvement was statistically significant with P value < 0.001. Similarly, the mean anterior translation of the tibia improved from 7.45 mm in the preoperative period to 3.89 mm after one year of ACL reconstruction. Associated meniscus injury did not have a significant impact on the overall outcome. No intraoperative or postoperative complications were documented. Conclusion: Hamstring graft fixation using femoral Rigid-fix and tibial Bio-intrafix devices provides secure graft fixation and allows aggressive rehabilitation.
The clinical and functional outcome of this hybrid fixation technique is rewarding. abstract_id: PUBMED:25343145 Histologic Predictors of Maximum Failure Loads Differ between the Healing ACL and ACL Grafts after 6 and 12 Months In Vivo. Background: Bio-enhanced ACL repair, where the suture repair is supplemented with a biological scaffold, is a promising novel technique to stimulate healing after ACL rupture. However, the histological properties of a successfully healing ACL and how they relate to the mechanical properties have not been fully described. Hypothesis/Purpose: The purpose of the study was to determine which histological features best correlated with the mechanical properties of the healing ACL repairs and ACL grafts in a porcine model at six and twelve months after injury. Study Design: Controlled laboratory study. Methods: Forty-eight Yucatan mini-pigs underwent ACL transection followed by: 1) conventional ACL reconstruction with bone-patellar tendon-bone (BPTB) allograft, 2) bio-enhanced ACL reconstruction with BPTB allograft using a bioactive scaffold, or 3) bio-enhanced ACL repair using the same bioactive scaffold. After 6 and 12 months of healing, structural properties of the ACL or graft (yield & failure load, linear stiffness) were measured. Following mechanical testing, ACL specimens were histologically analyzed for cell and vascular density and qualitatively assessed using the advanced Ligament Maturity Index. Results: We found that after six months of healing, the cellular organization sub-score was most predictive of yield load (r2=0.98), maximum load (r2=0.89) and linear stiffness (r2=0.95) of the healing ACL, while at 12 months, the collagen sub-score (r2=0.68) became the best predictor of maximum load. For ACL grafts, the reverse was true, with the collagen sub-score predictive of yield and maximum loads at six months (r2=0.55), and graft cellularity predictive of maximum load of the graft at 12 months (r2=0.50).
Conclusions: These findings suggest there may be key biologic differences in development and maintenance of ACL tissue after repair or reconstruction, with early ligament function dependent on the cellular population of the repair but early graft function dependent on the maintenance of organized collagen. abstract_id: PUBMED:29253556 Lateral Intercondylar Ridge: Is it a reliable landmark for femoral ACL insertion?: An anatomical study. Background: Incorrect femoral tunnel placement is the most common cause of graft failure during anterior cruciate ligament (ACL) reconstruction. A reliable landmark can minimize errors. Aim: To identify whether the Lateral Intercondylar Ridge (LIR) is a consistent anatomical structure and define its relationship with the femoral ACL insertion. Materials And Methods: Phase 1: We studied 23 femoral dry bone specimens macroscopically. Using a digital microscribe, the medial surface of the lateral femoral condyle was reconstructed (3D) to evaluate whether there was an identifiable bony ridge. Phase 2: 7 cadaveric specimens with intact soft tissues were dissected to identify the femoral ACL insertion. A 3D reconstruction of the femoral ACL insertion and the surface allowed us to define the relationship between the LIR and the ACL insertion. Results: All specimens had a defined LIR on the medial surface of the lateral femoral condyle. The ridge was consistently located just anterior to the femoral ACL insertion. The ACL footprint was present in the depression between the ridge and the Inferior Articular Cartilage Margin (IACM). The mean distance from the midpoint of the IACM to the LIR was 10.1 mm. Conclusions: This is the first study to use the microscribe to digitally reconstruct the medial surface of the lateral femoral condyle. It shows that the LIR is a consistent anatomical structure that defines the anterior margin of the femoral ACL insertion, which guides femoral tunnel placement.
Our findings support the ruler technique, which is a commonly used method for anatomic single bundle ACL reconstruction. abstract_id: PUBMED:28549362 Iliotibial band friction syndrome due to bioabsorbable pins in ACL reconstruction Injuries of the anterior cruciate ligament are currently treated primarily with hamstring grafts, using a wide range of techniques and fixation systems aimed at reducing patient morbidity. We report the case of a 42-year-old female patient who suffered an anterior cruciate ligament rupture and was treated with anterior cruciate ligament reconstruction with a hamstring autograft, with femoral fixation with a bio-absorbable cross-pin and tibial fixation with a bio-absorbable screw. The patient presented with lateral knee pain that was diagnosed one year after the operation as iliotibial band friction syndrome. Imaging studies were performed. X-rays and magnetic resonance imaging demonstrated adequate fixation of the anterior cruciate ligament, with migration and rupture of the proximal bio-absorbable cross-pin. It was decided to remove the bio-absorbable cross-pin fragment in a second surgical procedure, after which the patient returned to her daily activities and sports without pain and with a stable knee. abstract_id: PUBMED:34144682 Can the body slope of interference screw affect initial stability of reconstructed anterior cruciate ligament?: An in-vitro investigation. Background: Superior biomechanical performance of tapered interference screws, compared with non-tapered screws, with reference to the anterior cruciate ligament (ACL) reconstruction process, has been reported in the literature. However, the effect of a tapered interference screw's body slope on the initial stability of the ACL is poorly understood. Thus, the main goal of this study was to investigate the effect of the interference screw's body slope on the initial stability of the reconstructed ACL.
Methods: Based on the best screw-bone tunnel diameter ratios for non-tapered screws, two different tapered interference screws were designed and fabricated. The diameters of both screws were equal to the bone tunnel diameter over one-third of their length from the screw tip, then gradually increased by 1 mm in the lower slope (LSTIS) and by 2 mm in the higher slope (HSTIS) screws. To simulate ACL reconstruction, sixteen soft tissue grafts were fixed in synthetic bone blocks using HSTIS and LSTIS. By applying a sub-failure cyclic incremental tensile load, the graft-bone-screw construct's stiffness and graft laxity were determined in each cycle; by subsequently loading the graft to failure, the maximum load to failure and the graft's mode of failure were determined. Accordingly, the performances of the fabricated interference screws were compared. Results: HSTIS provides greater graft-bone-screw construct stiffness and lower graft laxity compared to LSTIS. Moreover, transverse rupture of graft fibers in the LSTIS group and necking of the graft in the HSTIS group were the major types of graft failure. Conclusions: HSTIS better replicates the intact ACL's behavior compared to LSTIS by causing less damage to the graft's fibers, reducing graft laxity, and increasing fixation stability. Nonetheless, the optimal slope remains unknown and can be the subject of future studies. abstract_id: PUBMED:27104331 A complication following ACL reconstruction using bioabsorbable cross-pins. This is a case of proximal pin migration into the medial soft tissue after ACL reconstruction, with pain, inflammatory reaction, and functional reduction. A 33-year-old male presented at our clinic with a complete ACL rupture. Reconstruction with autogenous gracilis and semitendinosus hamstring tendons was performed, and the graft was fixed in the femoral canal with two PLLA bioabsorbable pins (RIGIDFIX® Cross Pin System).
Two months postoperatively, the patient presented with swelling and pain on the medial side of the knee, full range of motion, and negative results on the Lachman and pivot shift tests. MRI examination showed the superior femoral tunnel crossing both the lateral and medial cortices, lodging the pin in the knee's medial soft tissue, corresponding to the swelling area reported by the patient. The tendon graft was properly positioned. After surgical removal of the pin through a small skin incision, the pain and swelling promptly subsided, allowing the patient to return to normal activities in a few weeks without any pain. In our opinion, the painful swelling of the knee was due to displacement of the pin, which had been accidentally lodged in the soft tissues instead of the bone, causing a foreign-body reaction resulting in granuloma formation with local inflammation. This dislodgement could have been due to an inappropriately long femoral tunnel. abstract_id: PUBMED:36874051 Factors Influencing the Initial Constraint Level of the Knee Joint and Its Effect on Clinical Outcomes After ACL Reconstruction With Hamstring Graft. Background: A force-based tension protocol that uses a certain amount of tension at graft fixation could still give rise to variations in initial constraint levels of the knee joint in terms of side-to-side difference (SSD) in anterior translation. Purpose: To investigate the factors influencing the initial constraint level in anterior cruciate ligament (ACL)-reconstructed knees and compare outcomes according to the level of constraint in terms of anterior translation SSD. Study Design: Cohort study; Level of evidence, 3. Methods: Included were 113 patients who underwent ipsilateral ACL reconstruction using an autologous hamstring graft and had minimum 2-year follow-up outcomes. All grafts were tensioned and fixed at 80 N using a tensioner at the time of graft fixation.
The patients were classified into the following 2 groups according to the initial anterior translation SSD, measured using the KT-2000 arthrometer: a physiologic constraint group with restored anterior laxity ≤2 mm (group P; n = 66) and a high-constraint group with restored anterior laxity >2 mm (group H; n = 47). Clinical outcomes were compared between the groups, and preoperative and intraoperative variables were evaluated to identify factors affecting the initial constraint level. Results: Between group P and group H, generalized joint laxity (P = .005), posterior tibial slope (P = .022), and anterior translation measured in the contralateral knee (P < .001) were found to differ significantly. Measured anterior translation in the contralateral knee was the only significant predictor of high initial graft tension (P = .001). No significant differences were found between the groups regarding clinical outcomes and subsequent surgery. Conclusion: Greater anterior translation measured in the contralateral knee was an independent predictor of a more constrained knee after ACL reconstruction. The short-term clinical outcomes after ACL reconstruction were comparable, regardless of the initial constraint level in terms of anterior translation SSD. Answer: The Bio-Transfix pin is a biodegradable device used for femoral tunnel anterior cruciate ligament (ACL) graft fixation. According to the investigation reported in abstract PUBMED:20001934, the initial strength of several Bio-Transfix pin ACL fixations was evaluated in a simulated femoral tunnel model. The results showed that the average strengths of the pins ranged from 1075 to 2160 N for 10 and 8 mm tunnels, respectively. In comparison, the maximum force generated by surgeons during simulated ACL graft tensioning was measured at 535 N. These findings imply that the initial fracture of the Bio-Transfix pin itself is unlikely during the insertion process.
However, the study suggests that failure of the supporting bone or a decrease in pin strength due to biodegradation could account for early loss of fixation, rather than the pin failing during the initial insertion.
Instruction: Does intraoperative spillage of benign ovarian mucinous cystadenoma increase its recurrence rate? Abstracts: abstract_id: PUBMED:25388920 Intraoperative rupture of benign mucinous cystadenoma does not increase its recurrence rate. Purpose: Laparoscopy has become the standard of care in the management of ovarian benign mucinous cystadenomas. One concern of the operative approach is the risk of recurrence. A previous study has found intraoperative cyst rupture and cystectomy to be the risk factors for recurrence. Our objective was to assess the incidence and risk factors for recurrence of benign mucinous cystadenomas after surgical removal. Methods: We conducted a retrospective analysis of all cases of patients who underwent removal of benign mucinous cystadenomas by either cystectomy or adnexectomy in our institution between December 2006 and June 2013. Results: 722 women underwent surgical treatment for benign ovarian cysts in that time period, out of which 38 women (5.2%) had benign mucinous cystadenomas confirmed by pathologic examination. Most of the patients underwent laparoscopy (94.7%). Half of the patients underwent cystectomy, and the other half underwent adnexectomy. One patient underwent re-operation due to cyst recurrence. This patient initially underwent laparoscopic cystectomy, during which intraoperative cyst rupture did not occur. Conclusion: Recurrence of benign mucinous cystadenomas was uncommon and was not associated with intraoperative cyst rupture. abstract_id: PUBMED:20022314 Does intraoperative spillage of benign ovarian mucinous cystadenoma increase its recurrence rate? Objective: To report a higher than estimated recurrence rate of benign mucinous cystadenomas after complete resection of the first one, and to assess potential risk factors for recurrence after complete surgical excision. 
Study Design: We retrospectively reviewed all cases of women who underwent either laparoscopic or laparotomic removal of benign mucinous adnexal cysts by either adnexectomy or cystectomy in our institution between 1996 and 2006. Results: Included were the data of 42 women who fulfilled study entry criteria. Three of them (7.1%) underwent a second operation because of a recurrence of the lesion. A significantly higher rate of women who had cyst recurrence had undergone cystectomy as opposed to adnexectomy (P < .05). Intraoperative rupture of cysts during cystectomy was also significantly associated with cyst recurrence (P < .03). Conclusion: Mucinous cystadenoma recurrence is apparently not as rare as reported in the literature. Intraoperative cyst rupture and cystectomy instead of adnexectomy emerged as two risk factors for recurrence. abstract_id: PUBMED:26300232 The Relationship between Intraoperative Rupture and Recurrence of Pediatric Ovarian Neoplasms: Preliminary Observations. Study Objective: We investigated whether rupture increased the recurrence rate of pediatric ovarian neoplasms. Design: 20-year single-institution retrospective study. Setting: Tertiary, free-standing, university children's hospital. Participants: All girls with ovarian neoplasms treated between 1991 and 2011. Main Outcome Measure: Tumor recurrence. Results: Fifty-nine tumors in 53 patients were managed, including 51/59 (86%) benign and 8/59 (14%) malignant. Laparotomy was employed in 44/59 (75%), laparoscopy in 8/59 (14%), and laparoscopy converted to laparotomy in 7/59 (12%). Total and partial oophorectomy (cystectomy) was used for 15/51 (29%) and 36/51 (71%) of benign tumors, respectively. All malignant tumors underwent total oophorectomy. Accidental rupture or intentional tumor puncture occurred in 26/56 cases (46%), 23/51 benign and 3/5 malignant.
Rupture was associated with increasing cyst size on univariate and multivariate analyses (p = 0.002 and p = 0.004, respectively). There were 5 recurrences (9%) in 4 patients, including 4 benign (3 mature teratomas, 1 mucinous cystadenoma), and 1 malignant yolk sac tumor. Recurrence occurred in 2/30 (7%) without rupture and 3/26 (12%) with rupture, p = 0.66. Follow-up was available for 50/53 patients (94%), with a median of 23.8 months [range 0.2-189 months]. All recurrences were salvaged by surgery. Conclusions: In this limited study, intra-operative rupture did not increase the recurrence rate or worsen the prognosis of pediatric ovarian neoplasms. abstract_id: PUBMED:36496105 Recurrence Rates for Pediatric Benign Ovarian Neoplasms. Study Objective: To examine the recurrence rates of pediatric benign ovarian neoplasms. Methods: A retrospective review of females up to 21 years of age who underwent surgery for a benign ovarian neoplasm at 8 pediatric hospitals from January 2010 through December 2016 was conducted. Data include primary operation details, follow-up imaging, and reoperation details. Results: Four hundred and twenty-six females were included in our cohort, with a median age of 15 years at the time of the primary operation. Of the patients, 69% had a mature teratoma, 18% had a serous cystadenoma, and 8% had a mucinous cystadenoma. Two-thirds of patients underwent ovarian-sparing surgery. There were 11 pathologically confirmed recurrences (2.6%) at a median follow-up of 12.8 months. The pathologically confirmed recurrence rate was 10.5 per 100 person-months at 12 months (SE = 5.7) for mucinous cystadenomas and 0.4 per 100 person-months (SE = 0.4) for mature teratomas (P = .001). For half of the patients, the pathologically confirmed recurrences occurred by 12.8 months, and for 75%, they occurred by 23.3 months.
Conclusion: We measured the pathologically confirmed recurrence rate for pediatric benign ovarian neoplasms in a large cohort. Oophorectomy was not protective against recurrence. Mucinous cystadenomas were at a greater risk of pathologically confirmed recurrence. abstract_id: PUBMED:30868032 Benign Ovarian Tumors in Pregnancy: A Case Report of Metachronous Ipsilateral Recurrent Mucinous Cystadenoma in Initial Pregnancy and Mature Cystic Teratoma in Subsequent Pregnancy. Mucinous cystadenomas of the ovary are benign epithelial neoplasms that can grow rapidly during pregnancy. They may cause ovarian torsion, virilization, inferior vena cava syndrome, and even preterm labor and fetal growth restriction. Various theories exist regarding the pathogenesis of these tumors. One hypothesis suggests that they may arise from teratomas. Our case report describes synchronous mucinous cystadenomas and ovarian teratomas, as well as metachronous mucinous cystadenomas in patients with a history of ovarian teratoma. There has been no report of metachronous ipsilateral teratoma after previous mucinous cystadenoma. We present a 22-year-old female with a history of bilateral ovarian tumors in a prior pregnancy noted to have a recurrent ovarian mass on her left ovary at the time of cesarean section of a subsequent pregnancy. She had two prior cystectomies for metachronous mucinous cystadenomas of her left ovary, and a right salpingo-oophorectomy for ovarian torsion in her previous pregnancy. In her current pregnancy, she developed a mature cystic teratoma of the remaining left ovary. The rapid growth and recurrence rate of these tumors highlight the importance of close surveillance of ovarian masses during pregnancy, even those that seem benign. In this case, a history of unilateral salpingo-oophorectomy with multiple contralateral cystectomies did not appear to affect her fertility.
Her future ovarian reserve is unknown, pointing to the need for adequate pre-operative counseling in similar cases of ovarian masses in pregnancy. abstract_id: PUBMED:36882266 Benign multicystic peritoneal mesothelioma mimicking mucinous ovarian neoplasm with pseudomyxoma peritonei. Cystic mesothelioma, also known as benign multicystic peritoneal mesothelioma (BMPM), is a rare neoplasm and one of the five main histological types of peritoneal mesothelioma. Although it is usually considered benign histologically, it is increasingly accepted as a 'borderline malignancy' because the local recurrence rate is high. It is more common in middle-aged women and is generally asymptomatic. Considering that BMPM is often found in the pelvis, it is difficult to differentiate from other pelvic and abdominal lesions, such as cystic ovarian masses, especially mucinous cystadenoma-adenocarcinoma, pseudomyxoma peritonei, etc. Preoperative diagnosis is very difficult due to its rarity and lack of specific diagnostic imaging and clinical findings. Definitive diagnosis is made only by pathological evaluation. In this article, we aimed to present the imaging findings of a BMPM case in a woman who was diagnosed with mucinous ovarian neoplasm with pseudomyxoma peritonei preoperatively and underwent cytoreductive surgery with hyperthermic intraperitoneal chemotherapy. abstract_id: PUBMED:34843973 Progression of Cystadenoma to Mucinous Borderline Ovarian Tumor in Young Females: Case Series and Literature Review. Study Objective: To study the progression of benign ovarian lesions to mucinous borderline ovarian tumors (mBOTs); analyze the clinicopathologic features, diagnosis, and management of mBOTs in pediatric and adolescent girls; and provide a review of the literature on mBOTs in this population. Design: Retrospective chart review of female adolescents younger than 18 years diagnosed with mBOTs between July 2017 and February 2021. 
Setting: Yale New Haven Hospital, New Haven, Connecticut; and Yale New Haven Health Bridgeport Hospital, Bridgeport, Connecticut. Participants: Three female patients diagnosed with mBOTs between ages 12 and 17 years. Interventions: None. Main Outcome Measures: Clinical presentation, preoperative characteristics, surgical technique, histology, tumor stage, treatment, progression, outcome, and rate of recurrence. Results: Three adolescent patients were identified to have mBOTs. All three patients presented with a chief complaint of abdominal pain. One of the 3 patients was premenarchal at presentation. Two of the 3 patients were initially diagnosed with a mucinous cystadenoma and had recurrences of an ovarian cyst in the same ovary within 5 and 17 months, respectively. Pathology of the recurrent cyst was consistent with mBOT. Two of the 3 patients initially underwent cystectomy, and all ultimately had a unilateral salpingo-oophorectomy. Subsequent surveillance over 2 to 4 years found no evidence of disease recurrence. Conclusion: mBOTs are rare in the pediatric and adolescent population and could arise from benign ovarian tumors. abstract_id: PUBMED:19899396 Diagnosis and treatment of borderline ovarian neoplasms "the state of the art". The 5-year survival for women with Stage-I borderline tumours (BOT) is favourable, about 95-97%, but the 10-year survival is only between 70 and 95%, caused by late recurrence. The 5-year survival for Stage II-III patients is 65-87%. Standard primary surgery includes bilateral SOEB, omentectomy, peritoneal washing and multiple biopsies. Second cytoreductive surgery is recommended for patients with recurrent disease. Adjuvant postoperative therapy is not indicated in Stage-I diploid tumors. Occasional responses to chemotherapy have been reported in advanced BOTs but no study has shown improved survival. 
Recently a new theory has been developed describing a subset of serous ovarian cystadenomas that evolve through serous BOT to low-grade carcinoma. A more correct staging procedure, classification of true serous implants and agreement on the contribution to stage of the presence of gelatinous ascites in mucinous tumours may in the future change the distribution of stage and survival data by stage for women with BOT. Independent prognostic factors in patients with epithelial ovarian BOT without residual tumour after primary surgery are DNA-ploidy, international FIGO-stage, histologic type and patient age. Studies on other molecular markers have not yet uncovered a reliable prediction of biologic behaviour, however, there is hope that future studies of genetics and molecular biology of these tumours will lead to useful laboratory tests. Future questions to be addressed in this review include the following: Have patients with borderline tumours in general been over-treated and how should these patients be treated? How to define the high-risk patients? In which group of patients is fertility-sparing surgery advisable, and do patients with borderline tumours benefit from adjuvant treatment?
Peritoneal mucinous tumors with the histologic features of carcinoma (peritoneal mucinous carcinomatosis, PMCA) are also rarely ovarian in origin and are distinguished from DPAM, even though they may produce abundant mucin, because PMCA and DPAM are pathologically and prognostically distinct. We report three cases of PMP (mucinous ascites) associated with ruptured mucinous tumors arising in ovarian mature cystic teratomas. Two tumors contained bland to low-grade adenomatous mucinous epithelium associated with dissecting mucin, identical to the mucinous tumors that secondarily involve the ovaries and peritoneum in PMP/DPAM derived from ruptured appendiceal mucinous adenomas. The third was composed of both low-grade adenomatous tumor and areas of mucinous carcinoma. In all cases the appendices were microscopically normal. The mucinous ascites associated with the low-grade tumors contained only a few fragments of detached bland mucinous epithelium in one and none in the other. The mucinous ascites associated with the higher-grade tumor contained one fragment of atypical mucinous epithelium. All three mucinous tumors were cytokeratin 20-positive and cytokeratin 7-negative, consistent with a lower gastrointestinal tract-type rather than primary ovarian-type mucinous tumor immunophenotype. Mucinous tumors arising in ovarian mature cystic teratomas are morphologically and immunohistochemically consistent with gastrointestinal tract-type mucinous tumors, which likely arise from gut elements of the teratoma. Rupture can on rare occasions produce mucinous ascites containing very scant mucinous epithelium, but additional follow-up will be required to determine whether these ovarian tumors ever lead to recurrent disease accompanied by the characteristic peritoneal lesions of DPAM or PMCA. Such tumors probably represent the only cases of ovarian origin of PMP. 
abstract_id: PUBMED:10857496 Duodenum-preserving pancreatic head resection in patients with benign and borderline tumors of the pancreatic head. Background: Benign tumors of the pancreas are rare, accounting for only 1-2% of primary pancreatic lesions. Up to now, partial duodenopancreatectomy is still one of the established forms of treatment of benign tumors of the pancreas. We applied duodenum-preserving pancreatic head resection in 12 patients with benign pancreatic tumors to evaluate the feasibility, morbidity and recurrence rates after this less aggressive method. Methods: Between April 1984 and December 1999, 12 patients with benign and borderline tumors of the pancreatic head were operated on by duodenum-preserving pancreatic head resection. Results: All five patients with serous cystadenoma are free of recurrence 4.4 years after primary resection. One of two patients with mucinous cystadenoma and one of three patients with intraductal papillary mucinous tumor developed recurrent tumor within the former pancreatic head 5 years and 6 years, respectively, after the primary operation. Both patients were resected a second time. One of two patients with gastrinoma still has elevated serum gastrin levels. There was no hospital or long-term mortality. Conclusion: For a symptomatic serous cystadenoma, duodenum-preserving pancreatic head resection is a good alternative to partial duodenopancreatectomy. In borderline tumors with malignant potential, we would rather suggest a more radical duodenum-preserving segmental resection. A video clip (3 min) is attached demonstrating the basic steps of duodenum-preserving pancreatic head resection. Answer: The relationship between intraoperative spillage of benign ovarian mucinous cystadenoma and its recurrence rate appears to be a subject of debate in the literature. Some studies suggest that intraoperative cyst rupture is associated with an increased risk of recurrence. 
For instance, one study of women who underwent cystectomy reported that intraoperative rupture of cysts during cystectomy was significantly associated with cyst recurrence (PUBMED:20022314). This suggests that intraoperative spillage may indeed increase the recurrence rate of benign ovarian mucinous cystadenomas. However, other studies have found contrasting results. One study concluded that recurrence of benign mucinous cystadenomas was uncommon and was not associated with intraoperative cyst rupture (PUBMED:25388920). Similarly, a study focusing on pediatric ovarian neoplasms found that intraoperative rupture did not increase the recurrence rate or worsen the prognosis of pediatric ovarian neoplasms (PUBMED:26300232). Another study on pediatric benign ovarian neoplasms also found no differences in reoperation or recurrence based on the initial procedure, suggesting that intraoperative spillage may not significantly impact recurrence rates (PUBMED:36496105). Given these conflicting findings, it is difficult to make a definitive statement about the impact of intraoperative spillage on the recurrence rate of benign ovarian mucinous cystadenomas. It appears that while some evidence points to an increased risk, other studies have not found a significant association. Therefore, the question may require further research to resolve these discrepancies and provide a more conclusive answer.
Instruction: Is the type of dissection in lateral neck metastasis for differentiated thyroid carcinoma important? Abstracts: abstract_id: PUBMED:17547987 Is the type of dissection in lateral neck metastasis for differentiated thyroid carcinoma important? Objective: This study aimed to compare the types of therapeutic neck dissection in patients with differentiated thyroid carcinoma. Study Design And Setting: Sixty-one patients with lymph node metastasis in the neck, treated between 1997 and 2001, were studied retrospectively. A comparative study was made of a selective lateral neck dissection group and a radical or modified radical neck dissection group for recurrence, disease free survival (DFS), and overall survival (OS). Results: Type of dissection was not related to DFS (P=0.92), OS (P=0.33), and local recurrence ratio (P=0.56). The factors affecting local recurrence were age over 45 years (P=0.02), tumor size (P=0.005), and the presence of distant metastasis (P=0.04). The factors affecting DFS and OS were tumor size (P=0.003) and thyroid capsule invasion (P=0.004). Conclusions: Determination of the type of therapeutic neck dissection depends on patient and tumor characteristics. Selective lateral neck dissection can be applied safely in selected cases. abstract_id: PUBMED:37424860 Lateral neck dissection for the treatment of synchronous and metachronous lateral neck metastasis of N1b papillary thyroid cancer. Introduction: Metachronous lateral neck recurrence after thyroidectomy for N1b papillary thyroid cancer is accompanied by high morbidity and increased difficulty of reoperation. From the perspective of recurrence, the objective of this study was to compare patients who underwent metachronous lateral neck dissection (mLND) despite initial thyroidectomy and patients who underwent synchronous lateral neck dissection (sLND) for papillary thyroid cancer and analyze the risk factors for recurrence after mLND.
Method: This retrospective study involved 1,760 patients who underwent lateral neck dissection for papillary thyroid cancer at the Gangnam Severance Hospital, a tertiary medical center in Korea, from June 2005 to December 2016. The primary outcome was structural recurrence, and secondary outcome measures were risk factors of recurrence in the mLND group. Result: A total of 1,613 patients underwent thyroidectomy and sLND at diagnosis. In 147 patients, thyroidectomy alone was performed at the time of diagnosis, and mLND was performed when recurrence to the lateral neck lymph node was confirmed. During a median follow-up of 102.1 months, 110 (6.3%) patients experienced a recurrence. There was no significant difference in the recurrence between the sLND and mLND groups (6.1% vs 8.2%, P=.32). The period from lateral neck dissection to recurrence was longer in the mLND group than in the sLND group (113.6 ± 39.4 months vs 87.0 ± 33.8 months, respectively, P<.001). Age ≥50 years (adjusted HR=5.209, 95% CI=1.359-19.964; P=.02), tumor size >1.45 cm (adjusted HR=4.022, 95% CI=1.036-15.611; P=.04), and lymph node ratio in the lateral compartment (adjusted HR=4.043, 95% CI=1.079-15.148; P=.04) were independent variables predictive of recurrence after mLND. Conclusion: mLND is suitable for treating lateral neck recurrence in patients with N1b papillary thyroid cancer who previously underwent thyroidectomy. Lateral neck recurrence after treatment in patients who underwent mLND was predicted by age, tumor size, and lymph node ratio in the lateral compartment. abstract_id: PUBMED:36386534 Endoscopic lateral neck dissection via the breast and transoral approaches for papillary thyroid carcinoma: A preliminary report. Purpose: Complete lymph node dissection is essential for the management of papillary thyroid carcinoma (PTC) with lymph node metastasis (LNM).
This work aimed to describe the feasibility of endoscopic lateral neck dissection via the breast and transoral approach (ELNDBTOA) in PTC patients and the necessity of the addition of the transoral approach. Methods: We included 13 patients with PTC and suspected lateral LNM who underwent ELNDBTOA at the Zhongshan Hospital, Xiamen University. Total thyroidectomy, ipsilateral central lymph node dissection, and selective neck dissection (levels IIA, IIB, III, and IV) were performed endoscopically via the breast approach. Residual lymph nodes were further dissected via the transoral approach. Results: The mean operation time was 362.1 ± 73.5 min. In the lateral neck compartments, the mean number of retrieved lymph nodes was 36.6 ± 23.8, and the mean number of positive lymph nodes was 6.8 ± 4.7. In further dissection via the transoral approach, lymph nodes in the lateral neck compartment were obtained in nine patients (9/13, 69.2%), and three patients (3/13, 23.1%) had confirmed lateral neck metastases. Transient hypocalcemia occurred in two patients (2/13, 15.4%), and three patients (3/13, 23.1%) developed transient skin numbness in the mandibular area. No other major complications were observed. There was no evidence of local recurrence or distant metastasis during the follow-up period (range, 24-87 months). All patients were satisfied with the good cosmetic outcome. Conclusion: ELNDBTOA is an option with proven feasibility for select PTC patients with LNM, and the addition of the transoral approach is necessary to ensure complete dissection. abstract_id: PUBMED:30536753 Extent of lateral neck dissection for papillary thyroid microcarcinomas. Background: Papillary thyroid microcarcinomas (PTMCs) often demonstrate lateral lymph node (LN) metastasis. The purpose of this study was to assess whether the size of primary tumor should be considered in deciding lateral neck dissection extent. 
Methods: We conducted a retrospective comparison of patients with PTMC and papillary thyroid carcinoma (PTC) larger than 1 cm who received lateral LN dissection from 2009 to 2015. Results: PTMC group had more frequent metachronous metastasis, multifocality, less involvement of lower third thyroid, lower metastatic central LN counts and ratio, and lower metastatic total LN ratio. The two groups showed no difference in lateral LN metastatic pattern. Male and lymphovascular invasion were more often found in PTMCs with synchronous lateral LN metastatic events than metachronous presentations. Conclusions: Relying on the size of the tumor to determine lateral neck dissection extent could be unreliable, because there is no observed difference in lateral neck metastatic count and pattern between PTC and PTMCs. abstract_id: PUBMED:34046713 Robotic transaxillary lateral neck dissection for thyroid cancer: learning experience from 500 cases. Background: Differentiated thyroid carcinoma with lateral neck lymph node metastasis requires aggressive operative intervention, including lateral neck dissection. Although several robotic approaches have made precise surgery for thyroid cancer possible, few centers have expanded the technique for application to lateral neck dissections. This study aimed to demonstrate the technical feasibility, cosmetic effectiveness, and safety of robotic transaxillary lateral neck dissection (RTLND) using the da Vinci system. Methods: From January 2008 to July 2019, 500 patients diagnosed with thyroid cancer with lateral neck node metastasis underwent RTLND. The clinicopathologic characteristics and surgical outcomes were retrospectively reviewed. Results: All operations were performed successfully without open conversion. As the primary operation for thyroid cancer, 476 (95.2%) patients underwent unilateral or bilateral RTLND, including robotic total thyroidectomy. 
The remaining 24 patients (4.8%), all of whom had a recurrence, also underwent RTLND with additional procedures, if needed. The mean operation time for the 500 operations was 293.71 ± 67.22 min. Only five cases had recurrence and required further treatment. Conclusions: RTLND is technically feasible and safe through the precise manipulation of robotic instruments. While this method is thorough and provides safe and effective surgical outcomes, it also offers the additional advantage of being minimally invasive. abstract_id: PUBMED:37085417 Level II lateral neck dissection for papillary thyroid carcinoma: A retrospective cohort study. Background: For N1b papillary thyroid carcinoma (PTC) patients, lateral neck dissection encompassing levels Ⅱ-Ⅴ is generally recommended. However, routine level Ⅱ dissection is controversial given the low incidence of metastasis, and potential complications such as increased shoulder syndrome. Methods: Retrospective analysis of consecutive patients with papillary thyroid carcinoma who underwent lateral neck dissection at a single institution from January 2019 to April 2021 was performed. Clinicopathological features such as age, gender, tumor location, tumor size, TgAb and TPOAb levels, capsular invasion, multifocality and lymph node metastases were examined to evaluate the occurrence of metastatic Level Ⅱ lymph nodes. Results: Overall and occult level Ⅱ metastases were observed in 51.83% and 34.84% of cN1b PTC patients. Multivariant analysis showed that primary tumor, location of primary tumor and positive level Ⅴ can serve as independent risk factors of metastasis in level Ⅱ. For cN1b PTC patients not suspected of level Ⅱ lymph nodes preoperatively, independent risk factors for predicting occult level Ⅱ metastases may include the location of primary tumor, positive level Ⅲ and positive level Ⅴ. 
Conclusion: A significant number of patients with PTC and lateral neck disease experienced Level Ⅱ metastasis, with the location of primary tumor and multilevel lymph node involvement being the independent risk factors. If the tumor is less than 1 cm and located in the lower two-thirds of the lobe, there is minimal possibility of level Ⅱ lymph node metastasis. abstract_id: PUBMED:33915059 Lateral Neck Dissection for Well-Differentiated Thyroid Carcinoma: Is Prophylactic Level V Neck Dissection Necessary? A Retrospective Cohort Study. Objective: The presence of clinically detectable papillary thyroid carcinoma (PTC) metastases in the lateral neck is an indication for neck dissection (ND) and thyroidectomy. Although there is a consensus regarding the importance of therapeutic selective ND of involved levels II to IV in patients with clinically evident locoregional metastatic disease, the prognostic benefit of level V prophylactic ND remains debatable. Methods: All patients who underwent thyroidectomy with ND for metastatic PTC between 2006 and 2019 were included in a single-institution retrospective study. Preoperative characteristics at initial presentation, imaging workup, intraoperative findings, and the final histopathological reports were retrieved from the institutional database. Results: A total of 189 patients with locally advanced PTC were identified, of whom 22 (11.6%) patients underwent therapeutic selective ND at levels II to IV together with level V dissection due to clinical involvement. Comparison of the patients who were operated on level V to those who were not revealed no significant difference. The disease recurrence rate was 20.1% throughout an average follow-up of 5.1±3.1 years. No significant differences in recurrence rate were found between patients who underwent and those who did not undergo level V ND (22.7% vs 19.8%, P = .648).
No recurrence at resected level V was detected during follow-up, while recurrence at level V was found in 4 (2.1%) patients who did not undergo level V dissection. Evidence of macroscopic and microscopic extrathyroidal extension were significant predictors of disease recurrence risk. Conclusion: There were no significant associations between level V dissection and risk for recurrence. Recurrence at level V was rare (4/189 patients, 2.1%). Our study's findings suggest a low prophylactic benefit of an elective level V ND. Elective level V ND should not be done routinely when lateral ND is indicated but should rather be considered after careful evaluation in high-risk patients. abstract_id: PUBMED:28963802 Therapeutic lateral neck dissection in well-differentiated thyroid cancer: Analysis on factors predicting distribution of positive nodes and prognosis. Background: Neck dissection is considered the treatment of choice in patients with lateral neck metastases from well-differentiated thyroid cancer. Methods: A multicenter, retrospective review of patients who underwent therapeutic lateral neck dissection for well-differentiated thyroid carcinoma was carried out. Results: The study included a total of 405 lateral neck dissections performed in 352 patients; 197 women (56%) and 155 men (44%). When considering ipsilateral neck metastases, levels IIa, IIb, III, IV, Va, Vb, and V (not otherwise specified) were involved in 42%, 6%, 73%, 67%, 11%, 31%, and 35% of cases, respectively. Five-year and 10-year overall survival (OS) were 93% and 81%, respectively. Age >55 years, pathologic T (pT)4 category, tumor diameter >4 cm, aggressive variants of well-differentiated thyroid carcinoma, endovascular invasion, and number of positive nodes >5 turned out to be the most important prognostic factors. Conclusion: Neck dissection is a valid treatment option in the presence of neck metastasis from well-differentiated thyroid carcinoma.
Levels IIa, III, IV, and Vb should always be removed. abstract_id: PUBMED:33614828 Lateral Neck Dissection for Papillary Thyroid Cancer. Introduction: Lateral neck metastases occur in ~15% of papillary thyroid cancer and in ~40% of medullary thyroid cancer. We present herein a systematic approach to a standard comprehensive lateral neck dissection, with attention to specific areas where thyroid cancer lymph node metastases may be missed during surgery. Materials and Methods: Video demonstration of a comprehensive levels 2a, 3, 4, and 5b lateral neck dissection for thyroid cancer. Results: A systematic step-wise approach to a standard comprehensive lateral neck dissection for thyroid cancer, inclusive of levels 2a, 3, 4, and 5b, is demonstrated. Areas where thyroid cancer lateral neck lymph nodes can be missed are noted, including low level 4 nodes and carotid-vertebral nodes, level 5B nodes, and subdigastric level 2 nodes medial to the carotid artery. Conclusion: A step-wise systematic approach to a comprehensive lateral neck dissection for thyroid cancer may lower the risk for missed thyroid cancer lateral neck lymph node metastases. No competing financial interests exist. Runtime of video: 8 mins 35 secs. abstract_id: PUBMED:38268664 Can Unilateral Therapeutic Central Lymph Node Dissection Be Performed in Papillary Thyroid Cancer with Lateral Neck Metastasis? Objectives: Unilateral or bilateral prophylactic central neck dissection (CND) in papillary thyroid cancer (PTC) is still controversial. We aimed to evaluate the risk factors for contralateral paratracheal lymph node metastasis and whether CND might be performed unilaterally. Methods: Prospectively collected data of patients who underwent bilateral CND and lateral neck dissection (LND) with thyroidectomy due to PTC with lateral metastases, between January 2012 and November 2019, were evaluated retrospectively. 
The patients were divided into two groups according to the presence (Group 1) and absence (Group 2) of metastasis in the contralateral paratracheal region. A total of 42 patients (46 ± 15.7 years) were operated on. In the contralateral paratracheal region, Group 1 (35.7%) had metastases, while Group 2 (64.3%) had no metastases. In groups 1 and 2, metastasis rates were 100% vs 77.8% (p=0.073), 46.7% vs 18.5% (p=0.078), and 80% vs 40.7% (p=0.023) for the ipsilateral paratracheal, prelaryngeal and pretracheal lymph nodes, respectively. The number of metastatic lymph nodes in the central region was significantly higher in Group 1 compared to Group 2: 10.7±8.4 vs. 2.6±2.4 (p=0.001) in bilateral central region material; 8.3±7.4 vs. 2.9±2.7 (p=0.001) in lateral metastasis with ipsilateral unilateral central region; 3.8±3.4 vs. 1.9±1.9 (p=0.023) in the ipsilateral paratracheal area; and 3.7±4.6 vs. 0.6±0.9 (p=0.001) in the pretracheal region, respectively. However, no significant difference was found regarding the prelaryngeal region material (0.9±1.8 vs. 0.2±0.4; p=0.71). Results: >2 metastatic central lymph nodes in unilateral CND material (AUC: 0.814, p<0.001, J=0.563) can estimate contralateral paratracheal metastasis with 93% sensitivity, 63% specificity, while >2 pretracheal metastatic lymph nodes (AUC: 0.795, p<0.001, J: 0.563) can estimate contralateral paratracheal metastasis with 60% sensitivity and 96.3% specificity. Conclusion: In patients with lateral metastases, the rate of ipsilateral paratracheal metastasis is 85%, while the rate of contralateral paratracheal metastasis is 35.7%. The number of ipsilateral central region or pretracheal lymph node metastases may be helpful in predicting contralateral paratracheal lymph node metastases. Notably, unilateral CND may be performed in the presence of ≤ 2 metastases in the ipsilateral central region.
Answer: The type of dissection in lateral neck metastasis for differentiated thyroid carcinoma appears to be an important consideration, but its importance may vary depending on patient and tumor characteristics. According to a study comparing selective lateral neck dissection with radical or modified radical neck dissection, the type of dissection was not related to disease-free survival (DFS), overall survival (OS), or local recurrence ratio. Instead, factors such as age over 45 years, tumor size, and the presence of distant metastasis affected local recurrence, while tumor size and thyroid capsule invasion affected DFS and OS. This suggests that selective lateral neck dissection can be safely applied in selected cases (PUBMED:17547987). Another study on lateral neck dissection for synchronous and metachronous lateral neck metastasis of N1b papillary thyroid cancer found no significant difference in recurrence between synchronous lateral neck dissection (sLND) and metachronous lateral neck dissection (mLND) groups. However, age ≥50 years, tumor size >1.45 cm, and lymph node ratio in the lateral compartment were independent variables predictive of recurrence after mLND (PUBMED:37424860). Furthermore, a study on endoscopic lateral neck dissection via the breast and transoral approaches for papillary thyroid carcinoma reported that this method is feasible for select patients with lymph node metastasis (LNM), ensuring complete dissection (PUBMED:36386534). In the context of papillary thyroid microcarcinomas (PTMCs), the extent of lateral neck dissection should not rely solely on the size of the primary tumor, as there was no observed difference in lateral neck metastatic count and pattern between PTMCs and larger papillary thyroid carcinomas (PUBMED:30536753). Robotic transaxillary lateral neck dissection (RTLND) has also been shown to be technically feasible, safe, and effective, with the added benefit of being minimally invasive (PUBMED:34046713). 
In summary, while the type of dissection may not directly impact survival outcomes, it is important to consider patient and tumor characteristics when determining the type of therapeutic neck dissection for differentiated thyroid carcinoma. Selective and less invasive approaches can be safely applied in certain cases, and factors such as age, tumor size, and lymph node ratio should be considered when assessing the risk of recurrence.
Instruction: Children's dairy intake in the United States: too little, too fat? Abstracts: abstract_id: PUBMED:18035145 Children's dairy intake in the United States: too little, too fat? Objective: To compare reported dairy/calcium intake with intake recommendations and to examine food sources and fat levels of dairy intake in the National Health and Nutrition Examination Survey 1999-2002. Study Design: Dietary, anthropometric, and sociodemographic data for 2- to 18-year-olds (n = 7716) were evaluated to compare dairy and calcium intakes with MyPyramid and Adequate Intake (AI) recommendations. US Department of Agriculture food codes were used to identify mutually exclusive food groups of dairy-contributing foods, which were ranked in descending order proportional to total intake. Complex sample survey Student t tests were used to determine statistical significance among intakes in 4 age groups and between reported and recommended intakes. Results: Dairy consumption was not significantly different among age groups, but only 2- to 3-year-olds met the MyPyramid recommendation. Calcium intake was significantly different among age groups, and 2- to 8-year-olds met the AI. Intake of flavored milk ranged from 9% to 18%. More than half of the milk consumed by 2- to 3-year-olds was whole milk, and, with the exception of yogurt consumption in 2- to 3-year-olds, children chose to consume more of the highest-fat varieties of cheese, yogurt, ice cream, and dairy-based toppings. Conclusions: Dairy and calcium intakes are inadequate in 4- to 18-year-olds. Most children consume the high-fat varieties of milk and dairy products. Focusing nutrition guidance efforts on increasing the intake of the low-fat dairy products, with special emphasis on increasing calcium intake in school-age children and adolescents through flavored low-fat milk products, may be beneficial.
abstract_id: PUBMED:32119732 Whole-Fat or Reduced-Fat Dairy Product Intake, Adiposity, and Cardiometabolic Health in Children: A Systematic Review. Dietary guidelines commonly recommend that children aged >2 y consume reduced-fat dairy products rather than regular- or whole-fat dairy. In adults, most studies have not found the consumption of whole-fat dairy products to be associated with increased cardiometabolic or adiposity risk. Associations in children could differ due to growth and development. We systematically reviewed the literature in indexed, peer-reviewed journals to summarize pediatric studies (children aged from 2 to 18 y) assessing associations between whole- and reduced-fat dairy intake and measures of adiposity as well as biomarkers of cardiometabolic disease risk, including the serum lipid profile, blood pressure, low-grade chronic inflammation, oxidative stress, and measures of glucose homeostasis. For the purposes of this review, a "whole-fat" dairy product was defined as a product with the natural fat content, whereas a "reduced-fat" dairy product was defined as a product with some or all of the fat removed (including "low-fat" and "skim" versions). A total of 29 journal articles met our criteria for inclusion. The majority were conducted in the United States and were prospective or cross-sectional observational studies, with only 1 randomized controlled trial. Studies were consistent in reporting that whole-fat dairy products were not associated with increased measures of weight gain or adiposity. Most evidence indicated that consumption of whole-fat dairy was not associated with increased cardiometabolic risk, although a change from whole-fat to reduced-fat dairy improved outcomes for some risk factors in 1 study. Taken as a whole, the limited literature in this field is not consistent with dietary guidelines recommending that children consume preferably reduced-fat dairy products.
High-quality randomized controlled trials in children that directly compare the effects of whole-fat with reduced-fat dairy intake on measures of adiposity or biomarkers of cardiometabolic disease risk are needed to provide better quality evidence in this area. abstract_id: PUBMED:31070753 Full-Fat Dairy Food Intake is Associated with a Lower Risk of Incident Diabetes Among American Indians with Low Total Dairy Food Intake. Background: Diet plays a key role in development of diabetes, and there has been recent interest in better understanding the association of dairy food intake with diabetes. Objective: This study examined the associations of full-fat and low-fat dairy food intake with incident diabetes among American Indians, a population with a high burden of diabetes. Methods: The study included participants from the Strong Heart Family Study (SHFS), a family-based study of cardiovascular disease in American Indians, free of diabetes at baseline (2001-2003) (n = 1623). Participants were 14-86 y old at baseline and 60.8% were female. Dairy food intake was assessed using a Block food frequency questionnaire. Incident diabetes was defined using American Diabetes Association criteria. Parametric survival models with a Weibull distribution were used to evaluate the associations of full-fat and low-fat dairy food intake with incident diabetes. Serving sizes were defined as 250 mL for milk and 42.5 g for cheese. Results: We identified 277 cases of diabetes during a mean follow-up of 11 y. Reported intake of dairy foods was low [median full-fat dairy food intake: 0.11 serving/1000 kcal; median low-fat dairy food intake: 0.03 serving/1000 kcal].
Participants who reported the highest full-fat dairy food intake had a lower risk of diabetes compared to those who reported the lowest full-fat dairy food intake [HR (95% CI): 0.79 (0.59, 1.06); P-trend = 0.03, comparing extreme tertiles, after adjustment for age, sex, site, physical activity, education, smoking, diet quality, and low-fat dairy food intake]. Low-fat dairy food intake was not associated with diabetes. Conclusions: American Indians who participated in the SHFS reported low dairy food intake. Participants who reported higher full-fat dairy food intake had a lower risk of diabetes than participants who reported lower intake. These findings may be of interest to populations with low dairy food intake. abstract_id: PUBMED:25402168 Evaluation of the relative concentration of serum fatty acids C14:0, C15:0 and C17:0 as markers of children's dairy fat intake. Background/aims: Biomarkers can provide objective measures of dietary exposure, but their relationship with dietary intake in different populations needs to be characterized. This study aimed to determine the association between C14:0, C15:0 and C17:0 and children's dairy fat intake, and to ascertain whether these fatty acids can be used as biomarkers for detecting change in dairy fat intake. Methods: Data from a randomized controlled trial (114 healthy children of 4-13 years of age) were used. The intervention was a replacement of regular-fat dairy foods with reduced-fat or low-fat items. Serum fatty acid composition was measured and dairy intake was assessed via 3 × 24-hour diet recalls at baseline and at 12 weeks (the end of the intervention). Correlation analysis was used to evaluate the relationship between dietary intake and fatty acids at baseline and at week 12, and for the change in biomarkers and diet between these time points.
Results: Total dairy fat intake correlated with C14:0, C15:0 and C17:0 at baseline (n = 114; r = 0.24; r = 0.42; r = 0.25 respectively, all p < 0.05), but not at week 12. The change in the total amount of dairy fat (g/day) after 12 weeks was associated with a change in serum C15:0 (n = 59; r = 0.27; p = 0.04). Conclusions: C15:0 is a useful biomarker of dairy fat intake in children and can detect short-term changes in the absolute intake of dairy fat. abstract_id: PUBMED:35898711 Association Between Dairy Intake and Executive Function in Chinese Children Aged 6-12 Years. Association between dairy intake and executive function remains controversial, especially among children, a population with fast-developing executive functions. This study aimed to explore this topic. Additionally, we further distinguished the role of dairy intake types (full- or low-fat milk or yogurt) in this relationship. This survey included 5,138 children aged 6-12 years. Dairy intakes were assessed by validated questionnaires. Executive function was measured by the behavior rating inventory of executive function (BRIEF; Parent Version), and lower T-scores of BRIEF indices indicated superior executive function performance. Results showed that children with higher dairy intake had statistically better performance in Shift (46.58 ± 7.48 vs. 45.85 ± 7.10), Initiate (48.02 ± 8.58 vs. 47.14 ± 8.33), and Working Memory (50.69 ± 8.82 vs. 49.89 ± 8.73). In multivariate linear regression analysis, we found that for every one-unit increase in full-fat dairy intake, T-scores for Shift (β = -0.350; 95% confidence interval [CI]: -0.660, -0.039) and Initiate (β = -0.486; 95% CI: -0.845, -0.127) were decreased, and for every one-unit increase in low-fat dairy intake, the T-score for Organization of Materials (β = -0.940; 95% CI: -1.690, -0.189) was decreased.
After distinguishing dairy into milk and yogurt, we observed that only milk intake, not yogurt, was significantly associated with better executive function performance in Shift (β = -0.390; 95% CI: -0.745, -0.035) and Initiate (β = -0.509; 95% CI: -0.917, -0.101) after adjusting for potential confounding factors. This study shows that a higher intake of dairy, irrespective of fat content, is related to better executive function performance among children aged 6-12. In addition, a significantly positive relationship between dairy intake and the executive function indices Shift and Initiate was observed only for milk, not yogurt. abstract_id: PUBMED:33881583 Age and time trends of dairy intake among children and adolescents of the DONALD study. Purpose: To describe age and time trends in dietary intake of Total Dairy (TD) (g/1000 kcal Total Energy Intake) and types of dairy (weight percent of total dairy intake, w%TD) represented as Low Fat Dairy (LFD), High Sugar Dairy (HSD), Fermented Dairy (FD) and Liquid Dairy (LD) among children and adolescents in Germany. Methods: Overall, 10,333 3-day dietary records kept between 1985 and 2019 by 1275 DONALD participants (3.5-18.5 years; boys: 50.8%) were analyzed using polynomial mixed-effects regression models. Results: TD intake decreased with age (♂: linear trend p < 0.0001; ♀: linear and quadratic trend p < 0.0001), whereas FD (♀: linear, quadratic, cubic trend p ≤ 0.02) increased slightly in girls. HSD (♂: linear, quadratic, cubic trend p ≤ 0.004; ♀: linear, quadratic, cubic trend p ≤ 0.005) and LD (linear, quadratic trend p ≤ 0.0002) decreased with age. In terms of time trends, TD intake decreased in the last three decades, especially since 1995 (quadratic trend for ♂ p = 0.0007 and ♀ p = 0.004). LFD intake increased until 2010 and decreased thereafter (linear, quadratic, cubic trend p < 0.0001).
HSD decreased until 1995, then increased until 2010 and decreased again afterwards (♂: linear, quadratic, cubic trend p ≤ 0.001; ♀: linear, quadratic, cubic trend p ≤ 0.003). While FD intake increased linearly (in both ♂ and ♀: p < 0.0001), LD intake decreased (linear, quadratic trend p ≤ 0.03). Conclusion: Our results showed changes in dairy consumption patterns among children and adolescents over the past three decades, demonstrating a decrease in TD intake with age and time, and a shift from liquid to solid dairy products with a simultaneous increase in fermented dairy products, while LFD and HSD fluctuated over time. Further evaluations will examine the health significance of these consumption patterns. abstract_id: PUBMED:22909579 The impact of replacing regular- with reduced-fat dairy foods on children's wider food intake: secondary analysis of a cluster RCT. Background/objectives: The effect of changing one aspect of diet needs to be considered within the context of total diet. The study aim was to evaluate the changes in children's overall food intake following replacement of regular-fat with reduced-fat dairy foods. Subjects/methods: Secondary analysis of a cluster Randomized Controlled Trial where families received parental behavioral nutrition education to change to reduced-fat dairy foods (intervention) or reduce screen time (comparison control). Food intake was assessed via multiple 24-h recalls at baseline, week 12 (end of the intervention) and week 24. Participants were parents and their children (4-13 years, N=145) who were regular-fat dairy food consumers. The intervention effect was based on mixed model analysis adjusted for covariates, and baseline food intake. Results: At week 24, total dairy servings per day were similar between groups and servings of reduced-fat dairy foods were higher in the intervention group (0.8 servings per day, 95% confidence interval (CI) 0.5-1.1, P < 0.0001).
Fruit intake was higher in the intervention group (0.5 servings per day, 95% CI 0.02-0.9, P = 0.040), with no other statistically significant differences in food intake. In the intervention group, the contribution of core food groups to saturated fat intake was 45% at baseline and 31% at week 24, with 'extra foods' being the largest contributor to total energy (28%) and saturated fat (40%) intake at follow-up. Conclusions: Changing children's dairy food choices to reduced-fat varieties did not adversely affect overall food intake. Replacing energy-dense foods with nutrient-rich foods should be the focus of interventions to lower saturated fat intake. abstract_id: PUBMED:21430121 Changing from regular-fat to low-fat dairy foods reduces saturated fat intake but not energy intake in 4-13-y-old children. Background: Dairy foods are nutrient rich but also a source of saturated fat in the diets of children. Objective: We assessed effects on dietary intakes and health outcomes of changing dairy foods consumed by children from regular- to reduced-fat varieties. Design: This study was a 24-wk cluster randomized controlled trial in 93 families with 4-13-y-olds who were randomly allocated to parental education regarding changing to reduced-fat dairy foods (n = 76 children) or reducing screen time (n = 69 children). Study outcomes, which were measured at weeks 0, 12 (end of the intervention), and 24, included saturated fat, energy, and nutrient intakes; pentadecanoic acid and blood lipid concentrations; body mass index z score; and waist circumference. Multilevel analyses were used with adjustment for child- and family-level covariates. Results: There were no group differences in overall dairy intakes (-45 g dairy; 95% CI: -141, 51 g dairy; P = 0.356). Saturated fat intakes were 3.3 percentage points lower (P < 0.0001) in the intervention group at week 24 than in the comparison group. Pentadecanoic acid concentrations were lower at week 12 (0.03%; P = 0.012) but not at week 24.
LDL-cholesterol concentrations were not different at week 12, but LDL-cholesterol concentration was 0.15 mmol/L lower in the intervention group at week 24 than in the comparison group (P = 0.037). There were no significant group differences in total energy or adiposity measures. Regular-fat dairy foods decreased from 88% to 14% of dairy intake in the intervention group. Calcium, magnesium, and carbohydrate (percentage of energy) intakes were higher in the intervention group than in the comparison group; retinol intakes were lower in the intervention group than in the comparison group; and overall vitamin A intakes were similar between groups. Conclusion: Advice to parents to change to reduced-fat products was effective in reducing children's saturated fat intakes but did not alter energy intakes or measures of adiposity. This trial was registered in the Australia New Zealand Clinical Trials Registry as ACTRN12609000453280. abstract_id: PUBMED:34718860 Dairy intake and long-term body weight status in German children and adolescents: results from the DONALD study. Purpose: To analyse the association between intake of total dairy (TD) and types of dairy [liquid dairy (LD), solid dairy (SD), low-fat dairy (LFD), high-fat dairy (HFD), high sugar dairy (HSD), low-sugar dairy (LSD), not fermented dairy (NFD), as well as fermented dairy (FD)] and long-term changes in body weight status and composition among children and adolescents in Germany. Methods: In total, 9999 3-day dietary records collected between 1985 and 2019 by 1126 participants (3.5-18.5 years; boys: 50.8%) of the Dortmund Nutritional and Anthropometric Longitudinally Designed (DONALD) study were analysed. 
Polynomial mixed-effects regression models were used to examine whether changes (median follow-up: 9 years) in the intake of TD and dairy types (in 100 g/1000 kcal total energy intake) were associated with changes in body-mass-index-standard-deviation-score (BMI-SDS); fat mass index (FMI); fat-free mass index (FFMI) over time. Results: An individual increase in TD intake was slightly but significantly associated with an increase in BMI-SDS (β = 0.0092; p = 0.0371), FMI (β = 0.022; p = 0.0162), and FFMI (β = 0.0156; p = 0.0417) after adjustment for potential confounders. Analyses for LD (BMI-SDS: β = 0.0139; p = 0.0052; FMI: β = 0.0258; p = 0.0125; FFMI: β = 0.0239; p = 0.0052) and LSD intake (BMI-SDS: β = 0.0132; p = 0.0041, FMI: β = 0.02; p = 0.0316, FFMI: β = 0.0183; p = 0.0189) showed similar results to TD. Both processing method and fat content showed no association with body composition in our analyses. Conclusion: Increases in TD, LD, and LSD intake were associated with small but significant increases in BMI and concomitant increases in fat mass and lean mass. However, the observed changes were too small to expect biologically or physiologically meaningful effects. Overall, our results showed that policies to promote dairy intake in childhood are to be welcomed, as no negative effects on body composition are expected, while the intake of important nutrients for growth is ensured. The type of dairy does not seem to matter. abstract_id: PUBMED:35947163 Associations between total dairy, high-fat dairy and low-fat dairy intake, and depressive symptoms: findings from a population-based cross-sectional study. Purpose: Evidence on the association between dairy intake and depression is conflicting. Given numerous dietary guidelines recommend the consumption of low-fat dairy products, this study examined associations between total dairy, high-fat dairy, and low-fat dairy intake and the prevalence of elevated depressive symptoms.
Associations between dairy products, which differed in both fat content and fermentation status, and depressive symptoms were also explored. Methods: This cross-sectional study included 1600 Finnish adults (mean age 63 ± 6 years; 51% female) recruited as part of the Kuopio Ischaemic Heart Disease Risk Factor Study. Dairy intake was assessed using 4-day food records. Elevated depressive symptoms were defined as having a score ≥ 5 on the Diagnostic and Statistical Manual of Mental Disorders-III Depression Scale, and/or regularly using one or more prescription drugs for depressive symptoms. Results: In total, 166 participants (10.4%) reported having elevated depressive symptoms. Using multivariate logistic regression models, intake in the highest tertile of high-fat dairy products (OR 0.64, 95% CI 0.41-0.998, p trend = 0.04) and high-fat non-fermented dairy products (OR 0.60, 95% CI 0.39-0.92, p trend = 0.02) were associated with reduced odds for having elevated depressive symptoms. In contrast, no significant association was observed between intake of total dairy, low-fat dairy, or other dairy products and depressive symptoms. Conclusion: Higher intake of high-fat dairy and high-fat non-fermented dairy products was associated with reduced odds for having elevated depressive symptoms in middle-aged and older Finnish adults. Given the high global consumption of dairy products and the widespread burden of depression, longitudinal studies that seek to corroborate these findings are required. Answer: The examination of children's dairy intake in the United States reveals that dairy and calcium intakes are inadequate in children aged 4 to 18 years, with most children consuming high-fat varieties of milk and dairy products. Only children aged 2 to 3 years met the MyPyramid recommendation for dairy consumption, and 2- to 8-year-olds met the Adequate Intake for calcium.
However, there is a tendency for children to choose higher-fat options in dairy products such as cheese, yogurt, ice cream, and dairy-based toppings. It is suggested that nutrition guidance efforts should focus on increasing the intake of low-fat dairy products, especially through flavored low-fat milk products, to improve calcium intake in school-age children and adolescents (PUBMED:18035145). Contrastingly, a systematic review indicates that the majority of studies did not find an association between whole-fat dairy products and increased measures of weight gain, adiposity, or cardiometabolic risk in children. This suggests that the current dietary guidelines recommending children consume reduced-fat dairy products may not be consistent with the limited literature available. High-quality randomized controlled trials are needed to provide better evidence in this area (PUBMED:32119732). Furthermore, a study among American Indians, a population with a high burden of diabetes, found that higher full-fat dairy food intake was associated with a lower risk of diabetes, suggesting potential benefits for populations with low dairy food intake (PUBMED:31070753). In terms of biomarkers, C15:0 has been identified as a useful biomarker of dairy fat intake in children and can detect short-term changes in the absolute intake of dairy fat (PUBMED:25402168). Additionally, higher dairy intake, irrespective of fat content, has been related to better executive function performance among children aged 6-12, with milk intake, not yogurt, being significantly associated with better performance (PUBMED:35898711). Overall, while there is evidence of inadequate dairy and calcium intake among children in the United States, particularly with a preference for high-fat dairy products, the relationship between whole-fat dairy intake and health outcomes such as adiposity and cardiometabolic risk is not clear-cut and warrants further investigation.
Instruction: Does treatment of Helicobacter pylori with antibiotics alone heal duodenal ulcer? Abstracts: abstract_id: PUBMED:2251107 Helicobacter pylori Ever since 1983, when Marshall & Warren succeeded in cultivating Helicobacter pylori (HP) (previously known as Campylobacter pylori), a high incidence thereof has been demonstrated in patients with ulcers and non-ulcerous dyspepsia. Helicobacter pylori is easily diagnosed via microscopy, cultivation and serological tests. The bacterium may be treated with antibiotics or the ulcer drug bismuth subcitrate (De-Nol). The pathogenetic role of HP is not clear. Some studies have shown a lower recurrence rate of duodenal ulcers after treatment directed at Helicobacter pylori with the ulcer drug De-Nol, which suggests that HP plays a role in the onset of recurrence in cases of duodenal ulcer. abstract_id: PUBMED:14626564 Treatment of Helicobacter pylori infections in gastric and duodenal ulcers Helicobacter pylori is an organism that is thought to be important in the pathophysiology of ulcer disease and gastritis. Eradication of the organism is useful in the treatment of infected patients. Efficacious regimens generally include an antisecretory agent combined with two antimicrobials. The main determinant of the overall cost of treatment is the successful eradication of the microorganism. Resistance to the commonly used antibiotics can occur but it can usually be overcome with regimen changes. It is important for physicians to clearly understand when and how to test and how to select appropriate therapy for Helicobacter pylori infection.
The bacteria are accompanied by characteristic inflammatory changes in the gastric mucosa. The significance for gastritis, chronic ulceration, non-ulcer dyspepsia and carcinoma of the stomach is discussed. HP occurs in a great proportion of the population of the world and the frequency increases with age. The route of infection is unknown but faecal-oral infection is probable. Correlation between the presence of HP and the occurrence of symptoms is poor in the individual patient. The bacteria can be demonstrated histologically, cytologically, by culture, by the urease test, by the urea breath test or serologically. The bacteria are sensitive to a range of antibiotics and to bismuth, but no effective treatment is known, as the recurrence rate is high. abstract_id: PUBMED:9214051 Helicobacter pylori in 1997 In this review Helicobacter pylori (H. pylori) infection and its relation to different diseases is presented. H. pylori causes no discomfort in most infected people, though all infected persons have chronic active gastritis. The 10-year risk of peptic ulcer for people infected with H. pylori is about 10%. Randomized double-blinded trials have shown that eradication of H. pylori can cure most patients with peptic ulcer disease. Some people infected with H. pylori develop atrophic gastritis, which is a risk factor for development of gastric cancer. It is not known if H. pylori screening and eradication would have a prophylactic effect against gastric cancer. It is also unknown if persons with non-organic dyspepsia and persons in long-term treatment with proton pump inhibitors would benefit from H. pylori eradication.
In the author's hypothesis, the increase in peptic ulcer prevalence in the 19th-20th century could be attributable to the extended worldwide use of gastric tubes for secretory testing, which led to the iatrogenic transmission of pathogenic strains. Helicobacter pylori outer membrane proteins (OMP) and duodenal ulcer promoting (dupA) proteins were identified as novel virulence factors, leading to the production of pro-inflammatory cytokines, which could be future targets of therapy. There is no ideal first-line eradication regimen for the infection and, according to experts' opinion, the efficiency of these regimens has fallen gradually in recent years to unacceptably low levels; however, in the author's opinion this is a multifactorial phenomenon which cannot be generalized. As alternative drugs, the efficiency of levofloxacin, furazolidone and rifabutin has been proven by meta-analyses. Sequential and bismuth-free quadruple therapies, although highly efficient, are not yet used on a large scale. The recurrence of the infection is 2.27%/year in developed and 13.0%/year in developing countries. Spontaneous eradication occurred in 8-20% of children and 5-11% of adults. The prevalence of clarithromycin resistance is increasing worldwide. In Hungary, it has reached 10.9% in county cities, according to a national survey. In a district of Budapest called Ferencváros, the prevalence between 2005 and 2009 was 16-22%, with no increasing trend. The development of enzymatic inhibitors (urease, carbonic anhydrase and gamma-glutamyl transpeptidase), modified antibiotics and efflux pump inhibitors seem promising approaches because these compounds do not lead to resistance; however, none have yet been used in humans.
It has been shown, furthermore, that the relapse rate after a healed duodenal ulcer is much reduced when the treatment has also eradicated H. pylori from the gastric antrum. We have tried two simple and rapid tests for diagnosing H. pylori infection on biopsies taken at gastroscopy. We found a significantly larger number of cases with H. pylori among patients with peptic ulcer than among patients with normal gastroscopic findings. Culturing the biopsy proved to be successful, even though transport to the laboratory took about 24 hours. Diagnosing H. pylori has proved simple, but any practical implications of this fact remain to be seen. abstract_id: PUBMED:9033612 The treatment of Helicobacter pylori infection H. pylori causes inflammatory lesions of the stomach and duodenum. At the present time, eradication is essentially recommended in cases of gastric or duodenal ulcer. The choice of the appropriate drug depends on the characteristics of the H. pylori infection, the localization deep in the gastric mucosa, the physico-chemical properties of the gastric medium, especially the acidity which deactivates antibiotics, slow bacterial growth and the germ's sensitivity to antibiotics. Anti-infectious treatment is now based on a three-drug regimen combining an antisecretory drug (proton pump inhibitor or H2 receptor antagonist) and two antibiotics: clarithromycin associated with amoxicillin or an imidazole derivative (metronidazole or tinidazole) or tetracycline. Two antibiotics (clarithromycin, amoxicillin) as well as three anti-secretory agents (lansoprazole, omeprazole, ranitidine) have been authorized in France for three-drug regimens of 1 or 2 weeks leading to approximately 90% eradication. Special attention should be placed on the risk of resistance to antibiotics (macrolides and imidazole derivatives) and patient compliance required for successful eradication of H. pylori. Other therapeutic schemes are under assessment and a vaccine is being prepared.
Eradication of H. pylori has totally changed the treatment of gastric and duodenal ulcers, eliminating the need for long-term treatment and avoiding complications. abstract_id: PUBMED:10221108 Pathophysiology of Helicobacter pylori infections Helicobacter pylori is the causative factor of chronic gastritis and ulcers of the duodenum and stomach and relates to mucosa-associated lymphoid tissue (MALT) lymphomas. Host factors, bacterial factors and dietary habits conceivably play a part in the more severe mucosal lesions induced by H. pylori. H. pylori is a heterogeneous bacterial species, the various types differing in possession of a locus with genes coding for products that may intensify the inflammatory reaction. The cytotoxin-associated gene (cagA) protein is a marker for this locus; this protein and the vacuolating-associated cytotoxin A are clearly related to the ulcerative disease. Patients may be colonized by a mixed population of H. pylori consisting of cagA-positive and cagA-negative H. pylori. Also, a patient may be infected by a mixed population of H. pylori sensitive and resistant to metronidazole. Accordingly, a substantial part of the H. pylori population in a patient should be examined to arrive at an adequate conclusion concerning the production of H. pylori factors that may relate to a particular clinical picture and to the sensitivity to antibiotics of the H. pylori. abstract_id: PUBMED:11464622 Treatment of Helicobacter pylori infection. Whom to treat and with what? It is currently considered that Helicobacter pylori plays a major role in the genesis of peptic ulcer disease, both gastric and duodenal. When the presence of Helicobacter pylori is demonstrated in the gastric antrum of patients with ulcers, they must receive eradication treatment. Other indications for eradication treatment are patients with MALT lymphoma and patients who have undergone endoscopic resection of gastric carcinoma. The ideal treatment is a therapy that eradicates the infection in 90% of cases.
The most effective regimens are triple therapies combining a proton pump inhibitor with two antibiotics such as amoxicillin plus clarithromycin. In Mexico, metronidazole-based therapies are not recommended because resistance rates to this drug reach 70%. Treating patients with non-ulcer dyspepsia is not justified. We still recommend 14-day regimens. A good alternative is the combination of ranitidine bismuth citrate plus two antibiotics. It is possible that in the future a vaccine will be available to eradicate and prevent the infection. abstract_id: PUBMED:10632645 Does eradication of Helicobacter pylori alone heal duodenal ulcers? Background: Eradication of Helicobacter pylori infection prevents duodenal ulcer (DU) relapse, but it remains uncertain whether eradication of H. pylori alone heals duodenal ulceration. Aim: To test the hypothesis that eradication of H. pylori infection is accompanied by healing of duodenal ulcer. Methods: A total of 115 consecutive patients with endoscopically confirmed H. pylori-infected duodenal ulcer were randomly assigned to one of two groups. Group BTC patients received a 1-week course of colloidal bismuth subcitrate 220 mg b.d., tinidazole 500 mg b.d., clarithromycin 250 mg b.d. Group OBTC patients received omeprazole 20 mg daily for 4 weeks with the BTC regimen during the first week. Endoscopy with antral biopsies and 13C-urea breath test (UBT) were performed before and 4 weeks after completion of the 7-day triple or quadruple therapy. Results: Eight patients dropped out (four in BTC and four in OBTC). Duodenal ulcer healing rates on an intention-to-treat basis in BTC and OBTC were 86% (95% CI: 77-95%) and 90% (95% CI: 82-98%), respectively. The eradication rates of H. pylori on an intention-to-treat basis in BTC and OBTC were 88% (95% CI: 79-96%) and 91% (95% CI: 84-99%), respectively. There were no statistically significant differences in ulcer healing rates and eradication rates between these two groups (P > 0.05).
Epigastric pain resolved more rapidly in patients assigned to OBTC compared with those assigned to BTC. Both regimens were well tolerated, with only minor side-effects (3% of the 115 patients), and compliance was good. Conclusions: BTC is a very effective H. pylori eradication regimen. Almost all duodenal ulcers heal spontaneously after cure of H. pylori infection using a 1-week low-dose bismuth-based triple therapy. Treating duodenal ulcer with simultaneous administration of omeprazole achieves ulcer pain relief more rapidly. Answer: The treatment of Helicobacter pylori (H. pylori) infection is associated with the healing of duodenal ulcers. Eradication of H. pylori has been shown to prevent duodenal ulcer (DU) relapse, but there has been uncertainty about whether eradication alone is sufficient for the healing of duodenal ulceration. A study that aimed to test this hypothesis found that almost all duodenal ulcers healed spontaneously after the cure of H. pylori infection using a 1-week low-dose bismuth-based triple therapy, which included antibiotics (PUBMED:10632645). This suggests that the eradication of H. pylori, which typically involves antibiotic treatment, is accompanied by the healing of duodenal ulcers. Furthermore, it has been established that H. pylori is a causative factor of chronic gastritis and ulcers of the duodenum and stomach (PUBMED:10221108), and the treatment regimens often include a combination of an antisecretory agent with two antimicrobials (PUBMED:14626564). The use of antibiotics is a critical component of these treatment regimens, and successful eradication of H. pylori has been shown to change the treatment of gastric and duodenal ulcers, eliminating the need for long-term treatment and avoiding complications (PUBMED:9033612). In conclusion, the treatment of H.
pylori with antibiotics, often as part of a combination therapy that may include antisecretory agents and other medications, is indeed associated with the healing of duodenal ulcers.
Instruction: Angina-induced protection against myocardial infarction in adult and elderly patients: a loss of preconditioning mechanism in the aging heart? Abstracts: abstract_id: PUBMED:9316523 Angina-induced protection against myocardial infarction in adult and elderly patients: a loss of preconditioning mechanism in the aging heart? Objectives: The present study examined whether angina 48 h before myocardial infarction provides protection in adult and elderly patients. Background: The mortality rate for coronary artery disease is greater in elderly than in young patients. In experimental studies, ischemic preconditioning affords an endogenous form of protection against ischemia-reperfusion injury in adult but not in senescent hearts. Angina before myocardial infarction, a clinical equivalent of experimental ischemic preconditioning, has a protective effect in adult patients. It is not known whether angina before myocardial infarction is also protective in aged patients. Methods: We retrospectively verified whether antecedent angina within 48 h of myocardial infarction exerts a beneficial effect on in-hospital outcomes in adult (&lt; 65 years old, n = 293) and elderly (&gt; or = 65 years old, n = 210) patients. Results: In-hospital death was more frequent in adult patients without than in those with previous angina (10% vs. 2.6%, p &lt; 0.01), as were congestive heart failure or shock (10.7% vs. 3.3%, p &lt; 0.02) and the combined end points (in-hospital death and congestive heart failure or shock) (20.7% vs. 5.9%, p &lt; 0.0003). In contrast, the presence or absence of previous angina before acute myocardial infarction in elderly patients seems not to influence the incidence of in-hospital death (14.4% vs. 15.2%, p = 0.97), congestive heart failure or shock (11.0% vs. 11.9%, p = 0.99) and the combined end points (25.4% vs. 27.1%, p = 0.89). 
Logistic regression analysis models for in-hospital end points show that previous angina is a positive predictor in adult but not in elderly patients. Conclusions: The presence of angina before acute myocardial infarction seems to confer protection against in-hospital outcomes in adults; this effect seemed to be less obvious in elderly patients. This study suggests that the protection afforded by angina in adult patients may involve the occurrence of ischemic preconditioning, which seems to be lost in senescent patients. abstract_id: PUBMED:12878948 Protection of the human heart with ischemic preconditioning during cardiac surgery: role of cardiopulmonary bypass. Objective: Studies on the effects of ischemic preconditioning in the human heart have yielded conflicting results and therefore remain controversial. This study investigated whether ischemic preconditioning was able to protect against myocardial tissue damage in patients undergoing coronary artery surgery with cardiopulmonary bypass and on the beating heart. Methods: A total of 120 patients were studied and divided into 3 groups: group I: cardiopulmonary bypass with intermittent crossclamp fibrillation; group II: cardiopulmonary bypass with cardioplegic arrest using cold blood cardioplegia; group III: surgery on the beating heart. In each group (n = 40), patients were randomly subdivided (n = 20/subgroup) into control and preconditioning groups (1 cycle of 5 minutes of ischemia/5 minutes reperfusion before intervention). Ischemic preconditioning was induced by clamping the ascending aorta in groups I and II or by clamping the coronary artery in group III. Serial venous blood levels of troponin T were analyzed before surgery and at 1, 4, 8, 24, and 48 hours after termination of ischemia. In addition, in vitro studies using right atrial specimens obtained before the institution of cardiopulmonary bypass, and then again 10 minutes after initiation of bypass, were performed. 
The specimens were equilibrated for 30 minutes before being allocated to 1 of the following 2 groups (n = 6 per group): (1) ischemia alone (90 minutes of ischemia followed by 120 minutes of reoxygenation) or (2) preconditioning with 5 minutes of ischemia and 5 minutes of reoxygenation before the long ischemic insult. Creatine kinase leakage (U/g wet weight) and 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide reduction (mmol/l per gram wet weight), an index of cell viability, were assessed at the end of the experiment. Results: There were no perioperative myocardial infarctions or deaths in any of the groups studied. The total release of troponin T and its release profile were similar in groups I and II (patients undergoing surgery with cardiopulmonary bypass) and were unaffected by ischemic preconditioning. In contrast, the total troponin T release for the first 48 hours was significantly reduced by ischemic preconditioning in group III (patients undergoing surgery without cardiopulmonary bypass) from 3.1 +/- 0.1 to 2.1 +/- 0.2 ng·h/mL. Furthermore, the release profile, which peaked at 8 hours in the control group, shifted to the left to peak at 1 hour. In the in vitro studies, the atrial muscles obtained before cardiopulmonary bypass were protected by ischemic preconditioning (creatine kinase = 2.6 +/- 0.2 and 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide reduction = 152 +/- 24 vs creatine kinase = 5.4 +/- 0.6 and 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide reduction = 87 +/- 16 in controls; P &lt;.05); however, the muscles obtained 10 minutes after initiation of cardiopulmonary bypass were already protected (creatine kinase = 0.8 +/- 0.1 and 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide reduction = 316 +/- 38), and ischemic preconditioning did not result in further improvements.
Conclusions: Ischemic preconditioning is protective in patients undergoing coronary artery surgery on the beating heart without the use of cardiopulmonary bypass, but it offers no additional benefit when associated with bypass regardless of the mode of cardioprotection used, because cardiopulmonary bypass per se induces preconditioning. abstract_id: PUBMED:12212260 Prognostic effect of angina before acute myocardial infarction in elderly patients Objective: The present study examined whether angina 48 h before acute myocardial infarction provided protection in elderly and adult patients. The aim was to investigate the effect of ischemic preconditioning in the senescent heart. Methods: We retrospectively verified whether antecedent angina within 48 h of myocardial infarction exerted a beneficial effect on in-hospital outcomes in adult (&lt; 60 years old, n = 68) and elderly (&gt; or = 60 years old, n = 118) patients. Results: In-hospital congestive heart failure was more frequent in adult patients without than in those with previous angina (16.7% vs 2.6%, P &lt; 0.05), as were cardiogenic shock and death (23.3% vs 5.3%, P &lt; 0.05) and the combined end points (in-hospital congestive heart failure and shock or death) (40.0% vs 7.9%, P &lt; 0.01). In contrast, the presence or absence of previous angina before acute myocardial infarction in elderly patients seemed to have no obvious influence on the incidence of in-hospital congestive heart failure (17.7% vs 20.0%, P &gt; 0.05), shock and death (23.5% vs 20.3%, P &gt; 0.05) or the combined end points (41.2% vs 40.0%, P &gt; 0.05). However, elderly patients with angina showed a significantly smaller infarct size than elderly patients without angina, and long-term cardiogenic death was more frequent in elderly patients without than in those with previous angina (18.0% vs 5.9%, P &lt; 0.05).
Conclusions: The presence of angina before acute myocardial infarction seems to confer significant protection against in-hospital outcomes in adults; this effect seems to be present in elderly patients. This study suggests that the protection afforded by angina in adult and elderly patients may involve the occurrence of ischemic preconditioning. abstract_id: PUBMED:11360839 Ischemic preconditioning: implications for the geriatric heart. Ischemic preconditioning is among the most consistent and powerful modes of reducing myocardial infarct size. Although several clinical studies have suggested that the human heart can be preconditioned, controversy exists in both the experimental and clinical literature as to whether the senescent heart can be preconditioned. The authors recently reported that older patients (&gt; or = 60 years of age) in the Thrombolysis in Myocardial Infarction-4 study appeared to benefit from a history of angina prior to acute myocardial infarction. This observation may lead to a clinical counterpart to successful preconditioning in the older heart. abstract_id: PUBMED:9375994 Ischaemic preconditioning of myocardium. Myocardium has the innate potential to adapt to transient sublethal ischaemia so that it becomes more resistant to a subsequent, more severe, ischaemic insult. The response is called ischaemic preconditioning and protection of the myocardium is manifested by a slowing of adenosine triphosphate decline, limitation of ischaemic necrosis and a reduction in dysrhythmia severity. Protection conferred by preconditioning occurs in two distinct temporal phases. An early phase of protection is observed immediately but wanes within two to three hours (classic preconditioning). This is followed many hours later by a second window of protection (delayed preconditioning). The cellular mechanisms underpinning both forms of protection are currently being intensively investigated. 
There is evidence that human myocardium can be preconditioned ex vivo and also in situ during elective procedures such as angioplasty and coronary artery bypass grafting. Furthermore, evidence points to the possibility that preconditioning occurs naturally in some ischaemic syndromes, particularly warm-up angina and preinfarction angina. Ultimately, investigation of the mechanisms of preconditioning may contribute to the development of rational therapies for protecting the ischaemic myocardium and, perhaps more importantly, enhance our understanding of the molecular basis of ischaemic heart disease. abstract_id: PUBMED:16265313 Ischemic preconditioning and preinfarction angina in the clinical arena. In animals, brief episodes of ischemia before a total coronary occlusion protect the heart and result in a smaller myocardial infarct size. In humans, episodes of angina before acute myocardial infarction might also confer a preconditioning or protective effect; numerous studies show that preinfarction angina is associated with smaller infarcts. Preinfarction angina is also associated with reductions in ventricular dysfunction, arrhythmias and incidence of no-reflow phenomena, and, in some cases, improved survival. The protective effect of preconditioning in humans is characterized by marked individual variations and seems to be attenuated in women, people with diabetes and the elderly. Exercise seems to be an important way to induce preconditioning in humans and preserves it in the elderly. abstract_id: PUBMED:8905283 Ischaemic preconditioning and cardiac surgery. Objective: This review discusses the phenomenon of ischaemic preconditioning and its potential application to cardiac surgery. The biology of ischaemic preconditioning is explained and the more limited evidence suggesting that the human heart can be preconditioned is discussed. 
Methods And Results: It is now accepted that the heart is capable of short-term rapid adaptation in response to brief ischaemia, so that during a subsequent, more severe ischaemic insult myocardial necrosis is delayed; this is ischaemic preconditioning. The infarct-delaying properties of ischaemic preconditioning have been observed in all species studied. Five minutes of ischaemia is enough to initiate preconditioning and the protective period lasts for 1-2 h. Laboratory experiments have demonstrated that the stimulation of adenosine receptors initiates preconditioning and the intracellular signal transduction mechanisms involve protein kinase C and ATP-dependent potassium channels, although there may be some differences between species. An analysis of studies on myocardial infarction in humans has revealed that some patients reporting angina in the days before infarction have a better outcome, and this may be due to the ischaemia causing preconditioning. More direct evidence has come from an investigation of patients undergoing percutaneous transluminal angioplasty in whom the ST-segment changes induced by balloon inflation were more marked during the first inflation than the second. In patients undergoing coronary artery bypass grafting the decline in ATP content during the first 10 min of ischaemia was reduced in patients subjected to a brief preconditioning protocol. Conclusions: Preconditioning is a powerful and reproducible method of protecting the myocardium from irreversible ischaemic injury. There is now evidence indicating that the human heart can be preconditioned. However, more trials are necessary in patients undergoing cardiac surgery before the role of preconditioning as a means of myocardial protection can be assessed. abstract_id: PUBMED:8650986 Myocardial protection by preconditioning. Experimental and clinical significance.
Short periods of ischemia render the myocardium more resistant to a subsequent prolonged coronary occlusion, resulting in a reduction of infarct size. This cardioprotective mechanism has been called ischemic preconditioning. Acute myocardial ischemia results in a rapid decline of high energy phosphates. After short periods of ischemia, the high energy phosphate levels are better preserved and the increase of lactate is slower during the prolonged subsequent ischemia in the preconditioned group compared to control. The duration of ischemia needed for induction of the protective effect is 2.5 min in dogs and 20 min in our swine model. In porcine myocardium the protection is lost about 1 h after induction and renewal is not possible at that time, but it is possible 24 h later. In rabbits and dogs, but not in pigs, a late protection 24 h after induction of preconditioning has been shown ("second window of protection"). Adenosine or adenosine A1 receptor agonists, muscarinic M2 receptor agonists, alpha 1-receptor agonists and bradykinin B2 receptor agonists, as well as opening of the K+ATP-channel, can substitute for ischemia in the induction of protection. Activation of protein kinase C results in protection in rats and rabbits, but not in dogs or pigs. Inhibition of protein kinase C translocation or kinase activity results in a loss of the protection induced by preceding ischemia. After blockade of the K+ATP-channel the protection induced by adenosine A1 receptor activation is lost. Therefore, opening of the K+ATP-channel is a prerequisite for induction of the protective effect. Inhibition of the inhibitory G-protein by pertussis toxin has been shown to result in a loss of protection; therefore, the Gi-protein seems to be involved in the evolution of protection. In humans, during coronary angioplasty, anginal pain and lactate production during a second balloon occlusion are diminished without any change in regional myocardial perfusion.
This adaptation is inhibited by blockade of the K+ATP-channel or of the adenosine A1 receptor. Intermittent cross-clamping before a longer occlusion during open-heart surgery results in a better preservation of high energy phosphates compared to controls without preceding short ischemia. These observations support the hypothesis that ischemic preconditioning also occurs in humans. Angina pectoris preceding the myocardial infarction may have preconditioned the human heart against the subsequent myocardial infarction, but studies concerning the influence of angina pectoris on short-term outcome after thrombolysis are conflicting. In the future, ischemic preconditioning or preconditioning with drugs may prolong the duration of ischemia tolerated without necrosis and improve the prognosis of patients by reducing the infarct size. abstract_id: PUBMED:12473980 Ischemic preconditioning. Experimental facts and clinical perspective. Brief periods of non-lethal ischemia and reperfusion render the myocardium more resistant to subsequent ischemia. This adaption occurs in a biphasic pattern: the first being active immediately and lasting for 2-3 hrs (early preconditioning), the second starting at 24 hrs until 72 hrs after the initial ischemia (delayed preconditioning) and requiring genomic activation with de novo protein synthesis. Early preconditioning is more potent than delayed preconditioning in reducing infarct size; delayed preconditioning also attenuates myocardial stunning. Early preconditioning depends on the ischemia-induced release of adenosine and opioids and, to a lesser degree, also bradykinin and prostaglandins. These molecules activate G-protein coupled receptors, initiate the activation of KATP channels and generation of oxygen radicals, and stimulate a series of protein kinases with essential roles for protein kinase C, tyrosine kinases and members of the MAP kinase family. 
Delayed preconditioning is triggered by a similar sequence of events, but in addition essentially depends on eNOS-derived NO. Both early and delayed preconditioning can be pharmacologically mimicked by exogenous adenosine, opioids, NO and activators of protein kinase C. Newly synthesized proteins associated with delayed preconditioning comprise iNOS, COX-2, manganese superoxide dismutase and possibly heat shock proteins. The final mechanism of protection by preconditioning is as yet unknown; energy metabolism, KATP channels, the sodium-proton exchanger, stabilisation of the cytoskeleton and volume regulation will be discussed. For ethical reasons, evidence for ischemic preconditioning in humans is hard to provide. Clinical findings that parallel experimental ischemic preconditioning are reduced ST-segment elevation and pain during repetitive PTCA or exercise tests, a better prognosis of patients in whom myocardial infarction was preceded by angina, and reduced serum markers of myocardial necrosis after preconditioning protocols during cardiac surgery with cardiac arrest. The most promising approach to apply principles of ischemic preconditioning therapeutically appears to be the pharmacological recruitment of delayed protection, as recently demonstrated with intravenous nitroglycerine in patients undergoing PTCA 24 hrs later. abstract_id: PUBMED:8805806 Ischemic preconditioning against infarction: its mechanism and clinical implications. Exposing the myocardium to brief ischemia followed by reperfusion enhances myocardial resistance to infarction from a subsequent sustained ischemia. This phenomenon, termed preconditioning, is most likely to be triggered by adenosine A1 receptor activation, and the dependence of the preconditioning effect on the duration of preconditioning ischemia and the number of its repetitions is probably mediated through the interstitial adenosine level achieved by each preconditioning protocol.
Our studies support the theory that activation of protein kinase C subsequent to stimulation of the A1 receptor enhances myocardial ischemic tolerance. The ATP-sensitive potassium channel may be involved in preconditioning, but its relation with protein kinase C is unclear, and the relative importance of this channel might be species dependent. The mechanism of preconditioning needs to be further elucidated in animal models and preconditioning in the human heart needs to be further characterized before we can adapt its biochemical basis to clinical therapy. Answer: The concept of angina-induced protection against myocardial infarction, often referred to as ischemic preconditioning, has been studied in both adult and elderly patients. Ischemic preconditioning is a phenomenon where brief episodes of ischemia make the heart more resistant to subsequent, more severe ischemic insults (PUBMED:9375994). This protective effect has been observed in various species and is believed to involve several cellular mechanisms, including the activation of protein kinase C and ATP-dependent potassium channels (PUBMED:8650986). Clinical studies have suggested that angina prior to myocardial infarction can serve as a natural form of ischemic preconditioning, potentially leading to smaller infarct sizes and better outcomes (PUBMED:16265313). However, there is controversy regarding the effectiveness of this mechanism in the aging heart. In adult patients (younger than 65 years old), the presence of angina within 48 hours before myocardial infarction has been associated with lower in-hospital mortality, congestive heart failure, or shock, and combined endpoints of in-hospital death and congestive heart failure or shock (PUBMED:9316523). This suggests that ischemic preconditioning is effective in adults. 
In contrast, for elderly patients (65 years old or older), the presence of angina before myocardial infarction does not seem to influence in-hospital death, congestive heart failure, or shock, and the combined endpoints (PUBMED:9316523). However, another study indicated that while the immediate in-hospital outcomes for elderly patients were not significantly influenced by prior angina, there was a noted reduction in infarct size and long-term cardiogenic death in elderly patients with previous angina compared to those without (PUBMED:12212260). This suggests that some protective effects of ischemic preconditioning may still be present in the elderly, although possibly attenuated. The discrepancy in the protective effects of ischemic preconditioning between adult and elderly patients may imply a loss or reduction of the preconditioning mechanism with aging. This could be due to age-related changes in the heart's response to ischemic stress or alterations in the signaling pathways involved in preconditioning (PUBMED:11360839).
Instruction: Do traditional symptoms of hypothyroidism correlate with biochemical disease? Abstracts: abstract_id: PUBMED:9294788 Do traditional symptoms of hypothyroidism correlate with biochemical disease? Objective: Hypothyroidism often remains undetected because of the difficulty associating symptoms with disease. To determine the relation between symptoms and biochemical disease, we assessed symptoms and serum thyroid function tests, concurrently, for patients with and without hypothyroidism. Design: Cross-sectional study. Setting/patients: Seventy-six newly diagnosed case patients with overt hypothyroidism and 147 matched control patients identified through outpatient laboratories in Michigan and Colorado. Measurements And Main Results: Patient symptoms were assessed by questionnaire. Case patients reported a higher proportion of hypothyroid symptoms than did control patients (30.2% vs 16.5%, p &lt; .0001). Univariate analysis identified three significant predictors of an elevated level of thyroid-stimulating hormone (TSH) (p &lt; .05), and 13 symptoms which, when they had changed in the past year, were reported more often by case patients with hypothyroidism than by control patients (p &lt; .005). Individuals reporting changes in 7 or more symptoms were significantly more likely to have hypothyroidism (likelihood ratio [LR] = 8.7, 95% confidence interval [CI] 3.8, 20.2); those reporting changes in 2 or fewer symptoms were less likely to have hypothyroidism (LR = 0.5, 95% CI 0.4, 0.7). Conclusions: In this sample, the number of hypothyroid symptoms reported was directly related to the level of TSH. The association was stronger when more symptoms were reported. Symptoms that had changed in the past year were more powerful than symptoms reported present at the time of testing. This suggests that traditional symptoms are valuable when deciding which patients to test for hypothyroidism. 
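As an illustrative aside (not part of the abstract above), the likelihood ratios it reports can be converted into post-test probabilities using Bayes' theorem in odds form: post-test odds = pre-test odds x LR. The sketch below assumes a hypothetical pre-test probability of 5%, chosen purely for illustration; only the LR values (8.7 for seven or more changed symptoms, 0.5 for two or fewer) come from the abstract.

```python
def post_test_probability(pre_test_prob: float, lr: float) -> float:
    """Apply Bayes' theorem in odds form: post-test odds = pre-test odds * LR."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# Hypothetical pre-test probability of 5% in a screening population:
high_symptom = post_test_probability(0.05, 8.7)  # >= 7 changed symptoms, LR = 8.7
low_symptom = post_test_probability(0.05, 0.5)   # <= 2 changed symptoms, LR = 0.5
print(round(high_symptom, 3))  # -> 0.314
print(round(low_symptom, 3))   # -> 0.026
```

Under these assumed numbers, an LR of 8.7 raises a 5% pre-test probability to about 31%, while an LR of 0.5 lowers it to about 2.6%, mirroring the abstract's finding that the number of changed symptoms meaningfully shifts the probability of biochemical hypothyroidism.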
abstract_id: PUBMED:35462920 New Therapeutic Horizon of Graves' Hyperthyroidism: Treatment Regimens Based on Immunology and Ingredients From Traditional Chinese Medicine. Graves' disease is an autoimmune disease characterized by goiter and hyperthyroidism, and 25% of patients develop Graves' orbitopathy (GO). Traditional treatment options, such as antithyroid drugs, radioiodine or thyroidectomy, have remained largely unchanged over the past 70 years. For many patients, there is a high rate of recurrence after antithyroid drugs and lifelong hypothyroidism after ablation and thyroidectomy. The symptoms and quality of life of some patients have not been effectively improved. The clinical demand for new therapeutic regimens, coupled with a deeper understanding of the pathophysiology and immunobiology of Graves' disease, has led to the emergence of several new therapeutic ideas, including biologics, small molecule peptides, immunomodulators and teprotumumab, a specific antibody targeting IGF-1R. In addition, elements of traditional Chinese medicine (TCM) have attracted increasing interest in modern medicine, because some effective components have been successfully used in the treatment of autoimmune diseases. Based on the pathophysiology and efficacy of clinical management and treatment in Graves' hyperthyroidism, here we review the new strategies under investigation, summarize the effective components of TCM used for Graves' hyperthyroidism, and explore their mechanisms. These therapies have opened a new window for the treatment of Graves' disease, but their exact mechanisms and future research directions still need to be explored. abstract_id: PUBMED:36814580 Hypothyroidism: The difficulty in attributing symptoms to their underlying cause. Common symptoms of overt hypothyroidism are non-specific and include fatigue, lethargy, and dry skin.
Although the diagnosis is considered to be straightforward, no single symptom can be used to identify patients with overt hypothyroidism, while many patients with subclinical hypothyroidism are asymptomatic. A large population-based study on the spectrum of symptoms in subclinical hypothyroidism showed similar rates of thyroid disease-related symptoms compared with euthyroid subjects, while the TSH concentration had no impact on symptom score. Together, these findings make it challenging to attribute symptoms to their underlying cause. This is also true in the case of unexplained persistent symptoms in levothyroxine-treated patients. Although levothyroxine is generally considered a life-long replacement therapy, successful thyroid hormone discontinuation resulting in euthyroidism has been reported in approximately one third of patients. Thus, we overtreat patients with (subclinical) hypothyroidism, highlighting the importance of reliable diagnostic criteria. The diagnostic process, including the implementation of robust TSH and FT4 reference intervals, is especially challenging in specific situations including aging, pregnancy, non-thyroidal illness, and central hypothyroidism. There is a clear need for improved adherence to current guidelines from scientific societies and for willingness to manage symptoms without a clear pathological correlate, especially in the case of mild TSH elevations. This review highlights recent literature on this topic and offers some practice points. abstract_id: PUBMED:34027377 Persisting symptoms in patients with Hashimoto's disease despite normal thyroid hormone levels: Does thyroid autoimmunity play a role? A systematic review. Objective: Patients with hypothyroidism due to Hashimoto's disease (HD) may experience persisting symptoms despite normal serum thyroid hormone (TH) levels. Several hypotheses have been postulated to explain these persisting symptoms. We hypothesized that thyroid autoimmunity may play a role.
Design: A systematic literature review. Methods: A PubMed search was performed to find studies investigating the relation between the presence of thyroid autoimmunity and (persisting) symptoms. Included studies were critically appraised by the Newcastle-Ottawa Scale (NOS) and then subdivided into (A) disease-based studies, comparing biochemically euthyroid patients with HD, and euthyroid patients with non-autoimmune hypothyroidism or euthyroid benign goitre, and (B) (general) population-based studies. Due to different outcome measures among all studies, meta-analysis of data could not be performed. Results: Thirty out of 1259 articles found in the PubMed search were included in this systematic review. Five out of seven disease-based studies found an association between thyroid autoimmunity and symptoms or lower quality of life (QoL). Sixteen of 23 population-based studies found a comparable positive association. In total, the majority of included studies reported an association between thyroid autoimmunity and persisting symptoms or lower QoL in biochemically euthyroid patients. Conclusion: (Thyroid) autoimmunity seems to be associated with persisting symptoms or lower QoL in biochemically euthyroid HD patients. As outcome measures differed among the included studies, we propose the use of similar outcome measures in future studies. To prove causality, a necessary next step is to design and conduct intervention studies, for example immunomodulation vs. placebo, preferably in the form of a randomized controlled trial, with symptoms and QoL as main outcomes. abstract_id: PUBMED:37915906 Management of traditional risk factors for the development and progression of chronic kidney disease. Chronic kidney disease (CKD) and its downstream complications (i.e. cardiovascular) are a major source of morbidity worldwide. Additionally, deaths due to CKD or CKD-attributable cardiovascular disease account for a sizeable proportion of global mortality.
However, the advent of new pharmacotherapies, diagnostic tools, and global initiatives is directing greater attention to kidney health in the public health agenda, including the implementation of effective strategies that (i) prevent kidney disease, (ii) provide early CKD detection, and (iii) ameliorate CKD progression and its related complications. In this Review, we discuss major risk factors for incident CKD and CKD progression categorized across cardiovascular (i.e. hypertension, dyslipidemia, cardiorenal syndrome), endocrine (i.e. diabetes mellitus, hypothyroidism, testosterone), lifestyle (i.e. obesity, dietary factors, smoking), and genetic/environmental (i.e. CKDu/Mesoamerican nephropathy, APOL1, herbal nephropathy) domains, as well as scope, mechanistic underpinnings, and management. abstract_id: PUBMED:35573548 Clinical and Biochemical Correlation With Cytomorphological Findings of Lymphocytic Thyroiditis: An Experience at a Tertiary Centre in the Himalayan Foothills. Introduction: Lymphocytic thyroiditis is an autoimmune disorder and one of the major causes of hypothyroidism. On cytomorphology, it is characterized by follicular destruction by lymphocytes with elevated biochemical markers, including a panel of autoantibodies against thyroid antigens. This study aimed to determine the prevalence of various cytological features of lymphocytic thyroiditis and their correlation with clinical presentation and biochemical parameters. Materials and methods: We conducted a hospital-based cross-sectional study of 105 patients diagnosed with lymphocytic thyroiditis on cytology at our tertiary care center in the Himalayan foothills from December 2018 to December 2019. We recorded and analyzed baseline demographic characteristics, clinical features, and biochemical parameters to assess correlations between cytological findings and grades. Results: The study included 105 patients with lymphocytic thyroiditis (90 females, 15 males).
The study population age ranged from 11 years to 80 years, with the disease most common in patients aged 21 to 40 years. Grade II was the most common cytological presentation (n=65, 62%). Thyroid-stimulating hormone levels were elevated in 33.3% of cases, and anti-thyroid peroxidase levels were elevated in all 25 cases for whom data were available (p>0.05). Conclusion Cytological diagnosis of lymphocytic thyroiditis was compatible in all cases in the study. However, cytological grading did not correlate with the clinical presentation and biochemical parameters. The diagnosis of lymphocytic thyroiditis could be missed if clinicians use clinical findings and biochemical parameters alone.
Due to the use of different outcome measures among all the studies, a meta-analysis of the data could not be performed. Conclusion: Goiter symptoms, which are an item on the ThyPRO scales, are the most frequent local symptoms in HT patients, and include neck pain, voice changes, throat discomfort, and dysphagia. These local symptoms should be identified in the clinic and included in the early diagnosis and management of HT, as well as evaluated further to understand their relevance in the pathogenesis of HT. abstract_id: PUBMED:11174048 Depressive symptoms in hypothyroid disorder with some observations on biochemical correlates. Lumbar punctures and ratings of depressive symptoms were done in hypothyroid patients before and during L-thyroxine therapy. Before treatment, the most prominent symptoms were concentration difficulties, lassitude, and reduced sexual interest. All patients suffered from sleep disturbances. Suicidal thoughts did not occur at all. Inner tension was negatively correlated with the anxiogenic cholecystokinin tetrapeptide (CCK-4) in the cerebrospinal fluid (CSF), while reduced sexual interest was negatively correlated with CSF tryptophan. Furthermore, failing memory correlated negatively with T3 as well as T4 in serum. A positive correlation was found between failing memory and serum TSH. All patients improved significantly during treatment. No biochemical correlates were found. In conclusion, hypothyroidism is associated with major depressive symptoms. CSF CCK-4 and tryptophan, as well as serum thyroid hormones, may constitute biochemical correlates for some of these symptoms. abstract_id: PUBMED:32733376 Clinical and Biochemical Characteristics of Severe Hypothyroidism Due to Autoimmune Thyroiditis in Children. Introduction: In the majority of countries, autoimmune thyroiditis is the main cause of acquired hypothyroidism in children. Typically, the natural course of the disease is initially insidious and the diagnosis is incidental. 
There are some children who develop severe hypothyroidism without a proper diagnosis. The aim of the study was to analyze the clinical and biochemical profiles of children with severe primary hypothyroidism due to autoimmune thyroiditis. Materials and Methods: We analyzed the records of 354 patients diagnosed between 2009 and 2019 with autoimmune thyroiditis. Only patients with TSH above 100 μIU/mL, associated with decreased free thyroxine and the presence of antithyroid antibodies, were enrolled in the study. The analysis encompassed clinical symptoms, thyroid and biochemical status, bone age, and imaging. Results: Twenty-six children were enrolled in the study. The mean age at diagnosis was 10.26 ± 3.3 years, with a female preponderance of 1.8:1. The most frequent symptoms were growth impairment (77%) and weight gain (58%). Goiters were present in 42% of patients. Less common findings were pituitary hypertrophy (four patients) and hypertrichosis (three patients). Median values at the time of diagnosis were TSH 454.3 μIU/mL (295.0-879.4), anti-TPO antibodies 1,090 IU/ml, and anti-Tg antibodies 195 IU/ml. Anti-TSHR antibodies were evaluated only in six out of the 26 patients. The characteristic biochemical profile was correlated with the grade of hypothyroidism, and the strongest correlations were found with CBC parameters, lipid profile, aminotransferases, and creatinine. Conclusion: In children with severe hypothyroidism, the most sensitive symptoms are growth arrest and weight gain despite the fact that, in some children, the auxological parameters at presentation could be within normal values for the population. The specific biochemical profile closely correlates to the severity of thyroid hormone deficiency and involves mostly erythropoiesis, liver function, and kidney function. Pituitary enlargement should be considered in each child with severe hypothyroidism.
It is necessary to conduct prospective studies evaluating the actual frequency of anti-TSHR antibodies and pituitary enlargement in children with extremely high TSH, especially those presenting without goiters. abstract_id: PUBMED:29863435 A case series study of hypopituitarism in older patients with and without gastrointestinal symptoms. Objectives: Some older individuals who present with gastrointestinal symptoms as their chief complaint were ultimately diagnosed with hypopituitarism instead of gastrointestinal diseases. The aim of this study was to find the characteristics of biochemical indicators in these patients so as to reduce early misdiagnosis. Methods: We conducted a retrospective review of 45 patients with hypopituitarism who were at least 60 years of age. Two groups were included: group of hypopituitarism patients with gastrointestinal symptoms (Group G) included 23 patients with gastrointestinal symptoms and group of hypopituitarism patients without gastrointestinal symptoms (Group N) included 22 patients without these symptoms. In Group G, we investigated the prevalence of different gastrointestinal symptoms, the response of these symptoms to treatment, the occurrence of electrolyte disorders, and target gland dysfunction. Then, we compared the electrolyte and target gland function indices between the two groups. Results: Nausea and vomiting were the most common complaints, accounting for 69.57% of the gastrointestinal symptoms in Group G. Hyponatremia was the most common electrolyte disorder, occurring in 72.86% (n = 18) of patients in Group G. Hypoadrenalism and hypothyroidism were reported by 69.57% and 60.78% of patients, respectively, in Group G. None of the gastrointestinal symptoms were relieved by 4 weeks of treatment with antacid and motility drugs. 
As mentioned, 18 patients also experienced refractory hyponatremia during early treatment including regular sodium supplements; however, their gastrointestinal symptoms and hyponatremia improved after only a week of treatment for hypopituitarism. Regarding the biochemical indicators, only serum sodium and cortisol in Group G were statistically lower compared with those in Group N (P < .05). Conclusion: Nausea and vomiting were the most common gastrointestinal symptoms in older patients with hypopituitarism, which were associated with lower serum sodium and cortisol. In addition, we hope to alert gastroenterologists that serum sodium and cortisol should be tested when evaluating older patients with unexplained gastrointestinal symptoms.
Many patients with subclinical hypothyroidism are asymptomatic, and a large population-based study showed similar rates of thyroid disease-related symptoms compared with euthyroid subjects, with TSH concentration having no impact on symptom score. This makes it challenging to attribute symptoms to their underlying cause (PUBMED:36814580). Additionally, patients with hypothyroidism due to Hashimoto's disease may experience persisting symptoms despite normal serum thyroid hormone levels, and thyroid autoimmunity seems to be associated with persisting symptoms or lower quality of life in biochemically euthyroid patients (PUBMED:34027377). In conclusion, while traditional symptoms of hypothyroidism do correlate with biochemical disease, the relationship is complex due to the non-specific nature of the symptoms and the presence of symptoms in individuals with normal thyroid function tests.
Instruction: The Gatehouse Project: can a multilevel school intervention affect emotional wellbeing and health risk behaviours? Abstracts: abstract_id: PUBMED:15547059 The Gatehouse Project: can a multilevel school intervention affect emotional wellbeing and health risk behaviours? Study Objective: The aim of this study was to determine the effect of a multilevel school based intervention on adolescents' emotional wellbeing and health risk behaviours. Design: School based cluster randomised controlled trial. Students were surveyed using laptop computers, twice in the first year of intervention and annually thereafter for a further two years. Setting: Secondary schools. Participants: 2678 year 8 students (74%) participated in the first wave of data collection. Attrition across the waves was less than 3%, 8%, and 10% respectively with no differential response rate between intervention and control groups at the subsequent waves (98% v 96%; 92% v 92%, and 90% v 89% respectively). Main Results: A comparatively consistent 3% to 5% risk difference was found between intervention and control students for any drinking, any and regular smoking, and friends' alcohol and tobacco use across the three waves of follow up. The largest effect was a reduction in the reporting of regular smoking by those in the intervention group (OR 0.57, 0.62, and 0.72 at waves 2, 3, and 4 respectively). There was no significant effect of the intervention on depressive symptoms, and social and school relationships. Conclusions: While further research is required to determine fully the processes of change, this study shows that a focus on general cognitive skills and positive changes to the social environment of the school can have a substantial impact on important health risk behaviours. abstract_id: PUBMED:38489134 Harms from Other People's Gambling: Associations with an Individual's Own Gambling Behaviours, Health Risk Behaviours, Financial Problems, General Health, and Mental Wellbeing. 
This study aimed to explore relationships between being an 'affected other' (AO) and an individual's own gambling, health risk behaviours, financial problems, poor general health, and low mental wellbeing. A cross-sectional survey using representative and supplementary convenience samples was undertaken with 1234 residents of a British Island aged 16 + years. Being an AO was measured as having a partner or relative who has been gambling regularly in the past 12 months. PGSI was used to assess gambling severity. Health risk behaviours included: poor diet, low physical activity, daily smoking, and binge drinking. Other measures included experiencing financial problems, poor general health, and low mental wellbeing. Analyses were performed using χ2 and binary logistic regressions. 11.0% of participants were AOs. AOs were more likely to gamble at at-risk/problem-gambling levels and experience financial problems compared to those who were not affected others. The significant relationship between being an AO and low mental wellbeing was mediated by experiencing financial problems. The relationship between being an AO and engaging in two or more health risk behaviours was no longer significant after controlling for sociodemographics and an individual's own gambling. The relationship between being an AO and poor general health was no longer significant after controlling for sociodemographics, health risk behaviours and an individual's own gambling. AOs experienced risks to health and wellbeing, with findings not limited to AOs specifically with a relationship to an individual with problem-gambling. Therefore, support for AOs should be more widely available, aiming to address AOs' needs holistically. abstract_id: PUBMED:34879321 Perceived changes in lifestyle behaviours and in mental health and wellbeing of elementary school children during the first COVID-19 lockdown in Canada. 
Objectives: The closure of schools to prevent the spread of COVID-19 prompted concerns of deteriorating lifestyle behaviours, mental health, and wellbeing of children, particularly those in socioeconomically disadvantaged settings. We assessed changes in lifestyle behaviours (physical activity, screen time, eating habits and bed/wake-up times), mental health and wellbeing during the first lockdown in Spring 2020 as perceived by school children from disadvantaged settings, and examined determinants of these changes. Study Design: Cross-sectional study. Methods: We surveyed 1095 grade 4 to 6 students (age 9-12 years) from 20 schools in socioeconomically disadvantaged communities in northern Canada. Students reported on changes in lifestyle behaviours, mental health and wellbeing during the lockdown. Determinants of these perceived changes were examined in multivariable regression models. Results: A majority of students reported declines in physical activity, having late bed/wake-up times, and modest improvements in mental health and wellbeing. Many students reported increases rather than decreases in screen time and snacking. Positive attitudes toward being active, eating healthy, going to sleep on time and being healthy were strongly associated with maintaining healthy lifestyle behaviours during the lockdown. Positive attitudes toward active and healthy living and healthy lifestyle behaviours were associated with maintaining positive mental health and wellbeing during the lockdown. Conclusions: The considerable changes in lifestyle behaviors, superimposed on the pre-existing burden of unhealthy lifestyle behaviours, put this generation of children at increased risk for future chronic disease. Findings call for effective health promotion of active and healthy lifestyles to benefit both physical and mental health. 
abstract_id: PUBMED:25834626 Enhancing the Emotional and Social Skills of the Youth to Promote their Wellbeing and Positive Development: A Systematic Review of Universal School-based Randomized Controlled Trials. Background: The acquisition of social and emotional skills is associated with positive youth development, character education, healthy lifestyle behaviours, reduction in depression and anxiety, conduct disorders, violence, bullying, conflict, and anger. School-based interventions aimed to enhance these skills go beyond a problem-focused approach to embrace a more positive view of health; they could also improve the youth's wellbeing. Aim: To describe the main features and to establish the effectiveness of universal school-based RCTs for children and the youth, aimed to promote their psychosocial wellbeing, positive development, healthy lifestyle behaviours and/or academic performance by improving their emotional and social skills. Methods: Systematic review by searching for relevant papers in PubMed/Medline with the following key words: "mental health" OR "wellbeing" OR "health promotion" OR "emotional learning" OR "social learning" OR "emotional and social learning" OR "positive youth development" OR "life skills" OR "life skills training" AND "school". The search interval was set from January 2000 to April 2014. Results: 1,984 papers were identified through the search. Of these, 22 RCTs were included. While most interventions were characterized by a whole-school approach and SAFE practices, only a few studies used standardized measures to assess outcomes or collected follow-up data after ≥ 6 months. The results of all these trials were examined and discussed. Conclusion: Universal school-based RCTs to enhance emotional and social skills showed controversial findings, mainly due to some methodological issues. Nevertheless, they show promising outcomes that are relatively far-reaching for children and youth wellbeing and therefore are important in the real world.
abstract_id: PUBMED:37334330 The impact of the COVID-19 pandemic on inequalities in lifestyle behaviours and mental health and wellbeing of elementary school children in northern Canada. Objectives: The COVID-19 pandemic has negatively affected children's lifestyle behaviours and mental health and wellbeing, and concerns have been raised that COVID-19 has also increased health inequalities. No study to date has quantified the impact of COVID-19 on health inequalities among children. We compared pre-pandemic vs. post-lockdown inequalities in lifestyle behaviours and mental health and wellbeing among children living in rural and remote northern communities. Methods: We surveyed 473 grade 4-6 students (9-12 years of age) from 11 schools in rural and remote communities in northern Canada in 2018 (pre-pandemic), and 443 grade 4-6 students from the same schools in 2020 (post-lockdown). The surveys included questions on sedentary behaviours, physical activity, dietary intake, and mental health and wellbeing. We measured inequality in these behaviors using the Gini coefficient, a unitless measure ranging from 0 to 1 with a higher value indicating greater inequality. We used temporal changes (2020 vs. 2018) in Gini coefficients to assess the impact of COVID-19 on inequalities in lifestyle behaviours and mental health and wellbeing separately among girls and boys. Results: Inequalities in all examined lifestyle behaviours increased between 2018 and 2020. Inequalities in watching TV, playing video games, and using a cell phone increased among girls, while inequalities in playing video games, using computers and tablets, and consumption of sugar, salt, saturated fat and total fat increased among boys. Changes in inequalities in mental health and wellbeing were small and not statistically significant. Conclusion: The findings suggest that the COVID-19 pandemic has exacerbated inequalities in lifestyle behaviours among children living in rural and remote northern communities. 
If not addressed, these differences may translate into exacerbated inequalities in future health. The findings further suggest that school health programs can help mitigate the negative impact of the pandemic on lifestyle behaviours and mental health and wellbeing. abstract_id: PUBMED:35329276 Barriers and Facilitators to Sustaining School-Based Mental Health and Wellbeing Interventions: A Systematic Review. Despite an increasing focus on schools to deliver support and education around mental health and wellbeing, interventions are often not sustained beyond initial funding and research. In this review, the barriers and facilitators to sustaining mental health and wellbeing interventions in schools are explored. A systematic review was conducted using keywords based on the terms: 'sustainability', 'school', 'intervention', 'mental health', and 'emotional wellbeing'. Six online databases (PsycINFO, Embase, MEDLINE, British Education Index, ERIC, and Web of Science) and relevant websites were searched resulting in 6160 unique references. After screening, 10 articles were included in the review and extracted data were qualitatively synthesized using thematic analysis. Data synthesis led to the identification of four sustainability factors at the school level (school leadership, staff engagement, intervention characteristics, and resources) and one at the wider system level (external support). These factors were separated into 15 themes and discussed as barriers and facilitators to sustainability (for example, school culture and staff turnover). Most articles included no definition of sustainability, and nearly all barriers and facilitators were discussed at the school level. The findings suggest that more longitudinal and theory-driven research is required to develop a clearer picture of the sustainability process. abstract_id: PUBMED:27899098 Health risk behaviours amongst school adolescents: protocol for a mixed methods study. 
Background: Determining risky behaviours of adolescents provides valuable information for designing appropriate intervention programmes for advancing adolescents' health. However, these behaviours are not fully addressed by researchers in a comprehensive approach. We report the protocol of a mixed methods study designed to investigate the health risk behaviours of Moroccan adolescents with the goal of identifying suitable strategies to address their health concerns. Methods: We used a sequential two-phase explanatory mixed method study design. The approach begins with the collection of quantitative data, followed by the collection of qualitative data to explain and enrich the quantitative findings. In the first phase, the Global School-based Student Health Survey (GSHS) was administered to 800 students who were between 14 and 19 years of age. The second phase engaged adolescents, parents and teachers in focus groups and assessed education documents to explore the level of coverage of health education in the middle school curriculum. To obtain opinions about strategies to reduce Moroccan adolescents' health risk behaviours, a nominal group technique will be used. Discussion: The findings of this mixed methods sequential explanatory study provide insights into the risk behaviours that need to be considered if intervention programmes and preventive strategies are to be designed to promote adolescents' health in Moroccan schools.
To more comprehensively understand bullying victimisation in adolescence, we examined the association of all three factors (frequency, emotional response, form) with psychological distress and emotional wellbeing. Method: A stratified, random sample of adolescents (n = 10,273; mean age = 14.33 years, standard deviation = 1.68 years) completed validated measures of bullying victimisation (Gatehouse Bullying Questionnaire), psychological distress (K10) and emotional wellbeing (Mental Health Inventory) in classroom time. Associations between the form of bullying victimisation and mental health outcomes were examined. Results: Adolescents reported a high prevalence of all four forms of bullying: teased or called names (30.6%), rumour spreading (17.9%), social exclusion (14.3%) and physical threats or harm (10.7%). Victimisation was independently associated with significantly higher levels of psychological distress and reduced levels of emotional wellbeing for all forms of bullying. In particular, social exclusion had a strong association with mental ill health. Adolescents who experienced frequent bullying that was upsetting reported higher psychological distress and reduced emotional wellbeing. Conclusion: Different forms of bullying victimisation were independently associated with psychological distress and reduced emotional wellbeing. In particular, frequent and upsetting social exclusion requires a targeted and measured response by school communities and health practitioners.
The conceptual framework derived from attachment theory and focused on three aspects of the school social context: security, communication, and participation. Implementation was standardized around a survey of the school social environment, creation of a school-based action team, and the implementation of strategies matched to a school's profile of need. In addition, an early high school curriculum addressed the skills relevant to social functioning and emotional adjustment. The evaluation design was based on a cluster randomized trial involving 26 schools. It used follow-up of an individual cohort and repeat cross-sectional surveys to capture outcomes at an individual student and whole-school level. Results And Conclusions: The Gatehouse Project drew on both health and education research to develop and coordinate a broad-based school health promotion intervention. It represents a promising new direction for school-based preventive work. abstract_id: PUBMED:32847174 The Pyramid Club Primary School-Based Intervention: Evaluating the Impact on Children's Social-Emotional Health. Background: Children with poor social-emotional health are at risk of failing to reach their potential. Method: The impact of the Pyramid Club year 3 preventive intervention on children's social-emotional health was investigated. Children were allocated to an intervention or non-problem comparison group. Results: Post-intervention both groups had improved Total Difficulty scores (Strengths and Difficulties Questionnaire) with the Pyramid group showing a significantly stronger effect size (r = .71) than the non-problem comparison group (r = .44). Conclusions: The Pyramid Club intervention improves the social-emotional health of vulnerable children. The need to evaluate such interventions and to extend research are discussed. Answer: The Gatehouse Project aimed to determine the effect of a multilevel school-based intervention on adolescents' emotional wellbeing and health risk behaviours. 
The study was designed as a school-based cluster randomised controlled trial, involving 2678 year 8 students from secondary schools. The students were surveyed using laptop computers at multiple points over three years. The main results indicated a consistent 3% to 5% risk difference between intervention and control students for any drinking, any and regular smoking, and friends' alcohol and tobacco use across the follow-up waves. The most significant effect was a reduction in the reporting of regular smoking by those in the intervention group. However, the intervention did not have a significant effect on depressive symptoms, and social and school relationships. The study concluded that focusing on general cognitive skills and positive changes to the social environment of the school can substantially impact important health risk behaviours, although further research is needed to fully understand the processes of change (PUBMED:15547059).
Instruction: Does incision of the tunica vaginalis during the Palomo procedure prevent post-operative hydrocele? Abstracts: abstract_id: PUBMED:9598637 Does incision of the tunica vaginalis during the Palomo procedure prevent post-operative hydrocele? Objective: To assess whether incision of the tunica vaginalis of the ipsilateral testis during the Palomo procedure affects the incidence of post-operative hydrocele. Patients And Methods: Forty-two patients with varicocele were treated between 1992 and 1996, all undergoing the Palomo procedure. In 15 patients, incision of the tunica vaginalis of the ipsilateral testis was performed with the Palomo procedure, to prevent the formation of hydrocele. All patients were followed at regular intervals, for 3 months to 4 years. Results: Of 27 patients who underwent the Palomo procedure alone, four developed post-operative hydrocele requiring operation. None of those who underwent incision of the tunica vaginalis with the Palomo procedure developed a hydrocele. The duration of hospital stay was the same in both groups (mean 2.5 days). During the follow-up, there was no difference in the size of the testes, as assessed with the Prader orchidometer, between the two groups. Conclusion: Incision of the tunica vaginalis of the ipsilateral testis in addition to the Palomo procedure prevents the development of post-operative hydrocele and causes no adverse sequelae. abstract_id: PUBMED:30741658 Report of two cases of malignant mesothelioma of the tunica vaginalis. Objective: Paratesticular mesothelioma is an infrequent tumor and only 250 cases have been published. It originates in the scrotal tunica vaginalis. It represents 0.3-1.4% of mesotheliomas and it predominates in patients with a history of asbestos exposure and old age. Its diagnosis is usually incidental. Our objective is to present the cases of malignant paratesticular mesothelioma that occurred in our service and to carry out a review of the current literature on this pathology.
Methods: We report two cases of malignant paratesticular mesothelioma diagnosed in the last two years. Result: The first case was a 73-year-old male with asymptomatic hydrocele. The second was a 57-year-old male who had testicular pain and hydrocele. Both were diagnosed with mesothelioma after hydrocelectomy. The first treatment was radical orchiectomy in both cases. The first patient did not need further treatment. The second patient presented pulmonary nodules, lymphadenopathy and local relapse, which was treated with chemotherapy and local resection. Conclusion: Paratesticular mesothelioma is an infrequent tumor. A scrotal mass associated with hydrocele is the typical form of presentation. Surgical treatment consists of radical orchiectomy. The prognosis is poor because in most cases there is rapid local relapse and dissemination.
Presence of lymph or exudate was excluded after biochemical analysis of the fluid. Histological examination of the tunica vaginalis confirmed thickening, hemorrhagic infiltration and inflammation, while mesothelial cells were absent. Immunochemistry for desmin was positive, excluding malignant mesothelioma. One patient underwent high ligation of the patent processus vaginalis and PLTV sheath fenestration, but one year later he had a recurrence. An elective second surgery was conducted via scrotal incision and a Jaboulay operation was performed. The latter was our treatment of choice in the other 7 of the 8 patients. During a 2-year postoperative follow-up, no other patient had any recurrence. We conclude that in primary hydrocele with macroscopic features indicative of tunica vaginalis inflammation, eversion of the tunica should be part of the operative strategy instead of sheath fenestration, in order to minimize recurrence. abstract_id: PUBMED:29285372 Malignant mesothelioma of the tunica vaginalis testis: A case report and literature review. Malignant mesothelioma of the tunica vaginalis testis is an extremely rare tumor without specific clinical manifestations, mainly including hydrocele formation and a painless mass. We herein present the case of a patient with hydrocele of the left testis, without any other complaints. Tunica vaginalis subinvolution was performed, and postoperative pathological examination revealed a malignant mesothelioma arising from the left tunica vaginalis testis. Whole-body positron emission tomography-computed tomography (PET-CT) and subsequent abdominal and pelvic magnetic resonance imaging (MRI) revealed no evidence of local lymphadenopathy. Radical left orchiectomy was performed after the pathological diagnosis. The pathological examination after the second surgery demonstrated that the tumor had invaded the adjacent periorchium and spermatic cord, but there was no evidence of local lymph node metastasis.
Pemetrexed and cisplatin were administered at doses of 900 and 130 mg, respectively, on the first day of a 28-day cycle. After 6 months of therapy, the disease had not progressed on abdominal and pelvic PET-CT and MRI. The patient was still followed up in our urology outpatient clinic at the time of the present report. Although testicular hydrocele is a common and easily diagnosed condition, detailed medical history and physical examination are required. Thus, when clinicians encounter patients with testicular hydrocele, a variety of possible causes must be considered, including testicular or paratesticular tumors, even rare tumors such as mesothelioma of the tunica vaginalis testis. abstract_id: PUBMED:29375946 Malignant Mesothelioma of Tunica Vaginalis Testis: Macroscopic and Microscopic Features of a Very Rare Malignancy. Malignant mesothelioma of the tunica vaginalis testis (MMTVT) is an extremely rare tumour, usually mimicking benign pathologies of the scrotum. Our case is an 84-year-old male patient who presented with a painless, left-sided scrotal swelling of more than 2 months' duration. Although the level of tumour markers was normal, ultrasonographic examination results forced us to perform an inguinal scrotal exploration. Multiple small papillary tumours, both on tunica vaginalis and tunica albuginea, were detected intraoperatively. Due to these findings, radical orchiectomy was performed. A pathological evaluation showed malignant mesothelioma (MM) of the tunica vaginalis testis. Exposure to asbestos is a well-known risk factor. Furthermore, a history of trauma, herniorrhaphy and chronic hydroceles has been blamed as a possible risk factor. Scrotal ultrasonography is the mainstay of primary diagnosis and, therefore, it should not be overlooked when dealing with benign scrotal cysts or hydroceles, which are very common pathologies in this age group.
Radical inguinal orchiectomy is the primary treatment choice for localised MMTVT disease, whereas when there are signs of lymph node metastasis, inguinal lymph node dissection is required. Radical resection should be completed with chemotherapy and/or radiotherapy for advanced or recurrent disease. This rarely reported case, detected during inguinal exploration, is presented here together with the pathological findings that supported the diagnosis. abstract_id: PUBMED:1464547 Malignant mesothelioma of the tunica vaginalis testis Malignant mesothelioma of the tunica vaginalis testis is an extremely rare tumor, with only 37 cases previously described in the literature. Treatment consists of inguinal orchiectomy with close follow-up [1]. Asbestos exposure, trauma and hydrocele have been implicated as risk factors. We describe a patient's history and the pathological findings as well as the management according to preceding reports in the literature. abstract_id: PUBMED:26115654 Imaging of mesothelioma of tunica vaginalis testis. Objectives: To describe the imaging findings in a series of patients with mesothelioma of the tunica vaginalis testis. Methods: We reviewed clinical data, imaging findings and follow-up information in a series of 10 pathology-proven cases of mesothelioma (all had US; 2 had MR) of the tunica vaginalis. Results: A variety of patterns could be observed, the most common (5/10) being a hydrocele with parietal, solid and hypervascular vegetations; one patient had a septated hydrocele with hypervascular walls; one had multiple, solid nodules surrounded by a small, physiological quantity of fluid; one a cystic lesion with thick walls and vegetations compressing the testis; two had a solid paratesticular mass.
MR showed multiple small nodules on the surface of the tunica vaginalis in one case and diffuse thickening and vegetations in the other; lesions had low signal intensity on T2-w images and were hypervascular after contrast injection. Conclusions: A preoperative diagnosis of mesotheliomas presenting as solid paratesticular masses seems very difficult with imaging. In contrast, the diagnosis must be considered in patients in whom a hydrocele with parietal vegetations is detected, especially if these show high vascularity. Key Points: Mesotheliomas of the tunica vaginalis are rare, often challenging to diagnose preoperatively. Most common finding is a complex hydrocele with hypervascular parietal vegetations. Septated hydrocele, nodules without hydrocele, a thick-walled paratesticular cyst are less common. Preoperative diagnosis may allow aggressive surgical approach and, possibly, a better prognosis. abstract_id: PUBMED:10230545 Malignant mesothelioma of the tunica vaginalis testis A case of malignant mesothelioma of the tunica vaginalis testis is reported in a 77-year-old male patient. There was no history of asbestos exposure. Recurrent right hydrocele with a papillar inguinal mass was the main clinical feature. An inguinal radical orchiectomy with en bloc resection of the surrounding tissue was performed. The therapeutic options for this rare, but aggressive neoplasm are discussed. Because of the disappointing results of antineoplastic chemotherapy or radiation therapy, the importance of initial radical surgical treatment with complete excision is emphasized. abstract_id: PUBMED:32106829 Malignant mesothelioma of the tunica vaginalis testis: a rare case and review of literature. Background: Malignant mesothelioma of the tunica vaginalis is a rare tumour which comprises less than 1% of all mesotheliomas. Case Presentation: A 69-year-old patient with a painful, hard mass and hydrocele in the right scrotum, who underwent right hydrocelectomy.
There was no history of scrotal trauma or asbestos exposure. Excisional biopsy revealed a multinodular tumour with focal areas of necrosis and infiltrative growth. According to morphological and immunohistochemical findings, a diagnosis of malignant biphasic mesothelioma of the tunica vaginalis testis was made. Two months after hydrocelectomy, right inguinal orchidectomy was performed. A post-surgical whole-body CT scan revealed paraaortic and pararenal lymphadenopathy, likely to be metastatic. Adjuvant treatment with 6 cycles of cisplatin and pemetrexed was administered. After 3 cycles of chemotherapy, a CT scan showed progression, and the treatment was changed to gemcitabine 1 month later. Conclusions: Although malignant mesothelioma of the tunica vaginalis is a rare malignancy, it poses a diagnostic challenge because it can mimic common inguinal or scrotal diseases such as hydrocele. Despite aggressive surgical procedures or adjuvant therapies, the prognosis remains poor. abstract_id: PUBMED:34105419 Malignant mesothelioma of tunica vaginalis: Two extremely rare case reports. Introduction: Malignant mesothelioma (MM) of tunica vaginalis is a very rare tumor and it has a poor prognosis. It is not easy to diagnose the disease preoperatively because there is a lack of comprehensive studies and information about the disease. We would like to present our two cases with severe scrotal edema and hydrocele, which were debrided perioperatively due to scrotal necrosis, followed by radical inguinal orchiectomy and subsequently diagnosed as MM of tunica vaginalis. Case 1: A 74-year-old male patient with bilateral scrotal swelling for 6 months. Right radical inguinal orchiectomy was performed. The pathology result of the right radical inguinal orchiectomy material was compatible with malignant mesothelioma of tunica vaginalis. Case 2: A 90-year-old male patient with complaints of right scrotal swelling for 3 years. A right radical inguinal orchiectomy was performed.
The pathology result of the right radical inguinal orchiectomy material was compatible with malignant mesothelioma of tunica vaginalis. Conclusion And Discussion: MM has a very poor prognosis; therefore, an early treatment plan should be made. Surgery is the main treatment for malignant mesothelioma of the testis. Although chemotherapy and radiotherapy play an active role in the treatment of metastatic disease, surgery is the first-line therapy in the treatment of the tumor. Also, long-term hydrocele, recurrent epididymitis and scrotal edema can make the tumor harder to detect. MM should be considered in such patients. Necrosis and abscess appearance had not been mentioned before in published case reports. In this respect, long-standing hydrocele and abscess or epididymitis that does not respond to treatment should not be ignored. Answer: Yes, incision of the tunica vaginalis during the Palomo procedure appears to prevent post-operative hydrocele. A study assessing the impact of incising the tunica vaginalis of the ipsilateral testis during the Palomo procedure found that none of the patients who underwent this additional step developed a hydrocele post-operatively. In contrast, among the patients who underwent the Palomo procedure alone, four developed post-operative hydrocele requiring operation. The study concluded that incision of the tunica vaginalis in addition to the Palomo procedure prevents the development of post-operative hydrocele without causing adverse sequelae (PUBMED:9598637).
Instruction: Does retrograde axonal atrophy really occur in carpal tunnel syndrome patients with normal forearm conduction velocity? Abstracts: abstract_id: PUBMED:15546786 Does retrograde axonal atrophy really occur in carpal tunnel syndrome patients with normal forearm conduction velocity? Objective: The cause of decreased median forearm motor conduction velocity (FMCV) in carpal tunnel syndrome (CTS) is best ascribed to retrograde axonal atrophy (RAA); however, the relationships between the occurrence of RAA and electrophysiological or clinical severity remain controversial. We attempt to determine whether RAA really occurs in CTS patients with normal median FMCV and to investigate any relationships between RAA and severity of compression at the wrist. Methods: Consecutive CTS patients were enrolled and age-matched volunteers served as controls. We performed conventional nerve conduction studies (NCS) and measured median and ulnar distal motor latencies (DML), FMCV, compound muscle action potential (CMAP) amplitudes, distal sensory latencies (DSL), and sensory nerve action potential (SNAP) amplitudes. Furthermore, palmar median stimulation was done to calculate the wrist-palm motor conduction velocity (W-P MCV). Patients included for analysis had to have normal FMCV and needle examination. We compared each electrodiagnostic parameter between the patient group and controls. Results: The mean ± SD of the W-P MCV for patients and controls were 33.26 ± 6.74 and 52.14 ± 5.85 m/s and those of median FMCV were 55.26 ± 3.56 and 57.82 ± 3.9 m/s, respectively. There was a significant reduction in the W-P MCV (36.2%, P<0.00001), significant decrease in the median FMCV (4.43%, P<0.00001) and SNAP amplitudes, and an increase of the DML and DSL in the patient group (P<0.00001) compared to the controls; however, there were no differences in median and ulnar CMAP amplitudes, ulnar FMCV and DML between the controls and patients.
Conclusions: RAA and relatively slowed median FMCV do occur in CTS patients with normal median FMCV, regardless of severity of clinical manifestations and electrophysiological abnormalities. Significance: This article provides new information for research of the electrophysiological changes of the proximal nerve part at distal injury. abstract_id: PUBMED:12881666 Forearm mixed nerve conduction velocity: questionable role in the evaluation of retrograde axonal atrophy in carpal tunnel syndrome. The objective of this study was to determine whether forearm mixed nerve conduction velocity (Fmix) reflects the real conduction velocity of forearm motor nerve (Fmot) and forearm sensory nerve (Fsen) fibers passing through the carpal tunnel. Forearm mixed nerve conduction velocity is presumed to be indicative of the conduction velocity of the median nerve over the forearm. Therefore, Fmix is used widely to assess the causes of slowing forearm conduction velocity in carpal tunnel syndrome. However, some authors claim that Fmix comes chiefly from the undamaged fibers in carpal tunnel syndrome, and thus cannot replace Fmot or Fsen in the evaluation of retrograde axonal atrophy. Patients with clinical symptoms and signs of carpal tunnel syndrome confirmed with standard electrodiagnosis were included. Age-matched volunteers served as control subjects. Conduction velocities across the wrist and over the forearm were measured, including those of the wrist sensory (Wsen), wrist motor (Wmot), and wrist mixed nerves (Wmix); and forearm mixed (Fmix), forearm motor (Fmot), and forearm sensory nerves (Fsen). The authors compared and correlated Wsen, Wmot, and Wmix; and Fmix, Fmot, and Fsen respectively. The mean values of Wsen, Wmot, Wmix, Fmix, Fmot, and Fsen of the control subjects less those of corresponding conduction velocity of carpal tunnel syndrome patients were designated Wsen N, Wmot N, Wmix N, Fmix N, Fmot N, and Fsen N respectively and were compared and correlated again. 
Wrist motor nerve conduction velocity, Wsen, and Wmix were significantly lower in carpal tunnel syndrome patients, and Fmot and Fsen but not Fmix were reduced significantly when compared with control subjects. Mean wrist sensory nerve conduction velocity, Wmot N, and Wmix N; and Fsen N and Fmot N showed good correlation except for Fmix N, suggesting that Fmix reflects the conduction velocity of undamaged fibers in carpal tunnel syndrome. Forearm mixed nerve conduction velocity cannot replace Fmot or Fsen in the assessment of retrograde axonal atrophy in carpal tunnel syndrome. In the disease state, Fmix possibly represents the conduction velocity of the palmar cutaneous branch. abstract_id: PUBMED:10825711 The cause of slowed forearm median conduction velocity in carpal tunnel syndrome. Objectives: Attempting to answer a debate concerning the etiopathogenesis of the decreased forearm median motor conduction velocity (FMMCV), we tried to use proximal stimulation at the wrist, elbow, mid-arm and axillary regions to determine segmental median motor conduction velocity (MMCV). We also correlated the FMMCV with median motor distal latency (MMDL) and compound muscle action potential (CMAP) amplitudes of the abductor pollicis brevis (APB) muscle in order to assess whether the conduction block of large myelinating fibers or retrograde axonal atrophy was the major cause of the decreased FMMCV. Background: The cause of the decreased FMMCV resulting from either the conduction block of the large myelinating fibers at the wrist or distal compression with retrograde axonal atrophy remains an unresolved issue at the moment. Animal models have supported the hypothesis that the retrograde axonal atrophy might also occur in humans. Other authors believe the standard FMMCV is calculated by subtracting the distal latency which may not represent an exact assessment of FMMCV but rather the velocity of small fibers that persist through the carpal tunnel. 
Subjects And Methods: Patients with the clinical symptoms and signs of carpal tunnel syndrome (CTS) confirmed using standard electrodiagnosis were included. The patients were arbitrarily divided into two groups based on the FMMCV, one with reduced FMMCV (n = 20, FMMCV < 50 m/s) and the other with normal FMMCV (n = 35, FMMCV ≥ 50 m/s). Age-matched volunteers served as controls. We explored motor conduction proximally at wrist, elbow, mid-arm and axillary stimulation, and recorded at the APB muscles. Based on the latency differences, we calculated the FMMCV, distal arm MMCV (DAMMCV) and proximal arm MMCV (PAMMCV), and compared the conduction velocity (CV) differences of DAMMCV-FMMCV, PAMMCV-FMMCV and PAMMCV-DAMMCV in the two patient groups and the control. Furthermore, we correlated FMMCV with MMDL and CMAP amplitudes of APB muscle because MMDL and CMAP amplitudes might reflect the integrity of the large myelinating fibers. Results: CMAP amplitudes of APB muscle at wrist stimulation and MMDL were not correlated with FMMCV in either of the two patient groups; however, the CMAP amplitude was markedly decreased and MMDL was significantly prolonged when compared with normal controls. The significant increase of CV gradient of DAMMCV-FMMCV and PAMMCV-FMMCV without an equal increase of CV gradient of PAMMCV-DAMMCV only occurred in the reduced FMMCV patient group, suggesting that the conduction block is not the primary cause. The CV gradient of DAMMCV-FMMCV and PAMMCV-DAMMCV did not show any significant difference between patients with the normal FMMCV and the control group. Conclusion: The retrograde axonal atrophy, not selective damage of the large fibers at the wrist, was the direct cause of the decreased FMMCV. abstract_id: PUBMED:18976952 The role of forearm mixed nerve conduction study in the evaluation of proximal conduction slowing in carpal tunnel syndrome.
Objective: A decrease of forearm median motor conduction velocity (CV) is a common electrophysiological finding in carpal tunnel syndrome (CTS), ascribed to two possible mechanisms: either conduction block or slowing of the fastest myelinating fibers in the carpal tunnel, or retrograde axonal atrophy (RAA) with retrograde conduction slowing (RCS). We hope to utilize both direct and derived forearm median mixed nerve conduction studies to clarify the mechanism of the decrease of forearm median motor CV in CTS. Methods: Seventy-five CTS patients and 75 age-matched control subjects received conventional motor and sensory nerve conduction studies of median and ulnar nerves and forearm median mixed nerve conduction techniques. First, direct measurement of forearm median mixed conduction velocity (Forearm mixed CV) and nerve action potential amplitude (Forearm mixed amplitude) was determined with recording at elbow and stimulation at wrist. Then, stimulating electrode was placed over palm and recording at elbow and then at wrist to calculate the derived Forearm mixed CV. Electrophysiological parameters, including direct Forearm mixed CV and amplitude and derived Forearm mixed CV, were compared between CTS patients and controls. Results: CTS patients had significantly prolonged wrist-palm sensory and motor conduction, significantly decreased forearm median motor CV, and normal ulnar nerve conduction. The direct Forearm mixed amplitude was significantly decreased in CTS patients. The direct Forearm mixed CV was similar in CTS patients and controls, but there was a significant decrease in derived Forearm mixed CV in CTS group. The difference between direct and derived Forearm mixed CV was significantly greater in the CTS, suggesting that direct and derived Forearm mixed CV represent CV from different nerve fibers, one passing outside carpal tunnel without undergoing RAA or the other through the carpal tunnel with occurrence of RAA. 
Conclusion: A decrease of direct Forearm mixed amplitude really occurs in CTS, implying that RAA and RCS will develop over the proximal median nerve at distal nerve injury and the decreased forearm median motor CV is best ascribed to RAA and RCS. Furthermore, in CTS, the direct Forearm mixed CV measures the CV from undamaged nerve fibers not passing through the carpal tunnel, resulting in the misinterpretation of the cause of proximal conduction slowing as secondary to conduction block or slowing over the wrist. Significance: We provide direct evidence of the occurrence of RAA and RCS that would explain the cause of proximal median nerve conduction slowing. However, the clinical significance of RAA and RCS is uncertain. abstract_id: PUBMED:12088702 The cause of slowed forearm median conduction velocity in carpal tunnel syndrome: a Palmar stimulation study. Objectives: To elucidate the etiopathogenesis of decreased forearm median motor conduction velocity (FMMCV) in carpal tunnel syndrome (CTS), we used segmental stimulation at the palm, wrist and antecubital fossa to determine conduction block at the wrist and to calculate and compare the segmental median motor conduction velocity (MMCV) to determine the pathogenesis. Background: The cause of the decreased FMMCV in CTS remains unclear. Animal models have supported retrograde axonal atrophy as the cause. Some authors believe standard FMMCV, calculated by subtracting the distal latency, may not represent an exact assessment of FMMCV but rather the velocity of small fibers that persist throughout the carpal tunnel. Subjects And Methods: Patients with clinical symptoms and signs of CTS, which had been confirmed with standard electrodiagnosis, were included. The patients were divided into two groups: one with reduced FMMCV < 50 m/s (Group I, n=20) and the other with normal FMMCV > 50 m/s (Group II, n=40). Age-matched volunteers served as controls (n=60).
We used palm, wrist and antecubital stimulation, and recorded compound muscle action potential (CMAP) amplitudes at the abductor pollicis brevis (APB) muscle. Based on a ratio of the CMAP amplitudes obtained from wrist and palm stimulation (W/P ratio) and the latency differences, we calculated the W/P ratio and the across-wrist MMCV (AWMMCV) and FMMCV and compared and correlated them between the two patient groups. Results: There was no difference in median motor and sensory distal latency between Groups I and II. CMAP and sensory nerve action potential amplitudes were reduced in Group I compared with Group II, but the difference was only marginally significant. Four patients had a significant reduction of the W/P ratio in Group I, compared with 7 patients in Group II, which did not reach significance. Sixteen patients (80%) in Group I demonstrated no conduction block. Furthermore, Group I showed significantly decreased FMMCV when compared with Group II; however, AWMMCV was not significantly reduced in Group I, suggesting that decreased FMMCV does not result from a decrease in AWMMCV. Conclusions: There was no significant motor conduction block and no correlation of the FMMCV and AWMMCV in CTS patients with a decrease of FMMCV, suggesting retrograde axonal atrophy, and not selective conduction block of the large fibers at the wrist, is the direct cause of decreased FMMCV in CTS. abstract_id: PUBMED:12804677 The reason for forearm conduction slowing in carpal tunnel syndrome: an electrophysiological follow-up study after surgery. Background: The exact cause of decreased forearm median motor conduction velocity (FMMCV) in carpal tunnel syndrome (CTS) is still a subject of controversy. A conduction block or an axonal loss in the large myelinating fibers upon wrist compression, or retrograde axonal atrophy, is suspected.
Methods: In order to attempt a determination of the cause, 10 patients with clinical symptoms and signs of CTS, confirmed using standard electrodiagnosis and with a slowed FMMCV < 50 m/s, were included in this study. Serial standard median motor conduction studies were performed at baseline, 1 week, 2 weeks, 4 weeks, 8 weeks, and 12 weeks after endoscopic ligament release. Serial median motor distal latencies (MMDL), compound muscle action potential (CMAP) amplitudes, and FMMCV were determined and compared. Results: Significant improvement in MMDL had occurred at the 1-week follow-up examination; however, no such improvement in FMMCV was observed. Furthermore, a significant increase in CMAP amplitude was evidenced beginning 4 weeks after surgery. The results revealed an improvement in median motor conduction across the wrist segment that was not paralleled by an increase in FMMCV, suggesting that a conduction block or axonal loss at wrist compression was not the likely cause of the decreased FMMCV. Conclusions: Retrograde axonal atrophy, not selective damage to the large myelinating fibers at the wrist, is the direct cause of decreased FMMCV in CTS. abstract_id: PUBMED:12140002 Does direct measurement of forearm mixed nerve conduction velocity reflect actual nerve conduction velocity through the carpal tunnel? Objectives: The purpose of this study was to determine whether forearm (wrist-elbow) mixed nerve conduction velocity (W-Emix) represents the actual nerve conduction velocity (CV) of nerve fibers passing through the carpal tunnel. Background: W-Emix is presumed to reflect the actual forearm CV through the carpal tunnel. However, it has been argued that W-Emix chiefly originates from the nerve fibers passing outside the carpal tunnel. Therefore, the direct measurement of W-Emix cannot be used to assess retrograde axonal atrophy in carpal tunnel syndrome (CTS).
Subjects And Methods: Thirty patients with clinical signs and symptoms of CTS were recruited and the diagnosis was confirmed with standard electrodiagnosis. Fifty age-matched volunteers served as controls. Recording electrodes were placed over the elbow and index finger for mixed nerve and sensory nerve conduction studies, respectively. Stimulation was applied at the palm and wrist for the measurement of mixed nerve wrist-palm CV (W-Pmix), wrist-elbow CV (W-Emix), and elbow-palm CV (E-Pmix). Stimulation was applied at the elbow, wrist, and palm for the measurement of wrist-elbow sensory CV (W-Esen), wrist-palm CV (W-Psen), and elbow-palm CV (E-Psen). Comparisons were made between W-Pmix and W-Psen, W-Emix and W-Esen, and E-Pmix and E-Psen. Results: Correlations between W-Emix and W-Esen, E-Pmix and E-Psen, and W-Pmix and W-Psen were good in the controls. In the patient group, there was a strong positive correlation between W-Pmix and W-Psen, and between E-Pmix and E-Psen. However, W-Esen correlated weakly with W-Emix, suggesting that W-Emix chiefly represents the CV of fibers passing outside the carpal tunnel. Therefore, the direct measurement of W-Emix cannot be used to assess retrograde axonal atrophy. Furthermore, the reduction in W-Psen was more marked than the reduction in W-Esen, implying that a conduction block at the wrist is the least likely cause of proximal slowing in CTS. Conclusions: W-Emix does not reflect the actual CV of the nerve fibers passing through the carpal tunnel. In addition, retrograde axonal atrophy appears to be the primary cause of decreased forearm CV in CTS. abstract_id: PUBMED:34377874 Reduction in median nerve cross-sectional area at the forearm correlates with axon loss in carpal tunnel syndrome. Objective: To explore the relationship between axon loss and measured cross-sectional areas of the median nerve (MN) in severe carpal tunnel syndrome (CTS).
Methods: In this retrospective study of 158 examined wrists, we compared axon loss to the ultrasound parameters MN cross-sectional area at the wrist (wCSA), MN cross-sectional area at the forearm (fCSA) and wrist-to-forearm ratio (WFR), in patients with moderate to extreme CTS. Axon loss was evaluated by needle electromyography (EMG) of the abductor pollicis brevis muscle (spontaneous activity and reduction of interference pattern). Results: Both the spontaneous activity and interference pattern reduction correlated negatively to fCSA (r = -0.189, p = 0.035; r = -0.210, p = 0.019; respectively). In moderate CTS, both the spontaneous activity and interference pattern reduction correlated positively to WFR (r = 0.231, p = 0.048; r = 0.232, p = 0.047; respectively). The WFR was highest when slight spontaneous activity was detected. Neither wCSA nor WFR correlated with axon loss in severe and extreme CTS. Conclusions: The fCSA is smaller when axon loss in CTS is more prominent. The WFR is highest when CTS is associated with slight axon loss of the MN. Significance: CTS might cause retrograde axonal atrophy detected as small fCSA. Prominent axon loss in CTS may reduce the diagnostic value of WFR. abstract_id: PUBMED:34575232 Sensory Nerve Conduction Velocity Predicts Improvement of Hand Function with Nerve Gliding Exercise Following Carpal Tunnel Release Surgery. This study aims to investigate the effects of nerve gliding exercise following carpal tunnel release surgery (NGE-CTRS) and the probing factors affecting the effect of NGE-CTRS on hand function. A total of 86 patients after CTRS participated. Grip strength (grip-s), pinch strength (pinch-s), Semmes-Weinstein monofilament test (SWMT), two-point discrimination (2PD), numbness, pain, and Phalen test (Phalen) were measured and compared between pre- and post-NGE-CTRS. 
The results showed that the combination of surgery and NGE significantly improved the postoperative grip-s, pinch-s, SWMT, 2PD, numbness, and Phalen; however, no improvement was observed in pain. Background factors that influenced the improved grip-s and pinch-s included gender and preoperative sensory nerve conduction velocity (SCV). Additionally, numbness and Phalen were not affected by age, gender, fault side, bilateral, trigger finger, dialysis, thenar eminence atrophy, motor nerve conduction velocity, SCV, the start of treatment, and occupational therapy intervention. In conclusion, the combination of surgical procedures and NGE showed a high improvement. SCV and time-to-start treatment of intervention for carpal tunnel syndrome may be useful in predicting the function after the intervention. abstract_id: PUBMED:30106062 A novel chronic nerve compression model in the rat. Current animal models of chronic peripheral nerve compression are mainly silicone tube models. However, the cross section of the rat sciatic nerve is not a perfect circle, and there are differences in the diameter of the sciatic nerve due to individual differences. The use of a silicone tube with a uniform internal diameter may not provide a reliable and consistent model. We have established a chronic sciatic nerve compression model that can induce demyelination of the sciatic nerve and lead to atrophy of skeletal muscle. In 3-week-old pups and adult rats, the sciatic nerve of the right hind limb was exposed, and a piece of surgical latex glove was gently placed under the nerve. N-butyl-cyanoacrylate was then placed over the nerve, and after it had set, another piece of glove latex was placed on top of the target area and allowed to adhere to the first piece to form a sandwich-like complex. Thus, a chronic sciatic nerve compression model was produced. Control pups with latex or N-butyl-cyanoacrylate were also prepared. 
Functional changes to nerves were assessed using the hot plate test and electromyography. Immunofluorescence and electron microscopy analyses of the nerves were performed to quantify the degree of neuropathological change. Masson staining was conducted to assess the degree of fibrosis in the gastrocnemius and intrinsic paw muscles. The pup group rats subjected to nerve compression displayed thermal hypoesthesia and a gradual decrease in nerve conduction velocity at 2 weeks after surgery. Neuropathological studies demonstrated that the model caused nerve demyelination and axonal irregularities and triggered collagen deposition in the epineurium and perineurium of the affected nerve at 8 weeks after surgery. The degree of fibrosis in the gastrocnemius and intrinsic paw muscles was significantly increased at 20 weeks after surgery. In conclusion, our novel model can reproduce the functional and histological changes of chronic nerve compression injury that occurs in humans and it will be a useful new tool for investigating the mechanisms underlying chronic nerve compression. Answer: Yes, retrograde axonal atrophy (RAA) does occur in carpal tunnel syndrome (CTS) patients with normal forearm conduction velocity. This is supported by a study that found a significant reduction in wrist-palm motor conduction velocity (W-P MCV) and median forearm motor conduction velocity (FMCV) in CTS patients with normal median FMCV compared to controls. The study concluded that RAA and relatively slowed median FMCV occur in CTS patients with normal median FMCV, regardless of the severity of clinical manifestations and electrophysiological abnormalities (PUBMED:15546786). Furthermore, another study indicated that the decrease in forearm median motor conduction velocity (FMMCV) in CTS is best ascribed to RAA and not selective damage of the large fibers at the wrist. 
This conclusion was drawn from the observation that compound muscle action potential (CMAP) amplitudes of the abductor pollicis brevis (APB) muscle at wrist stimulation and median motor distal latency (MMDL) were not correlated with FMMCV in CTS patient groups, suggesting that conduction block is not the primary cause (PUBMED:10825711). Additionally, a study utilizing forearm mixed nerve conduction techniques found that the direct forearm mixed nerve amplitude was significantly decreased in CTS patients, implying that RAA and retrograde conduction slowing (RCS) develop over the proximal median nerve after distal nerve injury. The study provided direct evidence of the occurrence of RAA and RCS, which would explain the cause of proximal median nerve conduction slowing (PUBMED:18976952). Moreover, a retrospective study correlating axon loss with ultrasound parameters of the median nerve in severe CTS found that axon loss is negatively correlated with the median nerve cross-sectional area at the forearm (fCSA). This suggests that CTS might cause RAA detected as a smaller fCSA, further supporting the occurrence of RAA in CTS patients (PUBMED:34377874).
Instruction: Is There a Connection Between Electrosensitivity and Electrosensibility? Abstracts: abstract_id: PUBMED:29972004 Acupoint electrosensitivity and regularity based on bibliometrics. Objective: To explore the acupoint electrosensitivity and regularity for assisting the clinical diagnosis and treatment so as to improve the clinical therapeutic effects. Methods: By means of computer retrieval (CBMdisc, CNKI, VIP, WanFang, PubMed and EMBASE) and manual retrieval, the relevant literature on acupoint electrosensitivity was reviewed and the modern literature database of acupoint electrosensitivity was established. Using bibliometrics, the relations were analyzed among the detected indexes, acupoints, meridians and disorders under conditions of acupoint electrosensitivity, and the potential regularity was explored. Results: The general detection indexes of acupoint electrosensitivity were resistance, inertia area, volt-ampere area, electric conduction quantity, electric potential, electric current and capacitance. The electrosensitive points included meridian points, auricular points and extraordinary points. Among those points, the electrosensitivity easily occurred at the special points. Conclusion: The acupoint electrosensitivity is mainly reflected in electrical impedance and volt-ampere characteristics. Relative specificity exists between acupoint electrosensitivity and zangfu disorders. In clinical practice, detecting acupoint electrosensitivity may assist diagnosis and guide acupoint selection to improve the clinical therapeutic effects. abstract_id: PUBMED:25778471 Is There a Connection Between Electrosensitivity and Electrosensibility? A Replication Study. Background: Among people with idiopathic environmental intolerance attributed to electromagnetic fields (IEI-EMF), a better than random detection ability for a 50-Hz 0.5-mT magnetic field (MF) and a propensity to experience more symptoms than controls was reported in a previous study.
Purpose: The current study aimed to replicate and clarify these results using a modified experimental design. Method: Participants of the provocation experiment were 49 individuals with self-reported IEI-EMF and 57 controls. They completed the questionnaires (symptom expectations, Somatosensory Amplification Scale--SSAS, radiation subscale of the Modern Health Worries Scale--MHWS Radiation) and attempted to detect the presence of the MF directed to their right arm in 20 subsequent 1-min sessions. Symptom reports were registered after each session. Results: Individuals with IEI-EMF as opposed to the control group showed a higher than random detection performance (d' index of signal detection theory), while no difference in their bias (β index) toward the presence of the MF was found. Predictors of reported symptoms were self-reported IEI-EMF and believed as opposed to actual presence of the MF. People with IEI-EMF reported significantly more symptoms particularly in the believed presence of the MF. IEI-EMF was closely related to MHWS Radiation and SSAS scores. Conclusion: People with IEI-EMF might be able to detect the presence of the MF to a small extent; however, their symptom reports are connected to perceived exposure. abstract_id: PUBMED:12929157 Electrosensibility and electromagnetic hypersensitivity. Electromagnetic sensibility, the ability to perceive electric and electromagnetic exposure, and electromagnetic hypersensitivity (EHS), developing health symptoms due to exposure to environmental electromagnetic fields, need to be distinguished. Increased electrosensibility is a necessary, however, not a sufficient condition for electromagnetic hypersensitivity. At an extended sample of the general population of 708 adults, including 349 men and 359 women aged between 17 and 60 years, electrosensibility was investigated and characterized by perception threshold and its standard deviation. 
By analyzing the probability distributions of the perception threshold of 50 Hz electric currents, evidence could be found for the existence of a subgroup of people with significantly increased electrosensibility (hypersensibility) who as a group could be differentiated from the general population. The presented data show that the variation of the electrosensibility among the general population is significantly larger than has yet been estimated by nonionizing radiation protection bodies, but much smaller than claimed by hypersensitivity self-aid groups. These quantitative results should contribute to a less emotional discussion of this problem. The investigation method presented is suitable for exclusion diagnostics in persons suffering from the hypersensitivity syndrome. abstract_id: PUBMED:15690669 Relation between anal electrosensitivity and rectal filling sensation and the influence of age. Purpose: The aim of this study was to assess the effect of age and sex on the rectal filling sensation and anal electrosensitivity and to explore the relation between anal electrosensitivity and the parameters of the rectal filling sensation. Methods: Anal mucosal electrosensitivity and anorectal manometry, including the rectal filling sensation test, were performed in 19 control subjects; 10 were younger than 60 years and 9 were older than that. Altogether, there were 11 men and 8 women. Results: Anal electrosensitivity did not differ between the two age groups. Women had a significantly lower electrosensitivity 4 and 5 cm from the anal verge than men, as well as a significantly shorter anal high-pressure zone. The rectal filling sensation did not differ between sexes. In the older age group, the rectal volumes required to induce filling sensations were smaller than those observed in the younger age group, but rectal pressures were comparable; as a consequence, rectal compliance was lower in older subjects.
Anal electrosensitivity at different anal levels did not correlate with the rectal volume or pressure parameters of successive rectal filling sensations. The pressure recorded in the proximal anal canal at the consecutive rectal filling sensations strongly correlated with the rectal balloon pressure needed to elicit them. Conclusions: The zones of high anal electrosensitivity and high pressure seem to coincide. The fact that both are shorter in females did not influence the parameters of the rectal filling sensation. Lower rectal volumes but comparable rectal pressures were needed to induce the rectal filling sensation in the older age group. Rectal sensation did not correlate with anal electrosensitivity, probably because the receptors are not stimulated by the type of anal stimulation used or because different receptors are involved. Hence, the rectal filling sensation test cannot be replaced by the simpler anal electrosensitivity test. abstract_id: PUBMED:10602601 "Hypersensitivity to Electricity" in the Office; Symptoms and Improvement. Nineteen persons "hypersensitive" to electricity and 20 nonafflicted persons were studied for 1 ½ years. The most discernible hypersensitivity symptoms were pricking sensations and redness in the face, but these symptoms were present in only half of the afflicted. Other symptoms were similar to symptoms experienced during office work and this study does not support the idea that electrosensitivity is one single syndrome. The "hypersensitive" persons improved significantly, mainly on neuropsychiatric symptoms, but the skin problems persisted, as did the belief about their cause. The afflicted persons used less conventional medication than the group of the nonafflicted, which suggests a general tendency for attribution to environmental factors. abstract_id: PUBMED:38068115 Study on Joint Connection Performance of an Innovative Tooth Groove Connection and Vertical Reinforcement Lapping in Reserved Hole.
In order to explore the horizontal joint connection performance of the innovative tooth groove connection and vertical reinforcement lapping in the reserved hole, five horizontal joint specimens were designed and constructed in this paper. Through the combination of monotonic horizontal load tests and finite element simulation analysis, the effects of axial compression ratio, vertical reinforcement connection degree, reserved hole type, mortar strength, and tooth groove depth on the horizontal joint connection performance of innovative tooth groove connections and vertical reinforcement lapping in reserved holes were comprehensively analyzed and discussed. The results indicated that the specimens were subjected to penetration failure at the tooth groove joint, but the vertical reinforcements and UHPC in reserved holes can effectively transfer the stress, ensuring satisfactory connection performance. With the increase in axial compression ratio and vertical reinforcement connection degree, the joint connection performance enhanced gradually, while the reserved hole type had little effect on the joint connection performance. In addition, it was found that increasing the mortar strength and the tooth groove depth can significantly improve the peak bearing capacity through finite element analysis. Finally, the optimization design suggestions for this innovative tooth groove connection and vertical reinforcement lapping in the reserved hole were given considering factors such as joint connection performance and construction assembly. abstract_id: PUBMED:34370029 Mental Disorder and Suicide: What's the Connection? This paper offers a philosophical analysis of the connection between mental disorder and suicide risk. In contemporary psychiatry, it is commonly suggested that this connection is a causal connection that has been established through empirical discovery. Herein, I examine the extent to which this claim can be sustained. 
I argue that the connection between mental disorder and increased suicide risk is not wholly causal but is partly conceptual. This in part relates to the way suicidality is built into the definitions of some psychiatric diagnoses. It also relates to the broader normative assumption that suicidal behavior is by definition mentally disordered behavior. The above has significant epistemological implications, which I explore. I propose that the claim that suicide is connected with mental disorder cannot be justified solely by appealing to empirical evidence but also warrants a justification on conceptual and normative grounds. abstract_id: PUBMED:10883774 Perceived electrosensitivity and magical ideation. We report, in 40 healthy right-handed men, an association between perceived sensitivity to weak electromagnetic fields and scores on the Magical Ideation scale, assessing experience of and beliefs in forms of causation that are nonexistent according to current scientific standards (r = .48, p =.002). Based on our previous research, we interpret this finding as reflecting human subjects' propensity to assume connections between own organismic state and (objectively unrelated) environmental variables. We emphasize, however, the need for studies which directly assess accuracy in detection of electromagnetic fields as a function of self-rated electrosensitivity. abstract_id: PUBMED:35330854 The Importance of Awareness, Acceptance, and Alignment With the Self: A Framework for Understanding Self-Connection. We provide a theoretical framework for what it means to be self-connected and propose that self-connection is an important potential contributor to a person's well-being. We define self-connection as consisting of three components: 1) an awareness of oneself, 2) an acceptance of oneself based on this awareness, and 3) an alignment of one's behavior with this awareness. 
First, we position the concept within the broader self literature and provide the empirical context for our proposed definition of self-connection. We next compare and contrast self-connection to related constructs, including mindfulness and authenticity. Following, we discuss some of the potential relationships between self-connection and various aspects of mental health and well-being. Finally, we provide initial recommendations for future research, including potential ways to promote self-connection. In all, we present this theory to provide researchers with a framework for understanding self-connection so that they can utilize this concept to better support the efforts of researchers and practitioners alike to increase individuals' well-being in various contexts. abstract_id: PUBMED:31849741 Interoception and Social Connection. Interoception - the process of sensing bodily signals - has gained much interest in recent years, due to its role in physical and mental well-being. Here, we focus on the role of interoception in social connection, which is a relatively new and growing research area. Studies in this area suggest that interoception may help in appraising physiological signals in social situations, but also that (challenging) social situations may reduce interoceptive processing by shifting attention from internally- to externally- focused. We discuss potential mechanisms for the influence of interoception on social connection and highlight that flexibility in engaging interoception in social situations may be particularly important. We end with a discussion of loneliness - an extreme case of poor social connection, which is associated with physiological decline and increased mortality risk, and propose that interoceptive dysregulation is involved. We suggest that interventions aimed to improve interoceptive abilities, such as mindfulness-based meditation practices, may be key for alleviating loneliness and improving social connection. 
Answer: Yes, there is a connection between electrosensitivity and electrosensibility. Electrosensitivity, also known as idiopathic environmental intolerance attributed to electromagnetic fields (IEI-EMF), refers to the condition where individuals report experiencing symptoms when they believe they are exposed to electromagnetic fields (EMF), even at levels well below those considered harmful (PUBMED:25778471). Electrosensibility, on the other hand, is the ability to perceive electric and electromagnetic exposure (PUBMED:12929157). A study aimed to replicate and clarify the connection between electrosensitivity and electrosensibility using a modified experimental design found that individuals with IEI-EMF showed a higher than random detection performance for a 50-Hz 0.5-mT magnetic field, indicating some degree of electrosensibility. However, their symptom reports were more closely connected to perceived exposure rather than actual exposure, suggesting that psychological factors such as belief in the presence of the MF played a significant role in the manifestation of symptoms (PUBMED:25778471). Another study investigating electrosensibility in the general population found evidence for the existence of a subgroup of people with significantly increased electrosensibility, who could be differentiated from the general population. This increased electrosensibility is a necessary, but not sufficient, condition for electromagnetic hypersensitivity (EHS), which is characterized by developing health symptoms due to exposure to environmental electromagnetic fields (PUBMED:12929157). In summary, while there is a connection between electrosensitivity and electrosensibility, the relationship is complex and influenced by both physiological detection ability and psychological factors such as beliefs and expectations about EMF exposure.
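The detection-performance measures cited in this answer, the d' (sensitivity) and β (bias) indices, come from standard signal detection theory. As a minimal sketch of how they are typically computed from a participant's hit and false-alarm counts (the session tallies below are hypothetical, not data from the cited study):

```python
import math
from statistics import NormalDist


def sdt_indices(hits: int, misses: int, false_alarms: int, correct_rejections: int):
    """Textbook signal detection indices: sensitivity d' and likelihood-ratio bias beta.

    Assumes hit and false-alarm rates strictly between 0 and 1 (a correction
    such as the log-linear rule is needed at the extremes).
    """
    nd = NormalDist()
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z_h, z_f = nd.inv_cdf(hit_rate), nd.inv_cdf(fa_rate)
    d_prime = z_h - z_f                         # 0 = chance-level detection
    beta = math.exp((z_f ** 2 - z_h ** 2) / 2)  # 1 = no response bias
    return d_prime, beta


# Hypothetical tallies over 20 one-minute sessions (10 field-on, 10 field-off)
d, b = sdt_indices(hits=8, misses=2, false_alarms=4, correct_rejections=6)
print(round(d, 2), round(b, 2))  # prints: 1.09 0.72
```

A d' above 0 with β near 1 matches the pattern reported in PUBMED:25778471: slightly better-than-chance detection without a bias toward reporting the field as present.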
Instruction: Bladder function evaluation before renal transplantation in nonurologic disease: is it necessary? Abstracts: abstract_id: PUBMED:24210566 Bladder function evaluation before renal transplantation in nonurologic disease: is it necessary? Objective: To determine whether preoperative cystometry and a pressure flow study (PFS) are necessary in patients with end-stage renal disease from nonurologic causes who will undergo renal transplantation. Methods: From April 2009 to June 2010, 30 patients scheduled to undergo renal transplantation were prospectively evaluated with cystometry and PFS. The evaluation was performed immediately before and 6 months after renal transplantation. The inclusion criteria were age >18 years and end-stage renal disease secondary to nonurologic disease. Results: Improvement in the cystometry and PFS parameters was observed after the return of diuresis at 6 months after transplantation. The parameter changes from baseline to the 6-month evaluation were as follows: first sensation of bladder filling, 88.8-168.7 mL (P = .0005); first desire to void, 137.2-251.1 mL (P <.0001); maximal cystometric capacity, 221.2-428.7 mL (P <.0001); bladder compliance, 73.9-138.6 mL/cm H2O (P = .03); and maximal flow rate, 8.1-15.8 mL/s (P <.0001). The Abrams-Griffiths number in the men decreased from 31.8 to 15.2 (P = .002). No significant changes were observed in the detrusor pressure at the maximal flow rate or the postvoid residual urine volume. Patients with a 24-hour urine output <200 mL tended to have significantly worse parameters before transplantation. Conclusion: Significant improvement in the cystometry and PFS parameters was observed in patients with end-stage renal disease, without urologic disease, 6 months after transplantation, and was associated with recovery of the glomerular filtration rate and urine output by the renal graft.
abstract_id: PUBMED:32472980 Does renal transplant in children with LUTD improve their bladder function? Much is still unknown about LUT function after receiving a renal graft. Graft function was the main focus of different studies discussing the same issue. However, these studies ignored the effects of the graft on lower tract function, the increased demand for bladder cycling, and the growth of the child. Therefore, we aimed at evaluating the LUT function after RT in patients with LUTD. We enrolled a retrospective cohort of 83 live renal transplant children with LUTD. The 44 patients in Group (A) had a defunctionalized bladder, and the 39 patients in Group (B) had underlying LUT pathology. All patients had clinical and urodynamic evaluation of LUT functions at least 1 year after RT. We found that the improvement in patients with impaired bladder compliance was 73% in Group (A) and 60% in Group (B), with no statistically significant difference between the study groups. In Group (B), there was statistically significant worsening of MFP (8.4%) and mean PVR (79.9%) after RT. In Group (A), mild but stable significant improvement of all clinical and urodynamic parameters was observed. Serum creatinine was significantly worse in patients with pathological LUTD compared with those with defunctionalized bladder but without significant effect on graft survival. All LUT variables seemed to have no adverse effect on graft survival except for use of CIC and augmented bladder. Incident UTI independent of LUT variables accounted for 20% of graft creatinine change. abstract_id: PUBMED:20534225 Renal transplantation in patients with "valve bladder": is bladder augmentation necessary? Introduction: Posterior urethral valve is a common cause of renal failure in children. This disorder often results in a small bladder and low compliance, which frequently requires bladder augmentation.
Herein, we report our experience in 5 children with "valve bladder" who underwent renal transplantation without preliminary bladder enlargement. Materials And Methods: Thirteen children with valve bladder undergoing renal transplantation were considered candidates for bladder augmentation. All had oligoanuria at transplantation. In 8 children, bladder augmentation was performed before renal transplantation; in the remaining 5, the decision was postponed until after transplantation. These children underwent transplantation with a ureteral reimplant, and a suprapubic catheter was in place for 2 months. Periodically, renal function, bladder capacity, and compliance were assessed, and renal ultrasonography was performed. Results: At 1-, 2-, 4-, and 6-month follow-up, the 5 children who did not undergo bladder augmentation demonstrated normal renal function, with improved bladder capacity and absence of hydronephrosis. No significant difference was evident between the 2 groups (augmented vs nonaugmented) insofar as renal function, bladder capacity, or hydronephrosis. After transplantation, bladder augmentation was not deemed necessary in any of the 5 children because of complete restoration of clinical and urodynamic parameters. Conclusion: Renal transplantation can be performed safely without preemptive bladder augmentation. Ureteral reimplantation is recommended, even in patients with small valve bladders. The decision about the need for bladder augmentation should be made only after normal diuresis is restored. abstract_id: PUBMED:21896378 CKD and bladder problems in children. Approximately 35% of children with CKD who require renal replacement therapy have a significant urological abnormality, including posterior urethral valves, a neuropathic bladder, prune belly syndrome, Hinman syndrome, or severe vesicoureteral reflux. In such children, abnormal bladder function can have a significant deleterious effect on the renal function. 
In children with bladder outlet obstruction, bladder compliance and capacity often are abnormal, and a sustained intravesical pressure of >40 cm H2O impedes drainage from the upper urinary tract. Consequently, in these conditions, regular evaluation with renal sonography, urodynamics, urine culture, and serum chemistry needs to be performed. Pediatric urological care needs to be coordinated with pediatric nephrologists. Many boys with posterior urethral valves have severe polyuria, resulting in chronic bladder overdistension, which is termed valve bladder. In addition to behavioral modification during the day, such patients may benefit from overnight continuous bladder drainage, which has been shown to reduce hydronephrosis and stabilize or improve renal function in most cases. In children with a neuropathic bladder, detrusor-sphincter-dyssynergia is the most likely cause for upper tract deterioration due to secondary vesicoureteral reflux, hydronephrosis, and recurrent urinary tract infection (UTI). Pharmacologic bladder management and frequent intermittent catheterization are necessary. In some cases, augmentation cystoplasty is recommended; however, this procedure has many long-term risks, including UTI, metabolic acidosis, bladder calculi, spontaneous perforation, and malignancy. Nearly half of children with prune belly syndrome require renal replacement therapy. Hinman syndrome is a rare condition with severe detrusor-sphincter discoordination that results in urinary incontinence, encopresis, poor bladder emptying, and UTI, often resulting in renal impairment. Children undergoing evaluation for renal transplantation need a thorough evaluation of the lower urinary tract, typically including a voiding cystourethrogram and urodynamic studies.
Here, we report a patient with renal failure and bladder dysfunction whose symptoms improved following renal transplantation. A 23-year-old woman underwent renal transplantation for renal failure as a result of dysplastic kidneys. Urodynamic evaluation prior to transplantation showed decreased bladder sensation and increased bladder capacity, probably because of congenital polyuria. One week after surgery, dry mouth disappeared, and urine volume normalized gradually. Urodynamic studies performed 3 and 10 months after transplantation showed improvement of bladder function, and the frequency/volume charts of urination also showed normalization of urine volume. Renal and bladder functions were almost normal 45 months after transplantation. Bladder dysfunction involves numerous factors, but the primary cause was probably congenital polyuria in the present case. This case suggested that blood purification and correction of urine volume by renal transplantation might lead to restoration of normal bladder function. abstract_id: PUBMED:28229802 Renal Transplant in Patients With Abnormal Bladder: Impact of Causes on Graft Function and Survival. Objectives: Successful kidney transplant depends partly on the normal physiologic functioning of the bladder, which involves effective urine storage and emptying. The bladder may become abnormal owing to various urologic and neuropathic disorders. Patients with abnormal bladders need careful management before and after transplant. In this study, we aimed to determine the outcomes of renal transplants in relation to various causes of abnormal bladder. Materials And Methods: We conducted a retrospective review of 25 patients with abnormal bladder who received 30 renal transplants between 1990 and 2014. The patients were divided into neurologic and urologic groups based on the causes of abnormal bladder. Patient demographics, graft function, survival, and postoperative complications were compared. 
Results: The most common urologic cause was posterior urethral valve (14 patients), while the most common neurologic cause was spina bifida (6 patients). There was no statistically significant difference in graft survival at 1, 3, and 5 years between patients with neurologic and urologic causes of abnormal bladder as well as at long-term follow-up. However, the mean estimated glomerular filtration rate at 1, 3, and 5 years was higher among patients with neurologic causes than in those with urologic causes, although the difference was statistically significant only at 1 year (61 ± 34 vs 37 ± 19 mL/min; P = .025). Stone formation was reported only in patients whose abnormal bladder had neurologic causes, and no incidence was reported in patients with urologic causes (20% vs 0%; P = .038). The incidence of other postoperative complications did not differ significantly between the 2 groups. Conclusions: With careful evaluation and proper preoperative correction of abnormal bladder dysfunction and optimization of the emptying and storage functions of the bladder, the causes of abnormal bladder did not appear to impact graft function and survival or overall rate of postoperative complications. abstract_id: PUBMED:15897701 Is bladder cycling useful in the urodynamic evaluation previous to renal transplantation? Objectives: To evaluate the usefulness of bladder cycling (BC) in differentiating between urodynamic abnormalities due to reversible bladder dysfunction and those due to preexisting lower urinary tract (LUT) abnormalities, and to determine if BC increases the diagnostic yield of urodynamic investigations (UIs) in these patients. Methods: Eleven patients with oligoanuria (<300 ml/day) were evaluated with complete UIs. All cases showed low compliance (mean 6 ml/cm H2O) and low cystometric capacity (mean 146 ml) and were placed in a BC program.
The indications for UIs were LUT symptoms in 2 patients, LUT abnormalities (myelodysplasia or urethral valves) in 3 patients, or the appearance of an extremely small bladder on cystogram (6 patients). BC was performed through a hypogastric catheter in an outpatient setting. UIs were repeated afterwards. Results: After BC, 5 patients (45%) continued to have low compliance (<8 ml/cm H2O) and 4 patients (36%) had normal UIs. In 2 cases (18%), a previously unrecognized obstruction was diagnosed. No patient with LUT symptoms or abnormalities had a normal UI after BC. Conversely, 4 of 6 patients in whom BC was indicated due to a small bladder on cystogram had normal UIs after BC. Conclusions: BC is useful in explaining whether high bladder pressure is due to dysfunction or to preexisting disorders, and thus indicating the proper surgery for correction. Suprapubic BC significantly increases bladder capacity, thus increasing the diagnostic yield of the UI. abstract_id: PUBMED:33134087 Kidney transplantation with vesicostomy in small-bladder capacity neurogenic bladder dysfunction patient: A case report. Spinal cord injury often results in neurogenic bladder condition and eventually lead to an end-stage renal disease requiring kidney transplantation. However, transplantation in abnormal bladder carries special considerations. We report a case of an adult male with end-stage chronic kidney disease and small bladder capacity after having spinal cord injury. The evaluation of videourodynamic showed reduced compliance and detrusor overactivity during filling phase. Kidney transplantation and vesicostomy was performed. Eighteen months follow-up after surgery showed that kidney function could be maintained. The prevention of increasing bladder pressure and UTI should be monitored to prevent the damage of the graft kidney. abstract_id: PUBMED:20546054 Bladder function of end-stage renal disease patients.
Objectives: To evaluate the bladder function of end-stage renal disease (ESRD) patients by using video H2O cystometry (CM) before renal transplantation (RTx). Methods: A total of 92 patients (57 men and 35 women; mean age 45.4 years; mean period of renal replacement therapy (RRT) 60.2 months) were divided into two groups based on the presence of detrusor overactivity (DO). In each group the following parameters were recorded during CM: first sensation (FS), maximum cystometric capacity (MC) and compliance. The effect of the duration of RRT on the H2O cystometric results was also assessed. Results: There were 25 patients (27.2%) who had DO before their operation, this ratio being higher than in the general Japanese population. Subjects with DO were observed independently for the duration of RRT and there was no difference in the prevalence of DO based on this variable. In contrast, 50 patients kept their bladder compliance in the normal range. The values of FS, MC and compliance decreased in proportion to the duration of RRT. Conclusions: Patients undergoing RTx frequently present with bladder dysfunction in terms of low capacity, low compliance and also DO. This bladder dysfunction has a negative impact on patients' quality of life. Thus, bladder function and its related symptoms require specific attention during the management of transplant patients.
Sixty-eight recipients of renal transplants, 5-20 years of age and 1-15 years after transplantation, were evaluated for their bladder function with a questionnaire, uroflowmetry and bladder ultrasound, and for renal function (glomerular filtration rate) by measuring clearance of inulin or iohexol. Forty-nine patients (72%) had some type of abnormality of bladder function. Abnormal bladder capacity was found in 26%, abnormal urinary flow in 50% and residual urine in 32% of the patients. There was no significant difference in bladder or renal function in children with urinary-tract malformations compared with those with normal urinary tract. Furthermore, there was no significant difference in renal function in patients with bladder dysfunction compared with those without. The incidence of bladder dysfunction is high in children and adolescents after renal transplantation, but the clinical significance of this finding and whether there is a correlation between bladder dysfunction and UTI in these patients need to be clarified further. Answer: The necessity of bladder function evaluation before renal transplantation in patients with end-stage renal disease from nonurologic causes has been a subject of investigation. A study by PUBMED:24210566 found that significant improvement in cystometry and pressure flow study (PFS) parameters was observed in patients without urologic disease 6 months after transplantation. This improvement was associated with the recovery of the glomerular filtration rate and urine output by the renal graft. Patients with less than 200 mL of urine output per day before transplantation had significantly worse parameters, suggesting that bladder function evaluation could be particularly relevant in such cases. Another study focusing on children with lower urinary tract dysfunction (LUTD) who received renal transplants (PUBMED:32472980) found that improvement in bladder compliance was observed in a majority of patients post-transplantation. 
However, some patients experienced worsening of mean flow pressure (MFP) and mean postvoid residual (PVR) volume after renal transplantation, indicating that bladder function can be affected differently depending on the underlying pathology. In patients with "valve bladder," a condition that often results in small bladder and low compliance, renal transplantation was performed without preliminary bladder enlargement (PUBMED:20534225). The study reported that after transplantation, bladder capacity and compliance improved, and no significant difference was found between patients who had bladder augmentation and those who did not, suggesting that preemptive bladder augmentation may not always be necessary. The abstract from PUBMED:21896378 emphasizes the importance of regular evaluation of bladder function in children with chronic kidney disease (CKD) and significant urological abnormalities, as abnormal bladder function can negatively impact renal function. Overall, these studies suggest that while bladder function can improve after renal transplantation, preoperative bladder function evaluation may be beneficial in certain patient populations, such as those with very low urine output or specific underlying conditions like LUTD. It can help in identifying patients who may need additional interventions or monitoring post-transplantation to ensure the best outcomes for both the patient and the graft (PUBMED:24210566, PUBMED:32472980, PUBMED:20534225, PUBMED:21896378).
Instruction: Do personalised e-mail invitations increase the response rates of breast cancer survivors invited to participate in a web-based behaviour change intervention? Abstracts: abstract_id: PUBMED:26286486 Do personalised e-mail invitations increase the response rates of breast cancer survivors invited to participate in a web-based behaviour change intervention? A quasi-randomised 2-arm controlled trial. Background: Previous research has shown that the personalisation of study invitations improves response rates in survey-based research. To examine if this finding extends to experimental studies, we examined the impact of personalised study invitation e-mails on the response rates of potentially eligible breast cancer survivors for participation in a 6 month randomised controlled trial testing the efficacy of a physical activity intervention. Methods: Potential participants (n = 344) were sent either a personalised email or a generic email. Results: Those sent the personalised email were 1.5 times (95 % CI = 1.18-1.93) more likely to respond than those sent the generic email. Conclusion: These findings suggest that personalisation may be a useful and potentially powerful tool that can be utilised when recruiting participants into experimental studies in order to boost response rates. abstract_id: PUBMED:29270056 An evaluation of the behaviour-change techniques used on Canadian cancer centre Web sites to support physical activity behaviour for breast cancer survivors. Background: Cancer centre Web sites can be a useful tool for distributing information about the benefits of physical activity for breast cancer (bca) survivors, and they hold potential for supporting health behaviour change. However, the extent to which cancer centre Web sites use evidence-based behaviour change techniques to foster physical activity behaviour among bca survivors is currently unknown. 
The aim of our study was to evaluate the presentation of behaviour-change techniques on Canadian cancer centre Web sites to promote physical activity behaviour for bca survivors. Methods: All Canadian cancer centre Web sites (n = 39) were evaluated by two raters using the Coventry, Aberdeen, and London-Refined (calo-re) taxonomy of behaviour change techniques and the eEurope 2002 Quality Criteria for Health Related Websites. Descriptive statistics were calculated. Results: The most common behaviour change techniques used on Web sites were providing information about consequences in general (80%), suggesting goal-setting behaviour (56%), and planning social support or social change (46%). Overall, Canadian cancer centre Web sites presented an average of M = 6.31 behaviour change techniques (of 40 that were coded) to help bca survivors increase their physical activity behaviour. Evidence of quality factors ranged from 90% (sites that provided evidence of readability) to 0% (sites that provided an editorial policy). Conclusions: Our results provide preliminary evidence that, of 40 behaviour-change techniques that were coded, fewer than 20% were used to promote physical activity behaviour to bca survivors on cancer centre Web sites, and that the most effective techniques were inconsistently used. On cancer centre Web sites, health promotion specialists could focus on emphasizing knowledge mobilization efforts using available research into behaviour-change techniques to help bca survivors increase their physical activity. abstract_id: PUBMED:26887585 A randomized controlled trial of two mail-based lifestyle interventions for breast cancer survivors. Purpose: Although breast cancer survivors' lifestyle choices affect their subsequent health, a majority do not engage in healthy behaviors. 
Because treatment end is a "teachable moment" for potentially altering lifestyle change for breast cancer survivors, we developed and tested two mail-based interventions for women who recently completed primary treatment. Methods: One hundred seventy-three survivors were randomly assigned to (1) Targeting the Teachable Moment (TTMI, n = 57), (2) Standard Lifestyle Management (SLM, n = 58), or (3) usual care (UC, n = 58) control group. Participants who were assigned to TTMI and SLM received relevant treatment materials biweekly for 4 months. Participants were assessed at baseline (T1, before randomization), post-treatment (T2, 4 months), and follow-up (T3, 7 months). Fruit and vegetable (F/V) intake, fat intake, and moderate-to-vigorous physical activity (MVPA) were assessed. Results: Results showed promise for these mail-based interventions for changes in health behaviors: Survivors in TTMI (+.47) and SLM (+.45) reported increased F/V intake, whereas those in UC (-.1) reported decreased F/V intake from T1 to T2. Changes in minutes of MVPA from T1 to T2 were higher in SLM than UC and marginally higher in TTMI than UC. However, these differences were due to decreased MVPA in UC rather than increased MVPA in the intervention groups. There were no group differences regarding fat intake. Survivors reported high satisfaction and preference for mail-based interventions, supporting feasibility. Conclusions: Mail-based lifestyle interventions for breast cancer survivors may benefit F/V intake and physical activity. Further testing and optimizing of these interventions is warranted. abstract_id: PUBMED:36231542 Web-Based Personalized Intervention to Improve Quality of Life and Self-Efficacy of Long-Term Breast Cancer Survivors: Study Protocol for a Randomized Controlled Trial. Long-term breast cancer survivors (>5 years free of disease) may suffer late sequelae of cancer that impact on their quality of life.
The use of telehealth for cancer care is recommended but little is known about the effectiveness of digital interventions for long-term cancer survivors. This study aims to evaluate the effectiveness of a web-based personalized intervention based on artificial intelligence instead of usual primary health care to improve the quality of life of long-term survivors of breast cancer and self-efficacy for the management of late sequelae. A randomized controlled trial will be conducted. The sample will consist of long-term breast cancer survivors recruited from primary health centers. Women will be randomly assigned to the intervention group to receive a web-based personalized intervention or to the control group to receive standard primary health care by nurses. Data on quality of life of cancer survivors and self-efficacy for the management of late sequelae of cancer will be collected and assessed at preintervention, and at 3, 6, and 9 months. It is expected that, at the end of the programme, the experimental group will have improved quality of life and improved self-efficacy for the management of late sequelae of cancer. abstract_id: PUBMED:34989962 A systematic review of behaviour change techniques used in interventions to increase physical activity among breast cancer survivors. Background: Despite evidence that physical activity (PA) can help reduce recurrence and mortality, many breast cancer survivors are less active than recommended levels. The aim of this systematic review is to advance our understanding of which behaviour change techniques (BCTs) have been used in interventions promoting breast cancer survivors' PA and to evaluate their potential to increase PA. Methods: A systematic search was conducted in five databases (Medline; PsycInfo; Embase; CINAHL and Scopus) for studies published between 2005 and 2019. Following a rigorous screening process, 27 studies were retained. 
These were reviewed and analysed for quality, coded for BCTs (k = 0.65) and interventions categorised according to their potential to increase PA using an established methodology. Results: The majority of studies were moderate quality (64%). Demonstration on how to perform the behaviour was the most commonly used BCT (n = 23). Adding objects to the environment, (pedometer or accelerometer) was the BCT with the highest potential to increase PA. This was followed by, goal setting and self-monitoring of behaviour. A theory-based approach to evaluation was used in only 59% (n = 16) of the studies. Conclusions: The results of this review inform which BCTs have the potential to increase PA for breast cancer survivors and inform intervention development. Future research, is encouraged to properly report intervention procedures around dose and frequency of intervention components to allow for review and replication. abstract_id: PUBMED:27498099 How do different delivery schedules of tailored web-based physical activity advice for breast cancer survivors influence intervention use and efficacy? Purpose: The purpose of the study is to investigate the impact of differing delivery schedules of computer-tailored physical activity modules on engagement and physical activity behaviour change in a web-based intervention targeting breast cancer survivors. Methods: Insufficiently active breast cancer survivors (n = 492) were randomly assigned to receive one of the following intervention schedules over 12 weeks: a three-module intervention delivered monthly, a three-module intervention delivered weekly or a single module intervention. Engagement with the website (number of logins, time on site, modules viewed, action plans completed) was measured using tracking software. Other outcomes (website acceptability, physical activity behaviour) were assessed using online surveys. 
Physical activity outcomes were analysed using regression models for both study completers and when applying intention-to-treat (using multiple imputation). Results: Completers allocated to the monthly module group rated the intervention higher (b = 2.2, 95 % CI = 0.02-4.53) on acceptability and had higher levels of resistance-training (IRR = 1.88, 95 % CI = 1.16-3.04) than those in the single module group. When accounting for missing data, these differences were no longer significant. The completion of at least two action plans was higher among those allocated to the monthly module group compared to those in the weekly module group (53 vs 40 %, p = 0.02); though the completion of at least two modules was higher in the weekly module group compared to the monthly module group (60 vs 46 %; p = 0.01). There were no other significant between-group differences observed. Conclusion: This study provides preliminary evidence that web-based computer-tailored interventions can be used to increase physical activity among breast cancer survivors. Further, there were some outcome differences based on how the tailored modules were delivered, with the most favourable outcomes observed in the monthly delivery group. Implications For Cancer Survivors: This study will be useful for informing the design of future web-based interventions targeting breast cancer survivors. abstract_id: PUBMED:23958584 Usage of a generic web-based self-management intervention for breast cancer survivors: substudy analysis of the BREATH trial. Background: Generic fully automated Web-based self-management interventions are upcoming, for example, for the growing number of breast cancer survivors. It is hypothesized that the use of these interventions is more individualized and that users apply a large amount of self-tailoring. However, technical usage evaluations of these types of interventions are scarce and practical guidelines are lacking.
Objective: To gain insight into meaningful usage parameters to evaluate the use of generic fully automated Web-based interventions by assessing how breast cancer survivors use a generic self-management website. Final aim is to propose practical recommendations for researchers and information and communication technology (ICT) professionals who aim to design and evaluate the use of similar Web-based interventions. Methods: The BREAst cancer ehealTH (BREATH) intervention is a generic unguided fully automated website with stepwise weekly access and a fixed 4-month structure containing 104 intervention ingredients (ie, texts, tasks, tests, videos). By monitoring https-server requests, technical usage statistics were recorded for the intervention group of the randomized controlled trial. Observed usage was analyzed by measures of frequency, duration, and activity. Intervention adherence was defined as continuous usage, or the proportion of participants who started using the intervention and continued to log in during all four phases. By comparing observed to minimal intended usage (frequency and activity), different user groups were defined. Results: Usage statistics for 4 months were collected from 70 breast cancer survivors (mean age 50.9 years). Frequency of logins/person ranged from 0 to 45, total duration/person from 0 to 2324 minutes (38.7 hours), and activity from opening none to all intervention ingredients. 31 participants continued logging in to all four phases, resulting in an intervention adherence rate of 44.3% (95% CI 33.2-55.9). Nine nonusers (13%), 30 low users (43%), and 31 high users (44%) were defined. Low and high users differed significantly on frequency (P<.001), total duration (P<.001), session duration (P=.009), and activity (P<.001). High users logged in an average of 21 times, had a mean session duration of 33 minutes, and opened on average 91% of all ingredients.
Signing the self-help contract (P<.001), reporting usefulness of ingredients (P=.003), overall satisfaction (P=.028), and user friendliness evaluation (P=.003) were higher in high users. User groups did not differ on age, education, and baseline distress. Conclusions: By reporting the usage of a self-management website for breast cancer survivors, the present study gained first insight into the design of usage evaluations of generic fully automated Web-based interventions. It is recommended to (1) incorporate usage statistics that reflect the amount of self-tailoring applied by users, (2) combine technical usage statistics with self-reported usefulness, and (3) use qualitative measures. Also, (4) a pilot usage evaluation should be a fixed step in the development process of novel Web-based interventions, and (5) it is essential for researchers to gain insight into the rationale of recorded and nonrecorded usage statistics. Trial Registration: Netherlands Trial Register (NTR): 2935; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=2935 (Archived by WebCite at http://www.webcitation.org/6IkX1ADEV). abstract_id: PUBMED:29044832 Understanding acceptability of and engagement with Web-based interventions aiming to improve quality of life in cancer survivors: A synthesis of current research. Objective: This review sought to summarize existing knowledge to inform the development of an online intervention that aims to improve quality of life after cancer treatment. Methods: To inform our intervention, we searched for studies relating to Web-based interventions designed to improve quality of life in adults who have completed primary treatment for breast, prostate, and colorectal cancer (as these are 3 of the most common cancers and impact a large number of cancer survivors).
We included a variety of study designs (qualitative research, feasibility/pilot trials, randomized trials, and process evaluations) and extracted all available information regarding intervention characteristics, experiences, and outcomes. Data were synthesized as textual (qualitative) data and analyzed by using thematic analysis. Results: Fifty-seven full text articles were assessed for eligibility, and 16 papers describing 9 interventions were analyzed. Our findings suggest that cancer survivors value interventions that offer content specific to their changing needs and are delivered at the right stage of the cancer trajectory. Social networking features do not always provide added benefit, and behavior change techniques need to be implemented carefully to avoid potential negative consequences for some users. Conclusions: Future work should aim to identify appropriate strategies for promoting health behavior change, as well as the optimal stage of cancer survivorship to facilitate intervention delivery. Clinical Implications: The development of Web-based interventions for cancer survivors requires further exploration to better understand how interventions can be carefully designed to match this group's unique needs and capabilities. User involvement during development may help to ensure that interventions are accessible, perceived as useful, and appropriate for challenges faced at different stages of the cancer survivorship trajectory. abstract_id: PUBMED:34289828 A randomised feasibility trial of an employer-based intervention for enhancing successful return to work of cancer survivors (MiLES intervention). Background: Employers express a need for support during sickness absence and return to work (RTW) of cancer survivors. Therefore, a web-based intervention (MiLES) targeted at employers with the objective of enhancing cancer survivors' successful RTW has been developed. 
This study aimed to assess feasibility of a future definitive randomised controlled trial (RCT) on the effectiveness of the MiLES intervention. Also preliminary results on the effectiveness of the MiLES intervention were obtained. Methods: A randomised feasibility trial of 6 months was undertaken with cancer survivors aged 18-63 years, diagnosed with cancer < 2 years earlier, currently in paid employment, and sick-listed < 1 year. Participants were randomised to an intervention group, with their employer receiving the MiLES intervention, or to a waiting-list control group (2:1). Feasibility of a future definitive RCT was determined on the basis of predefined criteria related to method and protocol-related uncertainties (e.g. reach, retention, appropriateness). The primary effect measure (i.e. successful RTW) and secondary effect measures (e.g. quality of working life) were assessed at baseline and 3 and 6 months thereafter. Results: Thirty-five cancer survivors were included via medical specialists (4% of the initially invited group) and open invitations, and thereafter randomised to the intervention (n = 24) or control group (n = 11). Most participants were female (97%) with breast cancer (80%) and a permanent employment contract (94%). All predefined criteria for feasibility of a future definitive RCT were achieved, except that concerning the study's reach (90 participants). After 6 months, 92% of the intervention group and 100% of the control group returned to work (RR: 0.92, 95% CI: 0.81-1.03); no differences were found with regard to secondary effect measures. Conclusions: With the current design a future definitive RCT on the effectiveness of the MiLES intervention on successful RTW of cancer survivors is not feasible, since recruitment of survivors fell short of the predefined minimum for feasibility. There was selection bias towards survivors at low risk of adverse work outcomes, which reduced generalisability of the outcomes.
An alternative study design is needed to study effectiveness of the MiLES intervention. Trial Registration: The study has been registered in the Dutch Trial Register ( NL6758/NTR7627 ). abstract_id: PUBMED:37149509 Identifying the effective behaviour change techniques in nutrition and physical activity interventions for the treatment of overweight/obesity in post-treatment breast cancer survivors: a systematic review. Purpose: Updated evidence for the treatment of obesity in cancer survivors includes behavioural lifestyle interventions underpinning at least one theoretical framework. The aim of this systematic review was to assess the effectiveness of theory-based lifestyle interventions for the treatment of overweight/obesity in breast cancer survivors and to report effective behavioural change techniques (BCTs) and components used in these interventions. Methods: Four databases were searched for RCTs published between database inception and July 2022. The search strategy included MeSH terms and text words, using the PICO-framework to guide the eligibility criteria. The PRISMA guidelines were followed. Risk-of-bias, TIDier Checklist for interventions' content, and the extent of behaviour change theories and techniques application were assessed. To evaluate the effectiveness of interventions, trials were categorised as "very," "quite," or "non" promising according to their potential to reduce body weight, and BCTs promise ratios were calculated to assess the potential of BCTs within interventions to decrease body weight. Results: Eleven RCTs met the inclusion criteria. Seven trials were classified as "very", three as "quite" and one study was "non" promising. Studies' size, design, and intervention strategies varied greatly, but the weight-loss goal in all studies was ≥ 5% of the initial body weight through a 500-1000 kcal/day energy deficit and a gradually increased exercise goal of ≥ 30 min/day. Social Cognitive Theory was the most commonly used theory (n = 10). 
BCTs ranged from 10 to 23 in the interventions, but all trials included behaviour goal setting, self-monitoring, instructions on the behaviour, and credible source. The risk-of-bias was "moderate" in eight studies and "high" in three. Conclusion: The present systematic review identified the components of theory-based nutrition and physical activity behaviour change interventions that may be beneficial for the treatment of overweight/obesity in breast cancer survivors. The strategies mentioned, in addition to reported behavioural models and BCTs, should be considered when developing weight-loss interventions for breast cancer survivors. Answer: Yes, personalized email invitations do increase the response rates of breast cancer survivors invited to participate in a web-based behavior change intervention. A study found that those sent the personalized email were 1.5 times more likely to respond than those sent the generic email (PUBMED:26286486).
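The headline result of the e-mail study (a risk ratio of 1.5, 95% CI 1.18-1.93, from PUBMED:26286486) can be reproduced from a 2x2 response table. The sketch below uses hypothetical per-arm counts, since the abstract reports only the total of 344 invitees; the log risk-ratio Wald interval is the standard construction.

```python
import math

# Hypothetical counts for illustration only -- the abstract reports
# n = 344 invitees but not the per-arm response counts.
resp_pers, n_pers = 95, 172   # responders / invitees, personalised e-mail
resp_gen,  n_gen  = 63, 172   # responders / invitees, generic e-mail

p1 = resp_pers / n_pers
p2 = resp_gen / n_gen
rr = p1 / p2                  # risk ratio (relative "likelihood to respond")

# 95% Wald confidence interval on the log risk-ratio scale
se = math.sqrt((1 - p1) / resp_pers + (1 - p2) / resp_gen)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
```

With these assumed counts the interval comes out close to the published one; any counts with the same response ratio give the same point estimate, but the interval width depends on the absolute numbers.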
Instruction: Are different stoichiometries feasible for complexes between lymphotoxin-alpha and tumor necrosis factor receptor 1? Abstracts: abstract_id: PUBMED:22568977 Are different stoichiometries feasible for complexes between lymphotoxin-alpha and tumor necrosis factor receptor 1? Background: Tumor necrosis factors, TNF and lymphotoxin-α (LT), are cytokines that bind to two receptors, TNFR1 and TNFR2 (TNF-receptor 1 and 2) to trigger their signaling cascades. The exact mechanism of ligand-induced receptor activation is still unclear. It is generally assumed that three receptors bind to the homotrimeric ligand to trigger a signaling event. Recent evidence, though, has raised doubts if the ligand:receptor stoichiometry should indeed be 3:3 for ligand-induced cellular response. We used molecular dynamics simulations, elastic network models, as well as MM/PBSA to analyze this question. Results: Applying MM/PBSA methodology to different stoichiometric complexes of human LT-(TNFR1)n=1,2,3 the free energy of binding in these complexes has been estimated by single-trajectory and separate-trajectory methods. Simulation studies rationalized the favorable binding energy in the LT-(TNFR1)1 complex, as evaluated from single-trajectory analysis to be an outcome of the interaction of cysteine-rich domain 4 (CRD4) and the ligand. Elastic network models (ENMs) help to associate the difference in the global fluctuation of the receptors in these complexes. Functionally relevant transformation associated with these complexes reveal the difference in the dynamics of the receptor when free and in complex with LT. Conclusions: MM/PBSA predicts complexes with a ligand-receptor molar ratio of 3:1 and 3:2 to be energetically favorable. The high affinity associated with LT-(TNFR1)1 is due to the interaction between the CRD4 domain with LT. The global dynamics ascertained from ENMs have highlighted the differential dynamics of the receptor in different states. 
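The elastic network models used in PUBMED:22568977 to compare receptor fluctuations can be illustrated with a minimal Gaussian Network Model. This is a sketch, not the study's actual pipeline: the 7 Å cutoff, the synthetic helical coordinates, and the choice of a GNM (rather than an anisotropic network model) are illustrative assumptions; a real analysis would load C-alpha coordinates from an LT-TNFR1 crystal structure.

```python
import numpy as np

def gnm_fluctuations(coords, cutoff=7.0):
    """Mean-square fluctuation profile from a Gaussian Network Model.

    coords: (N, 3) array of C-alpha positions. Residue pairs closer than
    the cutoff are coupled in the Kirchhoff (connectivity) matrix, and
    fluctuations come from the non-zero normal modes.
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)   # -1 for contacting pairs
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))  # contact degrees
    w, v = np.linalg.eigh(kirchhoff)
    # Skip the zero (rigid-body) mode; weight each mode by 1/eigenvalue,
    # so soft low-frequency modes dominate the fluctuation profile.
    return (v[:, 1:] ** 2 / w[1:]).sum(axis=1)

# Toy helix-like chain standing in for a receptor chain.
t = np.linspace(0, 4 * np.pi, 40)
coords = np.stack([2.3 * np.cos(t), 2.3 * np.sin(t), 1.5 * t], axis=1)
msf = gnm_fluctuations(coords)
print("most flexible residue:", int(np.argmax(msf)))
```

Residues with few contacts (here, the chain ends) show the largest mean-square fluctuations; comparing such profiles for the free receptor versus the complex is the kind of global-mobility contrast the abstract's ENM analysis draws.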
abstract_id: PUBMED:8869635 Model complexes of tumor necrosis factor-alpha with receptors R1 and R2. The biological activities of tumor necrosis factor-alpha (TNF-alpha) are mediated by two different receptors, TNFR1 and TNFR2. To analyze the receptor binding site(s) of TNF-alpha, molecular models have been built of the complexes of TNF-alpha with the extracellular regions of receptors R1 and R2, based on the known crystal structures of TNF-alpha and lymphotoxin bound to R1. The model structure of R2 from residues 18-160 was built by analogy to the crystal structure of R1 in complex with lymphotoxin. The amino acid sequences of R1 and R2 show 27.5% identity over this region and were aligned with five insertions and three deletions. There are 18 conserved cysteines that form disulfides. R2 has lost one pair of cysteines compared with R1, but two new cysteines were modeled as forming a new disulfide bond. Both symmetric and asymmetric trimers of TNF-alpha were used to model the complexes with TNFR1 and R2. An analysis of differences in the model complexes showed good agreement with data on the differential binding of TNF mutants to its two receptors. abstract_id: PUBMED:23088250 Binding efficiency of protein-protein complexes. We examine the relationship between binding affinity and interface size for reversible protein-protein interactions (PPIs), using cytokines from the tumor necrosis factor (TNF) superfamily and their receptors as a test case. Using surface plasmon resonance, we measured single-site binding affinities for binding of the large receptor TNFR1 to its ligands TNFα (K(D) = 1.4 ± 0.4 nM) and lymphotoxin-α (K(D) = 50 ± 10 nM), and also for binding of the small receptor Fn14 to TWEAK (K(D) = 70 ± 10 nM). We additionally assembled data for all other TNF-TNFR family complexes for which reliable single-site binding affinities have been reported. 
We used these values to calculate the binding efficiencies, defined as binding energy per square angstrom of surface area buried at the contact interface, for nine of these complexes for which cocrystal structures are available, and compared the results to those for a set of 144 protein-protein complexes with published affinities. The results show that the most efficient PPI complexes generate ~20 cal mol(-1) Å(-2) of binding energy. A minimal contact area of ~500 Å(2) is required for a stable complex, required to generate sufficient interaction energy to pay the entropic cost of colocalizing two proteins from 1 M solution. The most compact and efficient TNF-TNFR complex was the BAFF-BR3 complex, which achieved ~80% of the maximal achievable binding efficiency. Other small receptors also gave high binding efficiencies, while the larger receptors generated only 44-49% of this limit despite interacting primarily through just a single small domain. The results provide new insight into how much binding energy can be generated by a PPI interface of a given size, and establish a quantitative method for predicting how large a natural or engineered contact interface must be to achieve a given level of binding affinity. abstract_id: PUBMED:9317127 Characterization of lymphotoxin-alpha beta complexes on the surface of mouse lymphocytes. The lymphotoxin-alpha beta complex (LT alpha beta) is found on the surface of activated lymphocytes and binds to a specific receptor called the LT beta receptor (LT beta R). In the mouse, signaling through this pathway is important for lymph node development and splenic organization, yet the biochemical properties of murine LT alpha and LT beta are essentially unknown. Here we have used soluble receptor-Ig forms of LT beta R and TNF-R55 and mAbs specific for murine LT alpha, LT beta, and LT beta R to characterize the appearance of surface LT alpha beta complexes and LT beta R on several common murine cell lines. 
Cells that bound LT beta R also bound anti-LT alpha and anti-LT beta mAbs in a FACS analysis. The ability of these reagents to discriminate between surface TNF and LT was verified by analysis of surface TNF-positive, LPS-activated murine RAW 264.7 monocytic cells. Primary mouse leukocytes from spleen, thymus, lymph node, and peritoneum were activated in vitro, and CD4+ and CD8+ T cells as well as B cells expressed surface LT ligand but not the LT beta R. Conversely, elicited peritoneal monocytes/macrophages were surface LT negative yet LT beta R positive. This study shows that on mononuclear cells, surface LT complexes and receptor are expressed similarly in mice and man, and the tools described herein form the foundation for study of the functional roles of the LT system in the mouse. abstract_id: PUBMED:12853971 NIK is a component of the EGF/heregulin receptor signaling complexes. Nuclear factor kappaB-inducing kinase (NIK) is a member of the MAP kinase kinase kinase family that was first identified as a component of the TNF-R1-induced NF-kappaB activation pathway (TNF, tumor necrosis factor; nuclear factor kappaB, NF-kappaB). Gene knockout study, however, suggests that NIK is dispensable for TNF-R1- but required for lymphotoxin-beta receptor-induced NF-kappaB activation. A NIK kinase inactive mutant is a potent inhibitor of NF-kappaB activation triggered by various stimuli, suggesting that NIK is involved in a broad range of NF-kappaB activation pathways. To unambiguously identify signaling pathways that NIK participates in, we screened antibody arrays for proteins that are associated with NIK. This effort identified ErbB4, one of the EGF/heregulin receptors, and Grb7, an adapter protein associated with ErbB4 (ErbB, epidermal growth factor receptor family protein; EGF, epidermal growth factor; Grb, growth factor receptor bound). Coimmunoprecipitation experiments demonstrated that NIK interacted with Grb7, as well as Grb10 and Grb14, but not Grb2. 
Domain mapping experiments indicated that the central GM domain of Grb7 was sufficient for its interaction with NIK. Coimmunoprecipitation experiments also indicated that Grb7 and NIK could be simultaneously recruited into signaling complexes of all known EGF/heregulin receptors, including EGFR, ErbB2, ErbB3, and ErbB4. In reporter gene assays, NIK could potentiate Grb7, ErbB2/ErbB4, and EGF-induced NF-kappaB activation. A NIK kinase inactive mutant could block ErbB2/ErbB4 and EGF-induced NF-kappaB activation. Moreover, EGF/heregulin receptors activated NF-kappaB in wild-type, but not NIK-/- embryonic fibroblasts. Our findings suggest that NIK is a component of the EGF/heregulin receptor signaling complexes and involved in NF-kappaB activation triggered by these receptors. abstract_id: PUBMED:25007095 Different tumor necrosis factor α antagonists have different effects on host susceptibility to disseminated and oropharyngeal candidiasis in mice. Tumor necrosis factor α is important for the host defense against intracellular pathogens. We tested the effect of mouse analogs of human TNF-α antagonists, the rat anti-mouse TNF-α monoclonal antibody (XT22) and the soluble mouse 75 kDa TNF-α receptor fused to the Fc portion of mouse IgG1 (p75-Fc), on the susceptibility of mice to hematogenously disseminated candidiasis (HDC) and oropharyngeal candidiasis (OPC). Both XT22 and p75-Fc significantly reduced mice survival, increased kidney fungal burden, and reduced leukocyte recruitment during HDC. However, only XT22 significantly increased the oral fungal burden and reduced leukocyte recruitment during OPC. This result suggests that XT22 and p75-Fc affect host susceptibility to different types of Candida albicans infections by different inhibitory mechanisms. abstract_id: PUBMED:18842714 Complexes of human papillomavirus type 16 E6 proteins form pseudo-death-inducing signaling complex structures during tumor necrosis factor-mediated apoptosis. 
High-risk strains of human papillomavirus (HPV) such as HPV type 16 (HPV16) and HPV18 are causative agents of most human cervical carcinomas. E6, one of the oncogenes encoded by HPV16, possesses a number of biological and transforming functions. We have previously shown that the binding of E6 to host apoptotic proteins such as tumor necrosis factor (TNF) R1, the adaptor protein FADD, and procaspase 8 results in a significant modification of the normal flow of apoptotic events. For example, E6 can bind to and accelerate the degradation of FADD. In addition, full-length E6 binds to the TNF R1 death domain and can also bind to and accelerate the degradation of procaspase 8. In contrast, the binding of small splice isoforms known as E6* results in the stabilization of procaspase 8. In this report, we propose a model for the ability of HPV16 E6 to both sensitize and protect cells from TNF as well as to protect cells from Fas. We demonstrate that both the level of E6 expression and the ratio between full-length E6 and E6* are important factors in the modification of the host extrinsic apoptotic pathways and show that at high levels of E6 expression, the further sensitization of U2OS, NOK, and Ca Ski cells to TNF-mediated apoptosis is most likely due to the formation of a pseudo-death-inducing signaling complex structure that includes complexes of E6 proteins. abstract_id: PUBMED:30610617 Post-translational modification of the death receptor complex as a potential therapeutic target in cancer. Programmed cell death is critical to the physiological function of multi-cellular organisms, controlling development, immunity, inflammation, and cancer progression. Death receptor (DR)-mediated regulation of a protease functions as a second messenger to initiate a death signal cascade to induce apoptosis or necroptosis. 
Recently, it has become clear that post-translational modifications (PTMs) of signaling components in the DR complex are highly complex, temporally controlled, and tightly regulated, and play an important role in cell death signaling. This review focuses on the molecular mechanisms and pathophysiological consequences of PTMs on the formation of the DR signaling complex, especially with respect to tumor necrosis factor receptor 1 (TNFR1). Furthermore, characterization of the role of PTMs in spatially different TNFR1 complexes (complexes I and II), especially with respect to the role of ubiquitination and phosphorylation of receptor interacting protein 1 (RIP1) in programmed cell death in cancer cells, will be reviewed. By integrating recently gained insight of the functional importance of PTMs in complex I or II, this review discusses how the concerted action of PTMs results in life or death upon DR ligation. Finally, the emerging concept of a sequential cell death checkpoint by the PTMs of RIP1, which may reveal novel therapeutic opportunities for the treatment of some cancers, will be discussed. abstract_id: PUBMED:9207387 Pretreatment with a 55-kDa tumor necrosis factor receptor-immunoglobulin fusion protein attenuates activation of coagulation, but not of fibrinolysis, during lethal bacteremia in baboons. Baboons (Papio anubis) receiving a lethal intravenous infusion with live Escherichia coli were pretreated with either a 55-kDa tumor necrosis factor (TNF) receptor-IgG fusion protein (TNFR55:IgG) (n = 4, 4.6 mg/kg) or placebo (n = 4). Neutralization of TNF activity in TNFR55:IgG-treated animals was associated with a complete prevention of mortality and a strong attenuation of coagulation activation as reflected by the plasma concentrations of thrombin-antithrombin III complexes (P < .05).
Activation of fibrinolysis was not influenced by TNFR55:IgG (plasma tissue-type plasminogen activator and plasmin-alpha2-antiplasmin complexes), whereas TNFR55:IgG did inhibit the release of plasminogen activator inhibitor type I (P < .05). Furthermore, TNFR55:IgG inhibited neutrophil degranulation (plasma levels of elastase-alpha1-antitrypsin complexes, P < .05) and modestly reduced release of secretory phospholipase A2. These data suggest that endogenous TNF contributes to activation of coagulation, but not to stimulation of fibrinolysis, during severe bacteremia. abstract_id: PUBMED:15357952 Compartmentalization of TNF receptor 1 signaling: internalized TNF receptosomes as death signaling vesicles. The molecular regulation of the recruitment of initial signaling complexes at the TNF-R1 is poorly defined. We demonstrate here that within minutes internalized TNF-R1 (TNF receptosomes) recruits TRADD, FADD, and caspase-8 to establish the "death-inducing signaling complex" (DISC). In addition, we identified the TNF-R1 internalization domain (TRID) required for receptor endocytosis and provide evidence that TNF-R1 internalization, DISC formation, and apoptosis are inseparable events. Analyzing cell lines expressing an internalization-deficient receptor (TNF-R1 DeltaTRID) revealed that recruitment of RIP-1 and TRAF-2 to TNF-R1 occurred at the level of the plasma membrane. In contrast, aggregation of TRADD, FADD, and caspase-8 to establish the TNF-R1-associated DISC is critically dependent on receptor endocytosis. Furthermore, fusion of TNF receptosomes with trans-Golgi vesicles results in activation of acid sphingomyelinase and cathepsin D. Thus, TNF receptosomes establish the different TNF signaling pathways by compartmentalization of plasma membrane-derived endocytic vesicles harboring the TNF-R1-associated DISC. Answer: Yes, different stoichiometries are feasible for complexes between lymphotoxin-alpha (LT) and tumor necrosis factor receptor 1 (TNFR1).
Molecular dynamics simulations, elastic network models, and MM/PBSA (Molecular Mechanics/Poisson-Boltzmann Surface Area) methodologies have been used to analyze the binding free energy of different stoichiometric complexes of human LT-(TNFR1)n=1,2,3. These studies predict that complexes with a ligand-receptor molar ratio of 3:1 and 3:2 are energetically favorable. The high affinity associated with the LT-(TNFR1)1 complex is attributed to the interaction between the cysteine-rich domain 4 (CRD4) of the receptor and the ligand. Additionally, the global dynamics of the receptor, as determined from elastic network models, show differential dynamics when the receptor is free compared to when it is in complex with LT (PUBMED:22568977).
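The binding-efficiency figures in the first abstract above (a ceiling of ~20 cal mol(-1) Å(-2) and a minimal stable interface of ~500 Å(2)) imply a simple back-of-the-envelope conversion from buried surface area to a best-case dissociation constant via the standard relation ΔG = -RT ln(Kd). A minimal sketch of that arithmetic; the temperature of 298 K is an assumption, and the numbers are only those quoted in the abstract:

```python
import math

R = 1.987e-3  # gas constant, kcal mol^-1 K^-1
T = 298.0     # assumed temperature, K

def max_affinity_kd(buried_area_A2, efficiency_cal_per_A2=20.0):
    """Best-case dissociation constant (M) for an interface of a given size.

    Uses the ~20 cal mol^-1 A^-2 efficiency ceiling from the abstract and
    the standard thermodynamic relation dG = -RT ln(Kd).
    """
    dg_kcal = efficiency_cal_per_A2 * buried_area_A2 / 1000.0  # favorable binding energy
    return math.exp(-dg_kcal / (R * T))

# The ~500 A^2 minimal interface corresponds to ~10 kcal/mol at maximal
# efficiency, i.e. a best-case Kd on the order of tens of nanomolar.
print(f"Kd(500 A^2) = {max_affinity_kd(500):.1e} M")
```

This makes the abstract's "minimal contact area" claim concrete: an interface much smaller than ~500 Å(2) cannot, even at maximal efficiency, generate enough energy to hold two proteins together at useful concentrations.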
Instruction: The Influence of Intense Combined Training on Upper Extremity Function in Children With Unilateral Cerebral Palsy: Does Initial Ability Matter? Abstracts: abstract_id: PUBMED:26734890 The Influence of Intense Combined Training on Upper Extremity Function in Children With Unilateral Cerebral Palsy: Does Initial Ability Matter? Aim: To examine whether level of manual ability modifies the response to an intensive program combining modified constraint and bimanual training on arm functioning in children with unilateral cerebral palsy (UCP). Methods: Thirty-four children aged 6-11 years, with Manual Ability Classification System (MACS) levels I-III (I:8, II:15, III:11), participated in a 2-week program, combining an hour of modified constraint with 5 hr of bimanual practice daily. The Assisting Hand Assessment (AHA) and the Jebsen-Taylor Test of Hand Function (JTTHF) were done pre-, post-, and 3-months post-intervention. Results: Mean AHA logit unit scores increased over time (F(2,50) = 5, p = 0.01). There was no significant difference in AHA logit units change score between MACS levels (F(4,56) = 1.4, p = 0.22). JTTHF scores did not change for either the affected or less-affected hand, although a significant interaction of time and MACS level was found in the less-affected side (F(4,58) = 6.5, p < 0.01). Children in MACS levels I and II improved by similar degrees, in comparison to children at MACS level III who demonstrated significantly greater change over time. Conclusion: While a similar trend of improvement was found in the bimanual abilities of children at all MACS levels, only children at MACS level III had improved performance in unilateral abilities in the less-affected side following intervention. abstract_id: PUBMED:36373302 Does bimanual task training benefit manual ability and hand function of children with bilateral spastic cerebral palsy? Purpose: Sixty percent of children with bilateral cerebral palsy have impaired hand function.
The study's purpose was to examine the benefits of bimanual task practice on the manual ability and hand function of children with bilateral spastic cerebral palsy. Methods: In this pre-post study design, 18 children with bilateral spastic cerebral palsy with an average age of 11.5 (+/-1.9) years, Manual Ability Classification System levels I-III and Bimanual Fine Motor Function levels I-III participated in bimanual task practice of upper extremities. The task practice included clay activities, paper manipulation and activities of daily needs. The children underwent 45-minute training sessions 3 times a week over 6 weeks. The outcome measures were ABILHAND-Kids, Quality of Upper Extremity Skills Test and grip strength. Results: Post-training, a mean change of 6.44 logits in ABILHAND-Kids, 11 points on the Quality of Upper Extremity Skills Test, and 3.3 and 3.1 kilograms grip strength in the dominant and nondominant hands respectively were observed with a statistical significance (p < 0.05). Conclusion: Bimanual task training might be beneficial in improving manual ability, hand function and grip strength in children with bilateral spastic cerebral palsy. abstract_id: PUBMED:25061598 The effect of the action observation physical training on the upper extremity function in children with cerebral palsy. The purpose of this study was to investigate the effect of action observation physical training (AOPT) on the functioning of the upper extremities in children with cerebral palsy (CP), using an evaluation framework based on that of the International Classification of Functioning, Disability and Health (ICF). The subjects were divided into an AOPT group and a physical training (PT) group. The AOPT group repeatedly practiced the actions they observed on video clips, in which a normal child performed an action with their upper extremities. The PT group performed the same actions as the AOPT group did after observing landscape photographs.
The subjects participated in twelve 30-min sessions, 3 days a week, for 4 weeks. Upper extremity function was evaluated using the following: the power of grasp and Modified Ashworth Scale for body functions and structures, a Box and Block test, an ABILHAND-Kids questionnaire, and the WeeFIM scale for activity and participation. Measurements were performed before and after the training, and 2 weeks after the end of training. The results of this study showed that, in comparison with the PT group, the functioning of the upper extremities in the AOPT group was significantly improved in body functions and activity and participation according to the ICF framework. This study demonstrates that AOPT has a positive influence on the functioning of the upper extremities in children with CP. It is suggested that this alternative approach for functioning of the upper extremities could be an effective method for rehabilitation in children with CP. abstract_id: PUBMED:26869858 Grading and Quantification of Upper Extremity Function in Children with Spasticity. The World Health Organization's International Classification of Functioning, Disability and Health (ICF) provides an ideal framework within which to conceptualize grading and quantification of upper extremity function for children with spasticity. In this article the authors provide an overview of assessments and classification tools used to (1) understand upper extremity function associated with spasticity and the factors that contribute to dysfunction, (2) guide the selection of appropriate interventions, (3) identify specific muscles to target using surgical interventions and botulinum toxin-A injections, and (4) measure the outcomes of upper extremity interventions.
Assessments of upper extremity function are briefly described and categorized as to whether they (1) measure children's best ability or actual performance in daily life, (2) are clinician administered or are a child/proxy report, (3) assist in planning intervention and/or measuring outcomes, and (4) evaluate unimanual or bimanual ability. In addition, measures of spasticity and hypertonicity, and classifications of static and dynamic upper extremity postures are summarized. abstract_id: PUBMED:32683215 Intensive upper extremity training improved whole body movement control for children with unilateral spastic cerebral palsy. Background: Children with unilateral spastic cerebral palsy (USCP) exhibit impaired bimanual coordination, gait control, and whole body movement control. Intensive upper extremity training has been found to be effective for improving upper extremity function. However, the effectiveness of the intensive upper extremity training on whole body movement control is not known. Research Question: The present study aimed to evaluate the effects of Constraint Induced Movement Therapy (CIMT) and Hand Arm Bimanual Intensive Therapy (HABIT) on bimanual coordination and gait control during a complex whole body task. Methods: Sixteen children with congenital hemiplegia (age 6-12 years; GMFCS: I-II, MACS: I-II) were randomly assigned to either CIMT or HABIT for 6 h per day training for 15 days. Children were asked to perform two whole body tasks (walking with and without a tray carrying) while 3-D kinematic analysis was performed before and after training. Results: After training, the HABIT group increased the symmetry in height of their hands during tray carrying (more leveled tray). Both CIMT and HABIT groups decreased the lateral motion of the tray. The CIMT group increased speed and stride length after training in both the walking and tray carrying tasks. Both groups also increased their minimum toe clearance (all p < 0.05).
Significance: Two types of intensive upper extremity training have provided significant improvements to whole body movement control for children with USCP. Adhering to the specificity of practice concept, HABIT improved bimanual coordination after training during the whole body tray carrying tasks. Given extensive interactions between the upper and lower extremities in real-world activities, future studies should focus on the effects of such combined training. abstract_id: PUBMED:31501133 Protocol for a multisite randomised trial of Hand-Arm Bimanual Intensive Training Including Lower Extremity training for children with bilateral cerebral palsy: HABIT-ILE Australia. Introduction: Children with bilateral cerebral palsy often experience difficulties with posture, gross motor function and manual ability, impacting independence in daily life activities, participation and quality of life (QOL). Hand-Arm Bimanual Intensive Training Including Lower Extremity (HABIT-ILE) is a novel intensive motor intervention integrating upper and lower extremity training. This study aimed to compare HABIT-ILE to usual care in a large randomised controlled trial (RCT) in terms of gross motor function, manual ability, goal attainment, walking endurance, mobility, self-care and QOL. A within-trial cost-utility analysis will be conducted to synthesise costs and benefits of HABIT-ILE compared with usual care. Methods And Analysis: 126 children with bilateral cerebral palsy aged 6-16 years will be recruited across three sites in Australia. Children will be stratified by site and Gross Motor Function Classification System and randomised using concealed allocation to either receiving HABIT-ILE immediately or being waitlisted for 26 weeks. HABIT-ILE will be delivered in groups of 8-12 children, for 6.5 hours per day for 10 days (total 65 hours, 2 weeks). Outcomes will be assessed at baseline, immediately following intervention, and then retention of effects will be tested at 26 weeks. 
Primary outcomes will be the Gross Motor Function Measure and ABILHAND-Kids. Secondary outcomes will be brain structural integrity, walking endurance, bimanual hand performance, self-care, mobility, performance and satisfaction with individualised goals, and QOL. Analyses will follow standard principles for RCTs using two-group comparisons on all participants on an intention-to-treat basis. Comparisons between groups for primary and secondary outcomes will be conducted using regression models. Ethics And Dissemination: Ethics approval has been granted by the Medical Research Ethics Committee of Children's Health Queensland Hospital and the Health Service Human Research Ethics Committee (HREC/17/QRCH/282) of The University of Queensland (2018000017/HREC/17/QRCH/2820), and The Cerebral Palsy Alliance Ethics Committee (2018_04_01/HREC/17/QRCH/282). Trial Registration Number: ACTRN12618000164291. abstract_id: PUBMED:25521486 The Influence of a Constraint and Bimanual Training Program Using a Variety of Modalities, on Upper Extremity Functions and Gait Parameters Among Children with Hemiparetic Cerebral Palsy: A Case Series. Aim: To assess the influence of an intensive combined constraint and bimanual upper extremity (UE) training program using a variety of modalities including the fitness room and pool, on UE functions as well as the effects of the program on gait parameters among children with hemiparetic cerebral palsy. Methods: Ten children ages 6-10 years participated in the program for 2 weeks, 5 days per week for 6 hr each day. Data from the Assisting Hand Assessment (AHA) for bimanual function, the Jebsen-Taylor Test of Hand Function (JTTHF) for unimanual function, the six-minute walk test (6MWT), and the temporal-spatial aspects of gait using the GAITRite walkway were collected prior to, immediately post and 3-months post-intervention.
Results: A significant improvement was noted in both unimanual as well as bimanual UE performance; a significant improvement in the 6MWT was noted, from a median of 442 meters [range: 294-558] at baseline to 466 [432-592] post intervention and 528 [425-609] after 3 months (p = .03). Conclusion: Combining intensive practice in a variety of modalities, although targeted to the UE, is associated with substantial improvement in both upper and lower extremity function. abstract_id: PUBMED:25527487 Hand and Arm Bimanual Intensive Therapy Including Lower Extremity (HABIT-ILE) in Children With Unilateral Spastic Cerebral Palsy: A Randomized Trial. Background: Intensive bimanual training results in more improvement in hand function in children with unilateral spastic cerebral palsy (USCP) than lower intensity conventional interventions. However, it is not known whether combined upper and lower extremity training in an intensive protocol is more efficacious for upper and lower functional abilities than conventional therapies provided in usual customary care. Objective: To determine the efficacy of Hand and Arm Bimanual Intensive Therapy Including Lower Extremity (HABIT-ILE) for children with USCP. Methods: Twenty-four children with USCP were randomized into 2 groups: an immediate HABIT-ILE group (IHG, initially receiving HABIT-ILE, 10 days = 90 hours), and a delayed HABIT-ILE group (DHG), which continued their conventional/ongoing treatment for an intended total duration of 90 hours. In phase 2, children in the DHG were crossed over to receive HABIT-ILE and children of the IHG were followed in their ongoing conventional therapy. Children were assessed using the Assisting Hand Assessment (AHA, primary outcome), the ABILHAND-Kids, and the Pediatric Evaluation of Disability Inventory. Dexterity (Box and Blocks Test [BBT]) and pinch strength were also measured. Locomotor abilities were assessed with the Six-Minute Walk Test (6MWT, primary outcome) and ABILOCO-kids.
Social participation was measured with the Assessment of Life-HABITs. Results: A 2 (groups) × 3 (test sessions) analysis of variance indicated significant improvements for primary outcomes (AHA, P < .001; 6MWT, P = .002) and all secondary assessments except BBT, step length and bodyweight distribution following HABIT-ILE, but not conventional therapy. Conclusion: The findings suggest that combined upper and lower extremity training in an intensive protocol may be efficacious for improving both upper and lower extremity function in children with USCP. abstract_id: PUBMED:35430957 Correlation between scapular alignment and upper extremity function in children with hemiparetic cerebral palsy. Objectives: To investigate the relationship between scapular alignment and upper extremity function. Methods: Eighty-five children (63 boys and 22 girls) with spastic hemiplegic cerebral palsy aged 3 to 6 years were included in the study. Scapular upward rotation was assessed using Postural Zone software, and upper extremity function was assessed using the Pediatric Arm Function Test. Results: There was a significant difference (p = .0001) in the degree of upward scapular rotation between less affected and affected sides (-41.78 ± 4.87 and -26.42 ± 6.34, respectively). There was a significant difference (p = .0001) between the function of the upper extremity of the affected side and the less affected sides (48.15 ± 14.37, 62.1 ± 6.62, respectively). Pearson Correlation Coefficient (r) was calculated, and there was a strong negative significant correlation between the degree of scapular upward rotation of the affected side, a unilateral score of the affected side, and the total score of the Pediatric Arm Function Test (r = -0.976, p = .0001 and r = -0.973, p = .0001, respectively). The correlation between symmetry index and total score of the Pediatric Arm Function Test was a strong positive significant correlation (r = 0.946, p = .0001).
Conclusion: The degree of upward scapular rotation was less on the affected side. Scapular alignment and symmetry may contribute to upper extremity function in children with hemiplegic cerebral palsy. Clinically, correction of scapular deviations may be considered in the rehabilitation program for children with hemiplegic cerebral palsy. This study suggests further experimental studies to find the cause and effect. abstract_id: PUBMED:23030054 Wii™-habilitation of upper extremity function in children with cerebral palsy. An explorative study. Objective: Commercially available virtual reality systems can possibly support rehabilitation objectives in training upper arm function in children with Cerebral Palsy (CP). The present study explored the effect of the Nintendo Wii™ training on upper extremity function in children with CP. Methods: During six weeks, all children received twice a week training with the Wii™, with their most affected arm. The Melbourne Assessment of Upper Limb Function and ABILHAND-Kids were assessed pre- and post-training. In addition, user satisfaction of both children and health professionals was assessed after training. Enjoyment in gaming was scored on a visual analogue scale after each session by the children. Results: Fifteen children with CP participated in the study. The quality of upper extremity movements did not change (-2.1, p > 0.05), while a significant increase of convenience in using hands/arms during performance of daily activities was found (0.6, p < 0.05). Conclusion: Daily activities seem to be performed more easily after Wii™ training for most of the included children with CP.
A study examining the response to an intensive program combining modified constraint and bimanual training on arm functioning in children with UCP found that while there was a general trend of improvement in bimanual abilities across all levels of the Manual Ability Classification System (MACS), only children at MACS level III showed significant improvement in unilateral abilities in the less-affected side following the intervention (PUBMED:26734890). This suggests that initial ability, as classified by MACS levels, does matter in terms of the degree of improvement in unilateral abilities, with children who have lower initial manual ability (MACS level III) demonstrating greater change over time compared to those with higher initial ability (MACS levels I and II). In contrast, another study focusing on children with bilateral spastic cerebral palsy undergoing bimanual task training showed improvements in manual ability, hand function, and grip strength post-training, indicating benefits of bimanual task practice regardless of the initial manual ability level (PUBMED:36373302). However, this study did not specifically address the influence of initial ability on the degree of improvement. Overall, the evidence suggests that initial manual ability may influence the outcomes of intense combined training on upper extremity function in children with UCP, particularly in terms of unilateral abilities. Children with lower initial ability may experience greater improvements, but further research is needed to fully understand the relationship between initial ability and training outcomes.
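Several of the trials summarized above report pre-post change scores tested for significance (e.g. the ~3.3 kg mean grip-strength gain in PUBMED:36373302). A minimal sketch of the underlying paired analysis; the numbers below are made up for illustration and are not any study's data:

```python
import math
from statistics import mean, stdev

# Hypothetical pre/post grip-strength values (kg), illustration only;
# NOT the data of any cited study.
pre  = [10.2, 8.5, 12.1, 9.4, 11.0, 7.8, 10.6, 9.9]
post = [13.0, 11.9, 15.2, 12.8, 14.1, 11.2, 13.5, 13.4]

# Pre-post designs analyze the within-child change, not the raw scores.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)

# Paired t statistic on the changes, df = n - 1
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
print(f"mean change = {mean(diffs):.2f} kg, t({n - 1}) = {t:.2f}")
```

Pairing each child with themselves removes between-child variability, which is why these small-sample intervention studies can detect modest mean changes.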
Instruction: Are conventional pressure-flow measurements dependent upon filled volume? Abstracts: abstract_id: PUBMED:16042728 Are conventional pressure-flow measurements dependent upon filled volume? Objective: To determine, in a prospective study, whether detrusor pressure (p(det.Qmax)) and maximum urinary flow rate (Q(max)) measurements obtained after filling to maximum cystometric capacity (MCC) differ from those obtained with filling restricted to average voided volume (V(void)), as standard protocols for pressure flow studies (PFS) mandate bladder filling until the subject has a strong desire to void, which aids standardization but further divorces the test from real-life experience. Patients And Methods: After calculating the appropriate sample size, 84 patients attending for PFS with an adequately completed 3-day frequency-volume chart were recruited. Each underwent two consecutive PFS with filling to MCC and average V(void) in a random order, and measurements of p(det.Qmax) and Q(max) were compared. For men, the agreement for a diagnosis of obstruction between the tests was also assessed. Results: Complete data were obtained from 76 (90%) of the patients, with a mean (range) age of 64 (20-94) years. The mean (sd) difference between MCC and average V(void) was 134 (113) mL (P < 0.01). There were no significant differences between estimates of Q(max), at -0.1 (3) mL/s (P = 0.75), and of p(det.Qmax), at -1 (13) cmH(2)O (P = 0.91), obtained within each patient. For men there was 91% agreement (32 of 35) in the classification of obstruction. Conclusions: Restriction of filling to the average V(void) during PFS allows a closer approximation to normal voiding and results in no clinically relevant change to the value of standard pressure-flow measurements or alters individual classification of obstruction. abstract_id: PUBMED:3944026 Pulmonary vascular resistance in the fluorocarbon-filled lung.
Pulmonary vascular resistance was investigated in the fluorocarbon-filled lung in an in situ isolated lung preparation. Lungs were perfused at constant flow (100 ml × min(-1) × kg(-1)) with whole blood from a donor cat. Left atrial pressure was held constant at zero pressure. Measurements of pulmonary arterial pressure enabled calculation of pulmonary vascular resistance. Regional changes in pulmonary blood flow were determined by the microsphere technique. During quasi-static deflation over a range of 0-30 mmHg, dependent alveolar pressure was consistently greater for a volume of fluorocarbon than for gas, with each pressure-volume curve for the fluorocarbon-filled lung shifted to the right of the curve for the gas-filled lung. In turn, pulmonary vascular resistance was found to increase linearly as a function of increasing alveolar pressure, independent of the medium in the lung. Thus, for a given volume, pulmonary vascular resistance was consistently greater in the fluorocarbon-filled lung compared with the gas-filled lung. This increase in pulmonary vascular resistance was accompanied by a redistribution of pulmonary blood flow in which blood flow to the dependent region was decreased in the fluorocarbon-filled lung compared with the gas-filled lung. Conversely, the less-dependent regions of the lung received a relatively greater percentage of blood flow when filled with fluorocarbon compared with gas. These findings suggest that pulmonary vascular resistance is increased during liquid ventilation, largely as the result of mechanical interaction at the alveolar-vascular interface. abstract_id: PUBMED:3952385 Volume-pressure cycles from air and liquid-filled intact rabbit lungs. Air-filled and liquid-filled VP loops cycled over 100-300% FRC were measured using rabbit lung preparations. Four different stages of lung confinement were used ranging from intact lungs with the rib cage immobilized (Stage I) to isolated lungs (Stage IV).
Stage I preparations showed larger air-filled and liquid-filled hysteresis areas of 1338 +/- 214 units(2) (air) and 849 +/- 73 units(2) (liquid), respectively, compared to 797 +/- 210 units(2) (air) and 128 +/- 33 units(2) (liquid) for excised lungs. The average peak pressure for Stage I at 300% FRC was 27.3 +/- 3.5 cm H2O compared to 21.8 +/- 4.8 cm H2O for Stage IV. The tissue contribution of the total air-filled hysteresis area was 63.5% for the Stage I lung preparation and decreased to 23.7% for the Stage IV lung preparations. These observations suggest that the tissue contribution to VP hysteresis may be greater in the intact lung preparation than previously assumed based upon excised lung studies and that geometric irreversibility may be a contributing factor. abstract_id: PUBMED:35774021 Breath-dependent pressure fluctuations in various constant- and variable-flow neonatal CPAP devices. Objective: In continuous positive airway pressure (CPAP) devices, pressure can be generated by two different mechanisms: either via an expiratory valve or by one or more jets. Valved CPAP devices are referred to as constant-flow devices, and jet devices are called variable-flow devices. Constant-flow CPAP devices are said to reduce the imposed work of breathing due to lower breath-dependent pressure fluctuations. The present study investigates the performance of various constant- and variable-flow CPAP devices in relation to breath-dependent pressure fluctuations. Design: Experimental study comparing the pressure fluctuations incurred by seven neonatal CPAP devices attached to an active neonatal lung model.
The generated tidal volume correlated significantly with the pressure fluctuations (r = -0.947; p = 0.001) and varied between 5.550 and 6.316 ml. CPAP devices with jets showed no advantage over CPAP devices with expiratory valves. End-expiratory pressure in the nose deviated from the set pressure between -1.305 and 0.644 mbar and varied depending on whether the pressure was measured in the device or in the tube extending to the nose. Conclusion: During standard spontaneous breathing, breath-dependent pressure fluctuations in constant- and variable-flow devices are comparable. Pressure measurements taken in the tubing system can lead to a considerable deviation of the applied pressure. abstract_id: PUBMED:30310471 Assessing the Haemodynamic Impact of Coronary Artery Stenoses: Intracoronary Flow Versus Pressure Measurements. Fractional flow reserve (FFR)-guided percutaneous coronary intervention results in better long-term clinical outcomes compared with coronary angiography alone in intermediate stenoses in stable coronary artery disease (CAD). Coronary physiology measurements have emerged for clinical decision making in interventional cardiology, but the focus lies mainly on epicardial vessels rather than the impact of these stenoses on the myocardial microcirculation. The latter can be quantified by measuring the coronary flow reserve (CFR), a combined pressure and flow index with a strong ability to predict clinical outcomes in CAD. However, combined pressure-flow measurements show 30-40% discordance despite similar diagnostic accuracy between FFR and CFR, which is explained by the effect of microvascular resistance on both indices. Both epicardial and microcirculatory involvement have been acknowledged in ischaemic heart disease, but clinical implementation remains difficult as it requires individual proficiency.
The recently introduced pressure-only index, the instantaneous wave-free ratio, a resting adenosine-free stenosis assessment, has led to a revival of interest in coronary physiology measurements. This review focuses on elaborating the coronary physiological parameters and the potential of combined pressure-flow measurements in daily clinical practice. abstract_id: PUBMED:16340370 Expiratory flow limitation during gravitational drainage of perfluorocarbons from liquid-filled lungs. Flow limitation during pressure-driven expiration in liquid-filled lungs was examined in intact, euthanized New Zealand white rabbits. The aim of this study was to further characterize expiratory flow limitation during gravitational drainage of perfluorocarbon liquids from the lungs, and to study the effect of perfluorocarbon type and negative mouth pressure on this phenomenon. Four different perfluorocarbons (PP4, perfluorodecalin, perfluoro-octyl-bromide, and FC-77) were used to examine the effects of density and kinematic viscosity on volume recovered and maximum expiratory flow. It was demonstrated that flow limitation occurs during gravitational drainage when the airway pressure is <= -15 cm H2O, and that this critical value of pressure did not depend on mouth pressure or perfluorocarbon type. The perfluorocarbon properties affect the volume recovered, maximum expiratory flow, and the time to drain, with the most viscous perfluorocarbon (perfluorodecalin) taking the longest time to drain and resulting in the lowest maximum expiratory flow. Perfluoro-octyl-bromide resulted in the highest recovered volume. The findings of this study are relevant to the selection of perfluorocarbons to reduce the occurrence of flow limitation and provide adequate minute ventilation during total liquid ventilation. abstract_id: PUBMED:4003874 Analysis of the pressure-volume relationship of excised lungs. The pressure-volume relationship of excised lungs is explicitly defined in the form of a mathematical model.
In the model, lung volume (V) is given by the function V = Vmax F(Ptp, T*) H(Ptp). Vmax is maximum lung volume. F, which describes the recruitment of air-filled units, is a function of transpulmonary pressure (Ptp) and surface tension (T*), whereas H, which is also a function of transpulmonary pressure, describes the expansion of recruited units against tissue forces. F is shown to be the integral of the normalized distribution function of the lung units and remains constant so long as the number of air-filled units does not change. H, on the other hand, is shown to be the product of the elastic properties of the tissues and is responsible for the characteristic non-linear sigmoid shape of lung deflation curves. Results obtained with the model are consistent with the hypothesis that tissue elasticity, tissue hysteresis, area-dependent surface tension, and recruitment share responsibility for the characteristic hysteresis of excised lungs. abstract_id: PUBMED:9412569 Influence of driving pressure on raised-volume forced expiration in infants. The raised-volume forced-expiration technique measures infant lung function over an extended volume range. To improve comparisons between individuals and populations, we investigated the influence of jacket pressure on outcome variables in 21 infants. To quantify pressure transmitted from the jacket to the pleural space at a given lung volume, the jacket was inflated against an occluded airway, and the increase in pressure at the mouth was measured. Flow-volume curves were recorded at transmitted pressure (Ptrans) values ranging from 0 to 41.9 cm H2O. The effect of Ptrans on the FEV measures of FEV0.5, FEV0.75, and FVC, and on the forced expiratory flow measures of FEF25%, FEF50% and FEF75% was assessed. At Ptrans values between 0 and 20 cm H2O, a significant positive relationship existed between transmitted pressure (Ptrans) and all outcome variables except FVC.
At higher Ptrans values, all outcome variables demonstrated pressure independence, with the exception of FEF25% (which remained positive) and FVC (which was negative in a subgroup of wheezy infants). FEF75% values tended to decrease at Ptrans values > 25 cm H2O. At Ptrans values between 20 and 25 cm H2O, most outcome variables are pressure independent. This range is therefore the most suitable for use with the raised-volume forced expiration technique. abstract_id: PUBMED:9382938 Pulmonary blood volume and its effects on pressure/flow relations and flow resistance in isolated lungs of rabbits. Quantitative information about the effects of pulmonary blood volume (Qp) on pulmonary haemodynamics is lacking since Qp changes inevitably with flow. To separate flow-dependent from volume-dependent changes in intravascular pressures we imposed changes in Qp (measured continuously) by altering outflow pressure in seven isolated, blood-perfused rabbit lungs and studied the effects of Qp on the relations between arteriovenous pressure gradient (DeltaP) and blood flow (Q.) under two conditions: flow-dependent volume changes were either permitted or compensated. In the latter circumstances, DeltaP changed more for a given change in Q.. The DeltaP/Q. relations were shifted to smaller DeltaP when Qp was increased. Hence, the calculated flow resistance (R = DeltaP/Q.) decreased with increasing Qp at a given Q.. Assuming constant viscosity, changes in R can be predicted from changes in vessel geometry and thus Qp. We found that R increased less than expected (by a factor of 3-7.5 instead of 9) when Qp was reduced to one-third. This discrepancy may be explained by a change in blood distribution within the lung despite constant Qp and by a change in apparent blood viscosity with Q.. Regardless of these speculations, we have shown that Qp determines DeltaP at each flow and thus flow resistance.
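The "factor of 9" expectation in the abstract above follows from simple geometric scaling: under Hagen-Poiseuille assumptions, resistance varies as 1/r^4 while vascular volume varies as r^2 at fixed vessel length, so reducing blood volume to one-third predicts a ninefold rise in resistance. A minimal sketch of that arithmetic (my own illustration under these stated assumptions, not code or a model from the study):

```python
# Illustrative Poiseuille-type scaling, not the authors' model:
# resistance R ~ 1/r^4 (Hagen-Poiseuille), vessel volume ~ r^2 at fixed length,
# so shrinking blood volume Qp to a fraction f multiplies R by f**-2.

def resistance_scale(volume_fraction: float) -> float:
    """Predicted multiplier on flow resistance when vascular volume falls to
    `volume_fraction` of its initial value (fixed vessel length, uniform
    radius change, constant viscosity)."""
    radius_scale = volume_fraction ** 0.5   # volume ~ r^2  =>  r ~ volume^0.5
    return radius_scale ** -4               # R ~ 1/r^4

print(resistance_scale(1 / 3))  # ~9: the ninefold rise the authors expected
```

The observed increase of only 3-7.5 falls short of this geometric prediction, which is what the authors attribute to blood redistribution within the lung and flow-dependent apparent viscosity.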
abstract_id: PUBMED:6895116 Functional anatomy and volume-pressure characteristics of immature lungs. The mechanical behavior of immature rabbit fetal lungs in situ was assessed by air and saline volume-pressure diagrams. All lungs were in their natural fetal state, i.e., filled with fetal pulmonary fluid, prior to inflation. Anatomic correlates were determined by continuous stereomicroscopic monitoring of the lungs. We found the following to be characteristic of immature lungs: (1) Tissue retractive forces are similar to adults. (2) Fetal lungs are not 'plastic' above functional residual capacity. (3) Initial aeration is by 'axial filling' in which airways are distended several times their resting size. (4) Invariably, peripheral rather than central saccules are the first to be aerated and saccules are recruited by both pressure- and time-dependent processes. (5) Pressure-dependence is related to surface forces and terminal orifice size, while time-dependent processes include orifice enlargement, liquid flow through terminal conduits, and the formation of very short-lived, labile bubbles. (6) 'Opening pressure' inflection in the VP diagram is not coincidental with, but follows the onset of saccular aeration. (7) Negative compliance at the onset of deflation is due to saccular enlargement and recruitment. (8) Hysteresis is due to tissue conformational characteristics at high pressures and air entrapment at low pressures. (9) Surface tension cannot be measured reliably from the saline and air VP diagrams. Answer: Conventional pressure-flow measurements are not significantly dependent upon filled volume. 
A study conducted to determine whether detrusor pressure (p(det.Qmax)) and maximum urinary flow rate (Q(max)) measurements differ when the bladder is filled to maximum cystometric capacity (MCC) versus average voided volume (V(void)) found no clinically relevant changes in the value of standard pressure-flow measurements or individual classification of obstruction when filling was restricted to the average V(void). The study concluded that restriction of filling to the average V(void) during pressure flow studies (PFS) allows a closer approximation to normal voiding without altering the measurements significantly (PUBMED:16042728).
Instruction: Is the pregnancy hormone relaxin an important player in human heart failure? Abstracts: abstract_id: PUBMED:15701466 Is the pregnancy hormone relaxin an important player in human heart failure? Background: The pregnancy hormone relaxin has been raised as a new compensatory mediator of cardiac origin in heart failure (HF). We set out to assess the role of relaxin in pressure overload-induced human HF. Methods: We studied 129 adult patients undergoing cardiac catheterization for isolated aortic valve stenosis (AS). Blood was sampled from the aortic root and, in a subset of 49 patients, from the coronary sinus for the determination of plasma relaxin by enzyme immunoassay. HF was diagnosed when the patient had dyspnea or fatigue on ordinary effort in association with pulmonary wedge pressure >14 mm Hg at catheterization. Results: Forty-one patients had HF, which was systolic (ejection fraction <50%) in 16 patients and diastolic in 25 patients. The median plasma relaxin was 32 pg/ml (<12-297 pg/ml) in 88 AS patients without HF, 28 pg/ml (<12-825 pg/ml) in the 41 AS patients with HF, and 42 pg/ml (range, <12-100 pg/ml) in 11 control patients free of heart disease (p=0.82). Plasma relaxin did not correlate with any measurement of cardiac structure or function. The concentration gradients of relaxin from the aortic root to the coronary sinus indicated relaxin extraction by the heart in the control patients (median change, -5 pg/ml, p=0.038) vs. relaxin production in patients with systolic HF (median change, +6 pg/ml, p=0.028) (p=0.002 between groups). Conclusions: Although the heart may release relaxin into the circulation in certain forms of HF, this does not translate into elevated systemic concentrations. Relaxin is not a major player in human HF. abstract_id: PUBMED:11641245 The pregnancy hormone relaxin is a player in human heart failure.
Human congestive heart failure is characterized by complex neurohumoral activation associated with the up-regulation of vasoconstricting and salt-retaining mediators and the compensatory rise of counter-regulatory hormones. In the present study, we provide the first evidence that relaxin (RLX), known as a pregnancy hormone, represents a potential compensatory mediator in human heart failure: plasma concentrations of RLX and myocardial expression of the two RLX genes (H1 and H2) correlate with the severity of disease and RLX responds to therapy. The failing human heart is a relevant source of circulating RLX peptides, and myocytes as well as interstitial cells produce RLX. Elevation of ventricular filling pressure up-regulates RLX expression and the hormone acts as a potent inhibitor of endothelin 1, the most powerful vasoconstrictor in heart failure. Furthermore, RLX modulates effects of angiotensin II, another crucial mediator. Our data identify RLX as a new player in human heart failure with potential diagnostic and therapeutic relevance. abstract_id: PUBMED:29412692 Peptide hormone relaxin: from bench to bedside. The peptide hormone relaxin has numerous roles both within and independent of pregnancy and is often thought of as a "pleiotropic hormone." Relaxin targets several tissues throughout the body, and has many functions associated with extracellular matrix remodeling and the vasculature. This review considers the potential therapeutic applications of relaxin in cervical ripening, in vitro fertilization, preeclampsia, acute heart failure, ischemia-reperfusion, and cirrhosis. We first outline the animal models used in preclinical studies to progress relaxin into clinical trials and then discuss the findings from these studies. In many cases, the positive outcomes from preclinical animal studies were not replicated in human clinical trials. 
Therefore, the focus of this review is to evaluate the various animal models used to develop relaxin as a potential therapeutic and consider the limitations that must be addressed in future studies. These include the use of human relaxin in animals, duration of relaxin treatment, and the appropriateness of the clinical conditions being considered for relaxin therapy. abstract_id: PUBMED:24934696 Recombinant human relaxin-2: (how) can a pregnancy hormone save lives in acute heart failure? Acute heart failure (AHF) syndrome, characterized by pulmonary and/or venous congestion owing to increased cardiac filling pressures with or without diminished cardiac output, is still associated with high post-discharge mortality and hospitalization rates. Many novel and promising therapeutic approaches, among them endothelin-1, vasopressin and adenosine antagonists, calcium sensitization, and recombinant B-type natriuretic hormone, have failed in large studies. Likewise, the classic drugs, vasodilators, diuretics, and inotropes, have never been shown to lower mortality. The phase III trial RELAX-AHF tested recombinant human relaxin-2 (rhRlx) and found it to improve clinical symptoms moderately, to be neutral regarding the combination of death and hospitalization at day 60, to be safe, and to lower mortality at day 180. This review focuses on basic research and pre-clinical findings that may account for the benefit of rhRlx in AHF. The drug combines short-term hemodynamic advantages, such as moderate blood pressure decline and functional endothelin-1 antagonism, with a wealth of protective effects harboring long-term benefits, such as anti-inflammatory, anti-fibrotic, and anti-oxidative actions. These pleiotropic effects are exerted through a complex and intricate signaling cascade involving the relaxin-family peptide receptor-1, the glucocorticoid receptor, nitric oxide, and a cell type-dependent variety of kinases and transcription factors.
abstract_id: PUBMED:29526354 Heart Disease and Relaxin: New Actions for an Old Hormone. The hormone relaxin has long been recognized for its involvement in maternal adaptation during pregnancy. However, discoveries during the past two decades on the mechanism of action of relaxin, its family of receptors, and newly described roles in attenuating ischemia/reperfusion (I/R) injury, inflammation, and arrhythmias have prompted vast interest in exploring its therapeutic potential in cardiovascular disease. These observations inspired recently concluded clinical trials in patients with acute heart failure. This review discusses our current understanding of the protective signaling pathways elicited by relaxin in the heart, and highlights important new breakthroughs about relaxin signaling that may pave the way to more carefully designed future trials. abstract_id: PUBMED:23401142 Relaxin: new pathophysiological aspects and pharmacological perspectives for an old protein. Human relaxin-2 (hereafter simply defined as "relaxin") is a 6-kDa peptidic hormone best known for the physiological role played during pregnancy in the growth and differentiation of the reproductive tract and in the renal and systemic hemodynamic changes. This factor can also be involved in the pathophysiology of arterial hypertension and heart failure, in the molecular pathways of fibrosis and cancer, and in angiogenesis and bone remodeling. It belongs to the relaxin peptide family, whose members comprehensively exert numerous effects through interaction with different types of receptors, classified as relaxin family peptide (RXFP) receptors (RXFP1, RXFP2, RXFP3, RXFP4). Research looks toward the in-depth examination and complete understanding of relaxin in its various pleiotropic actions. 
The intent is to evaluate the likelihood of employing this substance for therapeutic purposes, for instance in diseases where a deficit could be part of the underlying pathophysiological mechanisms, also avoiding any adverse effect. Relaxin is already being considered as a promising drug, especially in acute heart failure. A careful study of the different RXFPs and their receptors and the comprehension of all biological activities of these hormones will probably provide new drugs with a potential wide range of therapeutic applications in the near future. abstract_id: PUBMED:25787721 Renal impairment and worsening of renal function in acute heart failure: can new therapies help? The potential role of serelaxin. Renal dysfunction is a frequent finding in patients with acute heart failure (AHF) and an important prognostic factor for adverse outcomes. Worsening of renal function occurs in 30-50% of patients hospitalised for AHF, and is associated with increased mortality, prolonged hospital stay and increased risk of readmission. Likely mechanisms involved in the decrease in renal function include impaired haemodynamics and activation of neurohormonal factors, such as the renin-angiotensin-aldosterone system, the sympathetic nervous system and the arginine-vasopressin system. Additionally, many drugs currently used to treat AHF have a detrimental effect on renal function. Therefore, pharmacotherapy for AHF should carefully take into account any potential complications related to renal function. Serelaxin, currently in clinical development for the treatment of AHF is a recombinant form of human relaxin-2, identical in structure to the naturally occurring human relaxin-2 peptide hormone that mediates cardiac and renal adaptations during pregnancy. Data from both pre-clinical and clinical studies indicate a potentially beneficial effect of serelaxin on kidney function. 
In this review, we discuss the mechanisms and impact of impairment of renal function in AHF, and the potential benefits of new therapies, such as serelaxin, in this context. abstract_id: PUBMED:26589290 Relaxin' the Heart: A Novel Therapeutic Modality. The peptide hormone relaxin has traditionally been linked to the maternal adaptation of the cardiovascular system during the first trimester of pregnancy. By promoting nitric oxide formation through different molecular signaling events, relaxin has been proposed as a pleiotropic and cardioprotective hormone in the setting of many cardiovascular diseases. In fact, preclinical studies were able to demonstrate that relaxin promotes vasodilatation and angiogenesis, ameliorates ischemia/reperfusion injury, and regulates extracellular matrix turnover and remodeling. In the RELAX-AHF phase 3 clinical trial, serelaxin (recombinant human relaxin) was shown to be safe, and it exerted survival benefits in patients with acute heart failure. RELAX-AHF-2 is currently ongoing, and it aims to address a larger population and evaluate harder clinical outcomes. Besides heart failure, acute myocardial infarction, peripheral arterial disease, and stable coronary disease could be target diseases for treatment with serelaxin in future clinical trials. abstract_id: PUBMED:31125285 Human recombinant relaxin-2 does not attenuate hypertension or renal injury but exacerbates vascular dysfunction in a female mouse model of SLE. Systemic lupus erythematosus (SLE) is an autoimmune disease that disproportionately affects women of reproductive age and increases their risk for developing hypertension, vascular, and renal disease. Relaxin has potential beneficial therapeutic effects in cardiovascular disease through direct actions on the vasculature. The potential therapeutic benefit of relaxin on SLE-associated cardiovascular and renal risk factors like hypertension has not previously been tested. 
We hypothesized that relaxin would attenuate hypertension, renal injury, and vascular dysfunction in an established female mouse model of SLE (NZBWF1 mice). Serelaxin (human recombinant relaxin-2, 0.5 mg·kg-1·day-1) or vehicle was administered via osmotic mini-pump for 4 wk in female control (NZW) or SLE mice between 28 and 31 wk of age. Serelaxin treatment increased uterine weights in both groups, suggesting that the Serelaxin was bioactive. Mean arterial pressure, measured by carotid artery catheter, was significantly increased in vehicle-treated SLE mice compared with vehicle-treated controls, but was not changed by Serelaxin treatment. Albumin excretion rate, measured by ELISA, was similar between vehicle- and Serelaxin-treated SLE mice and between vehicle- and Serelaxin-treated control mice. Wire myography was performed using isolated carotid arteries to assess endothelium-independent and -dependent vasodilation, and data confirm that SLE mice have impaired endothelium-independent and -dependent relaxation compared with control mice. Serelaxin treatment did not affect endothelium-independent vasodilation, but exacerbated the endothelium-dependent dysfunction. These data suggest that, contrary to our hypothesis, Serelaxin infusion does not attenuate hypertension, renal injury, or vascular dysfunction in SLE, but worsens underlying vascular endothelial dysfunction in this experimental model of SLE. These data do not support the use of human recombinant relaxin-2 as an antihypertensive in the SLE patient population. NEW & NOTEWORTHY Relaxin is a peptide hormone commonly known for its role in pregnancy and for its use in recent clinical trials for the treatment of heart failure. Evidence suggests that relaxin has immunomodulatory effects; however, the potential therapeutic impact of relaxin in chronic immune-mediated disease is unclear.
This study tests whether recombinant human relaxin (Serelaxin) attenuates the progression of autoimmunity, and the associated cardiovascular consequences, in an experimental model of systemic lupus erythematosus. abstract_id: PUBMED:26294074 Serelaxin a novel treatment for acute heart failure. Acute heart failure (AHF) represents a major healthcare burden with a high risk of in-hospital and post-discharge mortality, which remained almost unchanged in the last few decades, underscoring the need of new treatments. Relaxin is a naturally occurring human peptide initially identified as a reproductive hormone and has been shown to play a key role in the maternal hemodynamic and renal adjustments that accommodate pregnancy. Recently, the new molecule serelaxin, a recombinant form of the naturally occurring hormone relaxin has been studied in patients hospitalized for AHF. In addition to vasodilation, serelaxin has anti-oxidative, anti-inflammatory and connective tissue regulating properties. In preclinical studies, it reduced both systemic and renal vascular resistance and, in the clinical trials Pre-RELAX-AHF and RELAX-AHF, it improved dyspnea and signs of congestion. In addition, serelaxin was associated with a reduction of 180-day mortality. The aim of this review is to summarize the pharmacological properties of serelaxin and the results of the preclinical and clinical studies. Answer: The role of the pregnancy hormone relaxin in human heart failure is a subject of ongoing research and debate. Some studies suggest that relaxin may be an important compensatory mediator in human heart failure. For instance, one study found that plasma concentrations of relaxin and myocardial expression of relaxin genes correlate with the severity of heart failure, indicating that relaxin could be a new player in human heart failure with potential diagnostic and therapeutic relevance (PUBMED:11641245). 
Additionally, relaxin has been shown to have anti-inflammatory, anti-fibrotic, and anti-oxidative actions, which could be beneficial in the context of heart failure (PUBMED:24934696). Clinical trials have also explored the use of serelaxin, a recombinant form of human relaxin-2, in acute heart failure, with some studies indicating that it may improve symptoms and exert survival benefits (PUBMED:26589290; PUBMED:26294074). However, other research has presented more nuanced or contradictory findings. One study involving patients with aortic valve stenosis and heart failure found that while the heart may release relaxin into the circulation in certain forms of heart failure, this does not translate into elevated systemic concentrations, leading to the conclusion that relaxin is not a major player in human heart failure (PUBMED:15701466). Furthermore, in a study on a female mouse model of systemic lupus erythematosus, treatment with serelaxin did not attenuate hypertension or renal injury and even exacerbated vascular dysfunction, suggesting that relaxin may not be beneficial in all contexts (PUBMED:31125285). In summary, while there is evidence supporting the potential therapeutic applications of relaxin in heart failure, the hormone's role is complex and may vary depending on the specific context and patient population. Further research, including carefully designed clinical trials, is needed to fully understand the impact of relaxin on heart failure and its potential as a treatment option (PUBMED:29526354; PUBMED:23401142; PUBMED:25787721).
Instruction: Does the store-turnover method still provide a useful guide to food intakes in Aboriginal communities? Abstracts: abstract_id: PUBMED:17073226 Does the store-turnover method still provide a useful guide to food intakes in Aboriginal communities? Objective: To consider the application of the store-turnover method as a guide to assess food intake in remote Aboriginal communities. Method: Food sources in a remote Aboriginal island community were documented. The contribution of quantifiable food sources to total community-level fresh fruit and vegetable availability was determined. Results: The store remains the single largest supplier of fruit and vegetables overall (54%), however its contribution varies depending on the subpopulation of interest. A store-turnover alone may significantly underestimate community-level dietary intake, depending on the contribution of other food sources. Conclusions: Changes in the food supply in remote communities, coupled with methodological complexities inherent in the store-turnover method, challenge its application in a contemporary context. Implications: A simplified version of the store-turnover method is needed that could be widely applied by community people and health practitioners seeking to initiate and monitor interventions to improve diet quality. abstract_id: PUBMED:7948337 Apparent dietary intake in remote aboriginal communities. Apparent per capita food and nutrient intake in six remote Australian Aboriginal communities using the 'store-turnover' method is described. The method is based on the analysis of community-store food invoices. The face validity of the method supports the notion that, under the unique circumstances of remote Aboriginal communities, the turnover of foodstuffs from the community store is a useful measure of apparent dietary intake for the community as a whole. 
In all Aboriginal communities studied, the apparent intake of energy, sugars and fat was excessive, while the apparent intake of dietary fibre and several nutrients, including folic acid, was low. White sugar, flour, bread and meat provided in excess of 50 per cent of the apparent total energy intake. Of the apparent high fat intake, fatty meats contributed nearly 40 per cent in northern coastal communities and over 60 per cent in central desert communities. Sixty per cent of the apparent high intake of sugars was derived from sugar per se in both regions. Compared with national Australian apparent consumption data, intakes of sugar, white flour and sweetened carbonated beverages were much higher in Aboriginal communities, and intakes of wholemeal bread, fruit and vegetables were much lower. Results of the store-turnover method have important implications for community-based nutrition intervention programs. abstract_id: PUBMED:8799099 The effect of retail store managers on aboriginal diet in remote communities. Key nutrient densities of the diet of two remote northern coastal Aboriginal communities were measured using the store-turnover method during the periods that three store managers were responsible for each store respectively. Individual store managers were a greater determinant of nutrient density than the community itself. Furthermore, nutrient densities tended to be highest in both communities when their stores were administered by one particular store manager. The results support the notion that store managers wield considerable power over the food supply of remote Aboriginal communities, and raise questions concerning the ability of Aboriginal community members to influence their own food supplies in retail stores. However, the study also confirms that store managers can be important allies in efforts to improve Aboriginal dietary intake. 
abstract_id: PUBMED:37201415 Food security in Aboriginal and Torres Strait Islander communities in remote Australia during the COVID-19 pandemic: An analysis of print news media and press releases. Objective: This article aims to examine the framing of the issue of food security in very remote Aboriginal and Torres Strait Islander communities in print media and press releases during the beginning of the COVID-19 pandemic in 2020. Methods: Newspaper articles were identified following a systematic search of the Factiva database, and press releases were identified from manual search of key stakeholder websites from January to June 2020 and analysed using a combined, adapted framework of Bacchi's What's the Problem Represented to be? framework and the Narrative Policy Framework. Results: A food delivery "problem" dominated representations in press releases, and food supply at store level had prominence in print media. Both presented the cause of food insecurity as a singular, identifiable point in time, framed the issue as one of helplessness and lack of control, and proposed policy action. Conclusions: The issue of food security was represented in the media as a simple issue requiring an immediate fix, as opposed to a complex issue requiring a systems-level and sustained policy response. Implications For Public Health: This study will help to guide future media dialogue to impact on both immediate and longer-term solutions to food insecurity in very remote Aboriginal and Torres Strait Islander communities in Australia. abstract_id: PUBMED:26260761 Nutrition in remote Aboriginal communities: lessons from Mai Wiru and the Anangu Pitjantjatjara Yankunytjatjara Lands. Objective: To examine the impact of efforts to improve nutrition on the Anangu Pitjantjatjara Yankunytjatjara (APY) Lands from 1986, especially in Mai Wiru (good food) stores. Methods: Multiple methods were employed. Literature was searched systematically.
In 2012, the store-turnover method quantified dietary intake of the five APY communities that have a Mai Wiru store. The price of a standard market basket of basic foods, implementation of store nutrition policy requirements and healthy food checklists were also assessed in all seven APY community stores at intervals from 2012. Results were compared with available data from 1986. Results: Despite concerted efforts and marked achievements, including decreased intake of sugar, increased availability and affordability of healthy foods (particularly fruit and vegetables) and consequent improvement in some nutrient intakes, the overall effect has been a decrease in total diet quality since 1986. This is characterised by increased supply and intake of discretionary foods high in saturated fat, added sugar and salt, particularly sugar sweetened beverages, convenience meals and take-away foods. Conclusions: The documented improvements confirm that residing in these communities can help Aboriginal residents exert control over key aspects of their food supply. However, the overall findings reflect broader changes to the general Australian food supply, and reinforce the notion that, in the absence of supportive regulation and market intervention, adequate and sustained resources are required to improve nutrition and prevent diet-related chronic disease on the APY Lands. Implications: This study also provides insights into food supply/security issues affecting other remote communities and wider Australia. abstract_id: PUBMED:28370899 Traditional food availability and consumption in remote Aboriginal communities in the Northern Territory, Australia. Objective: To explore availability, variety and frequency consumption of traditional foods and their role in alleviating food insecurity in remote Aboriginal Australia. Methods: Availability was assessed through repeated semi-structured interviews and consumption via a survey. 
Quantitative data were described and qualitative data classified. Results: Aboriginal and non-Indigenous key informants (n=30 in 2013; n=19 in 2014) from 20 Northern Territory (NT) communities participated in interviews. Aboriginal primary household shoppers (n=73 in 2014) in five of these communities participated in a survey. Traditional foods were reported to be available year-round in all 20 communities. Most participants (89%) reported consuming a variety of traditional foods at least fortnightly and 71% at least weekly. Seventy-six per cent reported being food insecure, with 40% obtaining traditional food during these times. Conclusions: Traditional food is consumed frequently by Aboriginal people living in remote NT. Implications for public health: Quantifying the dietary contribution of traditional food would complement estimated population dietary intake. It would contribute evidence of nutrition transition and differences in intakes across age groups and inform dietary, environmental and social interventions and policy. Designing and conducting assessment of traditional food intake in conjunction with Aboriginal leaders warrants consideration. abstract_id: PUBMED:27624683 Store turnover as a predictor of food and beverage provider turnover and associated dietary intake estimates in very remote Indigenous communities. Objective: Determine how very-remote Indigenous community (RIC) food and beverage (F&B) turnover quantities and associated dietary intake estimates derived from only stores compare with values derived from all community F&B providers. Methods: F&B turnover quantity and associated dietary intake estimates (energy, micro/macronutrients and major contributing food types) were derived from 12 months of transaction data of all F&B providers in three RICs (NT, Australia).
F&B turnover quantities and dietary intake estimates from only stores (plus only the primary store in multiple-store communities) were expressed as a proportion of complete F&B provider turnover values. Food types and macronutrient distribution (%E) estimates were quantitatively compared. Results: Combined stores F&B turnover accounted for the majority of F&B quantity (98.1%) and absolute dietary intake estimates (energy [97.8%], macronutrients [≥96.7%] and micronutrients [≥83.8%]). Macronutrient distribution estimates from combined stores and only the primary store closely aligned with complete provider estimates (≤0.9% absolute). Food types were similar using combined stores, primary store or complete provider turnover. Conclusions And Implications: Evaluating combined stores F&B turnover represents an efficient method to estimate total F&B turnover quantity and associated dietary intake in RICs. In multiple-store communities, evaluating only primary store F&B turnover provides an efficient estimate of macronutrient distribution and major food types. abstract_id: PUBMED:33153133 Maitjara Wangkanyi: Insights from an Ethnographic Study of Food Practices of Households in Remote Australian Aboriginal Communities. Many historical, environmental, socioeconomic, political, commercial, and geographic factors underscore the food insecurity and poor diet-related health experienced by Aboriginal people in Australia. Yet, there has been little exploration of Aboriginal food practices or perspectives on food choice recently. This study, with 13 households in remote communities on the Anangu Pitjantjatjara Yankunytjatjara (APY) Lands, fills this gap using ethnographic and Indigenist methods. Results highlight Anangu resourcefulness, securing food despite poverty and adversity, and provide unique insights into factors influencing the three major types and range of dietary patterns identified.
These factors include household economic cycles and budgeting challenges; overcrowding and family structures, mobility and 'organization'; available food storage, preparation and cooking infrastructure; and familiarity and convenience. Structural and systemic reform, respecting Aboriginal leadership, is required to improve food security. abstract_id: PUBMED:38347471 Development of a survey tool to assess the environmental determinants of health-enabling food retail practice in Aboriginal and Torres Strait Islander communities of remote Australia. Background: Environmental factors can impact the ability of food retail businesses to implement best practice health-enabling food retail. Methods: We co-designed a short-item survey on factors influencing food retail health-enabling practice in a remote Australian setting. Publicly available submissions to an Australian Parliamentary Inquiry into food pricing and food security in remote Indigenous communities were coded using an existing remote community food systems assessment tool and thematically analysed. Themes informed survey questions that were then prioritised, refined and pre-tested with expert stakeholder input. Results: One-hundred and eleven submissions were coded, and 100 themes identified. Supply chain related data produced the most themes (n = 25). The resulting 26-item survey comprised questions to assess the perceived impact of environmental factors on a store's health-enabling practice (n = 20) and frequency of occurrence (n = 6). Conclusions: The application of this evidence-informed, co-designed survey will provide a first-time cross-sectional analysis and the potential for ongoing longitudinal data and advocacy on how environmental factors affect the operations of remote stores. abstract_id: PUBMED:28898509 What do we know about the diets of Aboriginal and Torres Strait Islander peoples in Australia? A systematic literature review. 
Objective: To provide an overview of published research on the dietary intake of Aboriginal and Torres Strait Islander peoples. Methods: Peer-reviewed literature from 1990 to October 2016 was searched to identify studies that measured the dietary intake of Australian Aboriginal and Torres Strait Islander populations. Study quality was assessed using a purposely devised quality appraisal tool. Meta-analysis was not possible due to the heterogeneity in dietary intake assessment methods. A narrative synthesis of study findings, where key themes were compared and contrasted was completed. Results: Twenty-five articles from twenty studies with outcome measures related to dietary intake were included. Dietary intake was assessed by electronic store sales, store turnover method, 24-hour dietary recall, food frequency questionnaire and short questions. Consistent findings were low reported intakes of fruit and vegetables and high intakes of total sugar and energy-dense, nutrient-poor food and beverages. Conclusions: While differences between studies and study quality limit the generalisability of the findings, most studies suggest that the diets of Aboriginal and Torres Strait Islander peoples are inadequate. Implications for public health: A more concerted approach to understanding dietary patterns of Aboriginal and Torres Strait Islander peoples is required to inform policy and practice to improve diet and nutrition. Answer: The store turnover method has historically been used to assess food intake in remote Aboriginal communities, but its current utility is subject to debate. The method is based on analyzing community-store food invoices and has been considered a useful measure of apparent dietary intake for the community as a whole under the unique circumstances of remote Aboriginal communities (PUBMED:7948337). 
However, recent studies suggest that the store turnover method may not be as reliable as it once was due to changes in the food supply and the complexity of food sources in remote communities. For instance, while the store remains the single largest supplier of fruit and vegetables, its contribution varies depending on the subpopulation of interest, and relying solely on store turnover may significantly underestimate community-level dietary intake (PUBMED:17073226). Additionally, the method does not account for the influence of store managers, who have been shown to be a greater determinant of nutrient density than the community itself, indicating that the food supply can be heavily influenced by the management of the store (PUBMED:8799099). Moreover, the store turnover method does not capture the consumption of traditional foods, which are reported to be available year-round and frequently consumed, playing a role in alleviating food insecurity (PUBMED:28370899). This suggests that a more comprehensive approach is needed to assess dietary intake that includes traditional food sources. Furthermore, the method may not reflect the broader changes to the general Australian food supply, which has seen an increase in the supply and intake of discretionary foods high in saturated fat, added sugar, and salt (PUBMED:26260761). This indicates that the store turnover method may not capture the full extent of dietary changes and the nutrition transition occurring in these communities. In light of these findings, while the store turnover method can still provide some insights, it is clear that a simplified and more comprehensive version of the method is needed. Such a method should be able to be widely applied by community people and health practitioners to initiate and monitor interventions to improve diet quality, taking into account the contribution of other food sources, including traditional foods, and the influence of store managers (PUBMED:17073226). 
Additionally, it should reflect the broader changes in the food supply and the need for systemic reform to improve food security and diet-related health (PUBMED:33153133).
Instruction: Triaging herpes zoster ophthalmicus patients in the emergency department: do all patients require referral? Abstracts: abstract_id: PUBMED:21175516 Triaging herpes zoster ophthalmicus patients in the emergency department: do all patients require referral? Objectives: The objective was to assess the predictive value of clinical signs and symptoms of herpes zoster ophthalmicus (HZO) for development of moderate to severe eye disease. Methods: This was a prospective cohort multicenter study of 54 patients referred to the ophthalmology service after presenting to the emergency department (ED) or primary care clinic with a zosteriform rash of less than 10 days' duration. Upon referral to ophthalmology, easily assessable clinical signs and symptoms were documented. A complete ocular exam was then performed. Patients were followed for 2 months. Results: Twenty-three patients (43%) developed moderate to severe disease as defined by corneal or intraocular involvement. Eye redness and rash in the supratrochlear nerve distribution had a statistically significant association with clinically relevant eye disease. All 23 patients who developed moderate to severe eye disease presented with a red eye. Hutchinson's sign (nasociliary nerve involvement) was not predictive of clinically relevant eye disease. Conclusions: Eye redness was 100% sensitive for predicting moderate to severe eye disease in this sample of patients and should necessitate immediate referral for ophthalmologic assessment. Patients lacking eye redness, even with a positive Hutchinson's sign, may not require immediate specialist consultation. All patients not being referred require careful instructions to seek further care should they develop any concerning eye symptoms such as redness, pain, photophobia, or visual disturbance. abstract_id: PUBMED:18534294 Ophthalmic diagnoses in the ED: herpes zoster ophthalmicus. 
The epidemiology, pathophysiology, and clinical presentation of herpes zoster ophthalmicus in the emergency department are discussed with an emphasis on the identification of the numerous potential ocular complications. Emergency physicians need to be able to recognize the clinical features of herpes zoster ophthalmicus and initiate appropriate therapy and referral. abstract_id: PUBMED:24945738 Pattern of Acute Retinal Necrosis in a Referral Center in Tunisia, North Africa. Purpose: To describe the pattern of acute retinal necrosis (ARN) syndrome in a referral center in Tunisia. Methods: Retrospective review of the charts of 12 patients (12 eyes). Results: Necrotizing retinitis involved >50% of the retina in 33.3% of eyes. Polymerase chain reaction on aqueous humor samples identified herpes simplex virus-1 in 4 eyes (33.3%) and varicella zoster virus in 3 eyes (25%), and was negative in 5 eyes (41.7%). All patients received intravenous acyclovir, followed by oral antivirals. Follow-up ranged from 6 to 16 months. Retinal detachment (RD) occurred in 3 eyes (25%), but no patient developed bilateral ARN. Final visual acuity was 20/200 or worse in 7 eyes (58.3%). Delay in diagnosis (p = 0.015), macular involvement (p = 0.045), development of RD (p = 0.018), and 25-50% retinal involvement (p = 0.045) were associated with a worse visual outcome. Conclusion: ARN carries a poor visual prognosis in Tunisia.
This may result in a delay in starting appropriate therapy, thereby extending the patient's illness. This review article aims to help practitioners understand common causes of headaches and facial pains, approach such cases in the eye OPD, and differentiate them from similar ocular conditions in order to impart appropriate treatment or referral. abstract_id: PUBMED:24763121 Clinical features of ocular herpetic infection in an Italian referral center. Purpose: The aim of this study was to describe clinical manifestations of herpetic ocular infection caused by herpes simplex virus (HSV), varicella zoster virus (VZV), and cytomegalovirus (CMV) in a referral center in Northern Italy. Methods: This retrospective study included 241 patients with herpetic ocular infection referred to the tertiary-care Ocular Immunology and Uveitis Service at the San Raffaele Scientific Institute in Milan, from January 2006 to August 2013. The main clinical parameters evaluated were etiology, clinical features, ocular complications, and recurrences of the infection. Results: Two hundred forty-one patients (144 female and 97 male) were followed for a mean time of 24.9 ± 18.2 months (range, 12-72). One hundred eighty-nine (78.4%) patients had HSV, 45 (18.7%) had VZV, and 7 (2.9%) had CMV infection. In the HSV and VZV groups, the most frequent manifestation was keratitis (41.3% and 31.1%, respectively), followed by anterior uveitis (33.3% and 28.9%, respectively). The most common CMV presentation was retinitis (71.4%). The main complications observed were glaucoma (38.1% in the HSV group, 40% in the VZV group, and 28.6% in the CMV group) and cataract (27.5% in the HSV group, 26.7% in the VZV group, and 28.6% in the CMV group), whereas retinal detachment frequently occurred in patients with retinitis (50%, 42.9%, and 40% among HSV, VZV, and CMV patients, respectively). Recurrences were observed in 65.1%, 51.1%, and 28.6% of patients with HSV, VZV, and CMV, respectively.
Conclusions: Manifestations of herpetic ocular disease in our patients are comparable with other published series. However, the rates of ocular complications and recurrences during follow-up were higher compared with other series. abstract_id: PUBMED:31573356 Herpetic Anterior Uveitis in a Chinese Referral Center: Clinical Manifestations and Laboratory Test Results. Purpose: To describe and compare the clinical manifestations and laboratory test results of herpetic anterior uveitis (HAU) caused by herpes simplex virus (HSV) and varicella-zoster virus (VZV). Methods: A retrospective, observational study on patients diagnosed with HAU. Etiology, clinical features, ocular complications, and recurrences of the infection were evaluated as the main clinical parameters. The aqueous interleukin-8 (IL-8) level was also measured to assess intraocular inflammation. Results: Thirty-two eyes (32 patients) were involved. Among all involved cases, 24 had VZV-AU and 8 had HSV-AU. Common clinical features of HAU included the presence of KPs (90.6%), distorted pupil (83.3%), iris atrophy (71.9%) and corneal edema (50%). The intraocular fluid analysis showed a higher viral load and IL-8 level in VZV-AU. Conclusions: Compared with HSV-AU, intraocular inflammation was more severe in VZV-AU. Intraocular fluid analysis was valuable for etiological diagnosis and evaluation of disease severity. abstract_id: PUBMED:30148265 Correlating CD4 count with mucocutaneous manifestations in HIV-positive patients: A prospective study. Background: Mucocutaneous findings may be the presenting symptoms in HIV-afflicted individuals. A multitude of mucocutaneous diseases also occurs during the course of the ailment, with some conditions being classed as disease defining. They include infectious diseases and noninfective inflammatory and neoplastic dermatoses. With a progressive fall in CD4 count, there is a change in the types of mucocutaneous lesions encountered.
Aim: This study aims to statistically correlate CD4 counts with mucocutaneous manifestations in 100 HIV-positive patients. Materials And Methods: A total of 100 HIV-positive patients with skin and mucous membrane manifestations were selected serially. A complete history was taken, clinical examination was done, and the CD4 count was noted. Patients were divided into four groups (Groups I, II, III, IV) with different ranges of CD4 values, namely <50, 50-200, 201-500, and >500, respectively. Results: The distribution of the study population across CD4 ranges showed that the majority (47%) had a CD4 count between 201 and 500, and 29% had a CD4 count between 50 and 200 cells. Almost 21% of the patients had a count >500 cells and 3% had a cell count <50. The majority of infectious and non-infectious dermatoses were common at CD4 counts between 201-500 (Group III) and 50-200 (Group II). In the study groups, 52 cases (52%) were on antiretroviral therapy (ART), and the remaining 48 cases (48%) were not on ART at the time of diagnosis of mucocutaneous manifestations. Out of 48 ART-naïve cases, 23 patients were screened and newly diagnosed at the outpatient department (OPD) based on the mucocutaneous manifestations. Most of the patients with multiple mucocutaneous conditions had a CD4 count <200, whereas a single manifestation was seen predominantly at a CD4 count >200. Conclusion: A statistically significant association with the CD4 count was seen in herpes zoster ophthalmicus, genital wart, genital herpes, vaginal discharge syndrome, scabies, pyoderma, dermatophytosis, Hansen's disease, herpetic gingivostomatitis, seborrhoeic dermatitis, lichen planus, and drug reactions. These dermatoses may indicate worsening of immune status and the need for regular monitoring with periodic CD4 counting.
Occurrence of dermatoses such as photosensitive eczema, drug reaction, lichen planus, Type I lepra reaction, and herpes zoster ophthalmicus in patients on ART may be due to IRIS. To avoid the more frequent occurrence of infectious dermatoses and to reduce the development of IRIS with ART, all HIV-positive cases may be started on ART at higher CD4 count. Screening for HIV infection is suggested whenever the following conditions are seen: persistent oral candidiasis, atypical manifestations of zoster, herpes zoster ophthalmicus, herpetic gingivostomatitis and MC in adults, exaggerated IBA, and extensive seborrhoeic dermatitis. abstract_id: PUBMED:26319339 Herpes Zoster Ophthalmicus. Herpes zoster is a commonly encountered disorder. It is estimated that there are approximately 1 million new cases of herpes zoster in the United States annually, with an incidence of 3.2 per 1000 person-years. Patients with HIV have the greatest risk of developing herpes zoster ophthalmicus compared with the general population. Other risk factors include advancing age, use of immunosuppressive medications, and primary infection in infancy or in utero. Vaccination against the virus is a primary prevention modality. Primary treatments include antivirals, analgesics, and anticonvulsants. Management may require surgical intervention and comanagement with pain specialists, psychiatrists, and infectious disease specialists. abstract_id: PUBMED:25532996 A prospective study of the clinical characteristics of patients with herpes simplex and varicella zoster keratitis, presenting to a New Zealand emergency eye clinic. Purpose: To identify the demographic and clinical associations of patients presenting with herpetic keratitis in New Zealand compared with presentations of presumed microbial keratitis. 
Methods: A prospective, 6-month, observational case series of presumed microbial keratitis (including marginal keratitis), specifically identifying all cases of herpes simplex and varicella zoster keratitis attending an emergency eye clinic, was conducted. Main outcome measures included demographics, presenting signs and symptoms, and medical, ocular, and drug history. Results: A total of 140 cases of herpetic keratitis were identified, comprising 125 cases of herpes simplex virus (89%) and 15 cases of varicella zoster virus (11%). Herpes simplex keratitis was associated with a history of keratitis (n = 58, phi = 0.24, P < 0.0001), Maori ethnicity (n = 27, phi = 0.26, P < 0.0001), respiratory disease (n = 18, phi = 0.14, P = 0.009), corticosteroid use (n = 23, phi = 0.22, P < 0.0001), and cardiovascular disease (CVD) (n = 10, phi = 0.11, P = 0.03). An inverse association was observed between herpes simplex keratitis and ocular surface disease (n = 8, phi = -0.16, P < 0.002) and Asian ethnicity (n = 4, phi = -0.11, P = 0.04). Varicella zoster keratitis was associated with CVD (n = 3, phi = 0.15, P = 0.03). Conclusions: Patients with asthma, CVD, long-term use of corticosteroid inhalers or creams, or of Maori ethnicity need particular assessment to investigate herpes simplex as a possible cause of keratitis. Clinicians prescribing inhaled corticosteroids or topical corticosteroid creams should be aware of the possible association with herpes simplex keratitis.
According to a prospective cohort multicenter study (PUBMED:21175516), eye redness was found to be 100% sensitive for predicting moderate to severe eye disease in patients with HZO. Therefore, patients presenting with eye redness should be referred immediately for ophthalmologic assessment. However, patients without eye redness, even those with a positive Hutchinson's sign, may not require immediate specialist consultation. It is crucial, though, that all patients not being referred are given careful instructions to seek further care should they develop any concerning eye symptoms such as redness, pain, photophobia, or visual disturbance. The importance of recognizing the clinical features of HZO and initiating appropriate therapy and referral is emphasized in another abstract (PUBMED:18534294), which discusses the potential ocular complications of HZO. This suggests that while not all patients may require immediate referral, emergency physicians should be able to identify those who do based on specific clinical presentations. In summary, triaging of HZO patients in the emergency department should be based on the presence of clinical signs such as eye redness, which necessitates immediate referral. Patients without such signs should be educated on symptoms that would warrant a subsequent consultation with an ophthalmologist.
Instruction: Narrow band UVB: is it effective and safe for paediatric psoriasis and atopic dermatitis? Abstracts: abstract_id: PUBMED:26977048 Prospective Investigation of 25(OH)D3 Serum Concentration Following UVB Narrow Band Phototherapy in Patients with Psoriasis and Atopic Dermatitis. Vitamin D deficiency represents a major health issue. It is endemic worldwide and is associated with a broad variety of severe diseases. The skin is a key tissue for the human body's vitamin D endocrine system. It represents a target tissue for biologically active vitamin D metabolites. Approximately 90% of the human body's requirements of vitamin D have to be synthesised in the skin by the action of UVB radiation. However, individual factors that influence a person's cutaneous synthesis of vitamin D are still not well understood. In our present prospective study we investigated the effect of UVB narrow band (UVBnb, 311 nm) and PUVA phototherapy on 25(OH)D3 serum concentration in patients with psoriasis, atopic dermatitis and a few cases with other dermatoses (n=41). We found that two weeks of UVBnb treatment resulted in an increase in 25(OH)D3 serum concentration from 11.4 to 20.5 ng/ml (p<0.001), while in contrast PUVA treatment did not significantly alter vitamin D status. These findings question the hypothesis of a relevant vitamin D metabolizing effect of UVA. Psoriasis patients showed a trend towards a stronger increase in 25(OH)D3 serum levels following UVBnb compared to patients with atopic dermatitis. Patients with relatively low baseline serum 25(OH)D3 concentrations had a stronger increase in 25(OH)D3 concentrations compared to patients with relatively high 25(OH)D serum concentrations.
In general, patients with Fitzpatrick skin types I and II (median=14.3 ng/ml) had a higher baseline 25(OH)D3 serum concentration compared to patients with skin types III (median=11.2 ng/ml) or IV-V (median=12.3 ng/ml), although these differences were not statistically significant (p=0.106). Baseline 25(OH)D3 serum concentrations were correlated with the presence of genetic variants (SNPs of VDR, CYP2R1, VDBP/GC) that influence vitamin D status, and with other individual factors such as body mass index, age and gender. We also investigated the effect of phototherapy on blood pressure and a variety of laboratory parameters such as CRP, HbA1c, LDL, HDL, triglycerides and cholesterol. In conclusion, our pilot study shows that UVBnb phototherapy efficiently increases 25(OH)D3 serum concentration and reports interesting preliminary findings that have to be re-evaluated in larger follow-up studies. abstract_id: PUBMED:11438134 Narrow-band ultraviolet B and broad-band ultraviolet A phototherapy in adult atopic eczema: a randomised controlled trial. Background: Narrow-band ultraviolet B (UVB) is an effective treatment for psoriasis, and open studies suggest that this phototherapy might improve atopic eczema. We did a randomised controlled trial to compare narrow-band UVB, UVA, and visible light phototherapy as second-line, adjunctive treatments in adult patients with moderate to severe atopic eczema. Methods: Phototherapy was administered twice a week for 12 weeks. 26 patients were randomly assigned narrow-band UVB, 24 were assigned UVA, and 23 visible fluorescent light. The primary endpoints were change in total disease activity (sum of scores at six body sites) and change in extent of disease after 24 treatments compared with baseline. Data were analysed by the method of summary measures. Findings: 13 patients withdrew or were excluded from analysis.
Mean reductions in total disease activity over 24 treatments in patients who received narrow-band UVB and UVA, respectively, were 9.4 points (95% CI 3.6 to 15.2) and 4.4 points (-1.0 to 9.8) more than in patients who received visible light. Mean reductions in extent of disease after 24 treatments with narrow-band UVB and UVA were 6.7% (1.5 to 11.9) and -1.0% (-5.3 to 3.3) compared with visible light. A small proportion of patients developed erythema after phototherapy or had a flare in their eczema sufficient to withdraw from treatment. Interpretation: Narrow-band UVB is an effective adjunctive treatment for moderate to severe atopic eczema, and the treatment is well tolerated by most patients. abstract_id: PUBMED:26369540 The Safety and Efficacy of Narrow Band Ultraviolet B Treatment in Dermatology: A Review. Narrow-band ultraviolet B (NBUVB) phototherapy is an important treatment modality in dermatology. The most common dermatological indications for NBUVB include psoriasis, atopic dermatitis and vitiligo; however, it has been found to be an effective and well-tolerated treatment option in various other dermatoses. The efficacy of NBUVB phototherapy compares favorably with other available photo(chemo)therapy options and its efficacy is further augmented by a number of topical and systemic adjuncts. The long-term safety of NBUVB phototherapy remains to be fully elucidated; however, available data now suggest that it is safe and well-tolerated. The objective of this review was to summarize the current understanding of the safety and efficacy of NBUVB phototherapy in dermatology. abstract_id: PUBMED:20726938 Narrow band UVB: is it effective and safe for paediatric psoriasis and atopic dermatitis? Background: Phototherapy has a time-honoured place in the treatment of variety of skin diseases in adults. The use of this modality in children is limited mainly due to concerns about long-term carcinogenic potential. 
Only a few clinical trials have been performed on the efficacy and safety of phototherapy in children. Objectives: To determine the efficacy and safety of NB-UVB phototherapy in children with atopic dermatitis (AD) and psoriasis. Methods: This is a retrospective review of the treatment outcomes of 129 children with psoriasis and AD, who were treated with NB-UVB between 1998 and 2006 at our institute. Results: Fifty per cent of the psoriatic patients and 25% of patients with AD achieved clearance by the end of the treatment. NB-UVB phototherapy was well-tolerated, with no serious adverse effects except one doubtful case of melanoma in situ. Conclusions: NB-UVB may be considered as a viable therapeutic option in children with psoriasis and AD. Children who are treated by phototherapy should remain under annual dermatologic observation. To determine true carcinogenic risk of UV therapy, longer follow-up is essential. abstract_id: PUBMED:12925186 Narrow-band ultraviolet B treatment for vitiligo, pruritus, and inflammatory dermatoses. Background: Narrow-band ultraviolet B (NB-UVB) therapy has been used successfully for the treatment of inflammatory and pigmentary skin disorders including atopic dermatitis, psoriasis, mycosis fungoides, polymorphous light eruption, and vitiligo. Methods: This is a retrospective review of the treatment outcomes of 117 consecutive patients with vitiligo, pruritus, and other inflammatory dermatoses, excluding those with psoriasis and CTCL, who were treated with NB-UVB between 1998 and 2001 at our institution. Results: Approximately 80% of all patients showed improvement in their condition. NB-UVB phototherapy was well tolerated, with no serious adverse effects. In patients with vitiligo, 6.4% had an abnormal thyroid-stimulating hormone level and 6.5% had anemia. Conclusion: NB-UVB may be considered as a viable therapeutic option in the treatment of vitiligo, pruritus, and other inflammatory dermatoses. 
Long-term adverse effects and cost-benefit analysis of NB-UVB therapy compared to other treatment modalities remain to be determined. abstract_id: PUBMED:38445760 Paediatric indications and dosing guidance for advanced targeted treatments in Australia. As with adults, paediatric patients may benefit from a number of advanced targeted therapies for inflammatory skin disease. This brief report aims to be an accessible reference tool with respect to regulatory approval and reimbursement of these treatments within Australia. abstract_id: PUBMED:8284319 Comparative potency of broad-band and narrow-band phototherapy sources to induce edema, sunburn cells and urocanic acid photoisomerization in hairless mouse skin. The Philips TL01 narrow-band (311-313 nm) fluorescent lamp provides effective phototherapy for psoriasis and atopic eczema while emitting less erythemogenic radiation than conventional broad-band (e.g. Philips TL12; 270-350 nm) sources. We studied the potency of TL01 and TL12 radiation to induce edema and sunburn cells (SBC) and to photoisomerize naturally occurring trans-urocanic acid (UCA) to cis-UCA in hairless mouse skin. Cis-UCA has immunosuppressive properties and is a putative mediator of UV-induced suppression of immune responses. For each source, there was UV dose dependence for all three responses. Within the dose ranges used, the potency ratio of TL12:TL01 radiation to induce equivalent edema and SBC was about 6:1. However, the potency ratio to induce cis-UCA was less than 2.3:1. Therefore, at a given level of edema or SBC induction, TL01 was more efficient than TL12 at UCA photoisomerization. The TL01 induction of immunomodulating cis-UCA, while causing minimal skin injury, may relate to the therapeutic efficacy of this source in skin conditions with an immunological component. abstract_id: PUBMED:35538716 Evaluation of Biological Effects and Transcriptome Changes Induced by LED-Based Narrow-Band UVB Phototherapy. 
Ultraviolet (UV), particularly UVB, is widely used in the treatment of skin diseases including psoriasis, atopic dermatitis, vitiligo, mycosis fungoides and pruritus. Recently, there has been a trend of replacing broad-band UVB (BB-UVB) units with narrow-band UVB (NB-UVB), as studies have demonstrated that NB-UVB is more efficacious in the treatment of psoriasis. The purpose of this study is to evaluate the biological effects and transcriptome changes induced by light-emitting diode-based NB-UVB (NB-UVB LED) phototherapy. Cell viability and cell migration ability were significantly decreased posttreatment, while apoptosis and ROS levels were remarkably increased. NB-UVB-induced S phase arrest was observed 12 h postirradiation. Bioinformatics analysis of transcriptome sequencing data revealed that NB-UVB LED irradiation induced dose-dependent changes in multiple key signaling pathways, such as PI3K and cytoskeletal-related pathways. Depolymerization of the cytoskeleton induced by NB-UVB was observed 24 h posttreatment. In addition, the expression levels of cytoskeleton-related proteins FN1, ITGB4, ITGA1, RAC2 and DOCK1 decreased significantly 12 h after irradiation. Our results indicated that NB-UVB LED may serve as a novel option for the development of NB-UVB phototherapy devices. abstract_id: PUBMED:21762976 Reversal of atopic dermatitis with narrow-band UVB phototherapy and biomarkers for therapeutic response. Background: Atopic dermatitis (AD) is a common inflammatory skin disease exhibiting a predominantly T(H)2/"T22" immune activation and a defective epidermal barrier. Narrow-band UVB (NB-UVB) is considered an efficient treatment for moderate-to-severe AD. In patients with psoriasis, NB-UVB has been found to suppress T(H)1/T(H)17 polarization, with subsequent reversal of epidermal hyperplasia. The immunomodulatory effects of this treatment are largely unknown in patients with AD.
Objective: We sought to evaluate the effects of NB-UVB on immune and barrier abnormalities in patients with AD, aiming to establish reversibility of disease and biomarkers of therapeutic response. Methods: Twelve patients with moderate-to-severe chronic AD received NB-UVB phototherapy 3 times weekly for up to 12 weeks. Lesional and nonlesional skin biopsy specimens were obtained before and after treatment and evaluated by using gene expression and immunohistochemistry studies. Results: All patients had at least a 50% reduction in SCORAD index scores with NB-UVB phototherapy. The T(H)2, T22, and T(H)1 immune pathways were suppressed, and measures of epidermal hyperplasia and differentiation normalized. The reversal of disease activity was associated with elimination of inflammatory leukocytes and T(H)2/T22-associated cytokines and chemokines and normalized expression of barrier proteins. Conclusions: Our study shows that resolution of clinical disease in patients with chronic AD is accompanied by reversal of both the epidermal defects and the underlying immune activation. We have defined a set of biomarkers of disease response that associate resolved T(H)2 and T22 inflammation in patients with chronic AD with reversal of barrier pathology. By showing reversal of the AD epidermal phenotype with a broad immune-targeted therapy, our data argue against a fixed genetic phenotype. abstract_id: PUBMED:16232274 New and established indications for phototherapy with narrowband UVB. Phototherapy with ultraviolet (UV) irradiation of wavelengths between 280 and 320 nm (UV-B) is a safe and effective treatment for a variety of inflammatory skin diseases. In addition to standard broad band UVB, narrow band phototherapy with fluorescent bulbs emitting near monochromatic UV between 310-315 nm has become an important treatment for diseases such as psoriasis, atopic dermatitis or vitiligo.
Other diseases also respond favorably to narrow band UV-B phototherapy, and the number of potential indications for such phototherapy is continuously growing. The differential effects of narrow band UV-B phototherapy in comparison to other UV phototherapies, as well as new and established indications for this treatment modality, are reviewed. Answer: Narrow band UVB (NB-UVB) phototherapy is considered an effective and safe treatment modality for various dermatological conditions, including psoriasis and atopic dermatitis, in both adults and children. A retrospective review of the treatment outcomes of 129 children with psoriasis and atopic dermatitis (AD) treated with NB-UVB between 1998 and 2006 indicated that 50% of the psoriatic patients and 25% of patients with AD achieved clearance by the end of the treatment. NB-UVB phototherapy was well-tolerated, with no serious adverse effects except for one doubtful case of melanoma in situ. The study concluded that NB-UVB could be considered a viable therapeutic option in children with psoriasis and AD, although children treated with phototherapy should remain under annual dermatologic observation to determine the true carcinogenic risk of UV therapy, which requires longer follow-up (PUBMED:20726938). Furthermore, the safety and efficacy of NB-UVB phototherapy have been reviewed, summarizing that it is an important treatment modality in dermatology, with common indications including psoriasis, atopic dermatitis, and vitiligo. The review found that NB-UVB phototherapy is effective and well-tolerated, and while the long-term safety remains to be fully elucidated, available data suggest that it is safe (PUBMED:26369540). In conclusion, NB-UVB phototherapy is recognized as an effective and generally safe treatment for pediatric psoriasis and atopic dermatitis, with a good tolerance profile. However, ongoing monitoring for potential long-term adverse effects is recommended.
Instruction: Iron deficiency in infancy: is an immigrant more at risk? Abstracts: abstract_id: PUBMED:7572938 Transferrin saturation and risk of cancer. The authors examined the hypothesis that relatively high levels of transferrin saturation increase the risk of cancer. They studied a cohort of prepaid health plan members whose transferrin saturation levels were measured during the period 1969-1971 and who were followed for cancer through 1990. After the exclusion of 10 percent of the subjects who received treatment for one or more of six chronic conditions or who were pregnant when the measurement was made and persons who contributed less than 5 years of follow-up, the authors were left with 38,538 persons who were followed for an average period of 17.7 years. In women, a positive association was observed between transferrin saturation and risk of stomach carcinoma (≥34.5% compared with ≤20.3%: relative risk (RR) = 3.5, 95% confidence interval (CI) 0.98-12). In men, transferrin saturation was inversely associated with risk of colon and rectal carcinoma (≥40.7% compared with ≤26.0%: colon, RR = 0.62, 95% CI 0.35-1.1; rectum, RR = 0.30, 95% CI 0.08-1.1) and with non-Hodgkin's lymphoma (32.1-40.6% compared with ≤26.0%: RR = 0.31, 95% CI 0.11-0.88; no cases observed with transferrin saturation ≥40.7%). The authors did not find evidence that the risk of epithelial cancer (all sites combined) was related to transferrin saturation level or to iron deficiency (≤15%) or overload (≥60%).
Therefore, the aim of the present study was to externally validate the existing Association of Cardiothoracic Anesthetists perioperative risk of blood transfusion (ACTA-PORT) score in a population ≥70 years of age scheduled for cardiac surgery. Furthermore, the study authors investigated the additional prognostic value of individual frailty variables to this transfusion risk score. Design: A retrospective analysis. Setting: At a tertiary-care hospital. Participants: Five hundred seven patients aged ≥70 years undergoing elective cardiac surgery from July 2015 to August 2017. Interventions: None. Measurements And Main Results: The primary outcome was the administration of a perioperative blood transfusion. Frailty domains were assessed in a preanesthesia geriatric assessment, and a priori selection of biomarkers derived from blood was determined. The original ACTA-PORT score resulted in a c-statistic of 0.78 (95% confidence interval 0.74-0.82), with moderate calibration in predicting perioperative allogeneic transfusion in older patients undergoing cardiac surgery. Model updating, using the closed testing procedure, resulted in model revision with a higher discriminatory performance (c-statistic of 0.83, 95% confidence interval 0.79-0.86) and corrected calibration slope. Iron deficiency, impaired nutritional status, and physical impairment were associated with perioperative transfusions. The addition of individual frailty variables to the updated ACTA-PORT model did not result in improved predictive performance. Conclusions: External validation of the original ACTA-PORT score showed good discrimination and moderate calibration in older patients at risk of frailty undergoing cardiac surgery. Updating the original ACTA-PORT improved the predictive performance. Careful evaluation of additional frailty domains did not add prognostic value to the ACTA-PORT score. 
abstract_id: PUBMED:33725737 Perioperative Anesthesiologic Management: Risk Assessment and Preoperative Improvement of Patient Conditions. With patient safety being anesthesiologists' top priority, the focus of preoperative assessment must be to reduce perioperative morbidity and mortality of each patient entrusted to us. Perioperative risk is multifactorial and depends on the extent of surgery and the preoperative condition of the patient. The three main causes of unexpected perioperative death are cardiac arrest, hypoxemia and acute bleeding. Therefore, cardiac and pulmonary risk assessment should cover pre-existing conditions, patient's functional capacity and risk factors associated with the surgical procedure. Specific assessment tools have been developed, are easily accessible and have proven effective in everyday clinical practice. Regarding the risk of bleeding, taking a detailed patient history (including medication) seems to be more suitable to detect mild bleeding disorders than laboratory screening. Functional capacity, defined as the patient's ability to cope with everyday life, gains importance in preoperative risk assessment, as do further factors like deficiencies in nutrition, anaemia, physical capacity, the metabolic status or frailty in the elderly. Prehabilitation strategies reduce perioperative mortality and morbidity by improving functional capacity. These include preoperative nutrition supplementation, physical exercise, correction of iron deficiency and optimized treatment of hyperglycemia. A combination of thorough risk stratification and prehabilitation strategies can improve preoperative conditions and reduce complications in the postoperative period. abstract_id: PUBMED:35132153 Impact and interactions between risk factors on the iron status of at-risk neonates. Objective: Examine interactions between perinatal risk factors for congenital iron deficiency (ID) using two cohorts.
Study Design: Iron status in a composite 767-member cord blood cohort and a NICU cohort of 257 infants &lt; 33 weeks of gestation or small for gestational age (SGA). Risks for ID were examined. Cord ferritin levels &lt; 84 µg/L defined congenital ID. Serum ferritin &lt; 70 µg/L defined infantile ID at one month. Results: 31% of the cord cohort had congenital ID; risks summative (p &lt; 0.0015). 16% of the NICU cohort had infantile ID; risks not summative. However, 32% had ID if the ferritin threshold was 100 µg/L. Being both preterm (p &lt; 0.0001) and SGA (p &lt; 0.05) negatively impacted cord iron status. Maternal hypertension was a novel predictor of iron status (p = 0.023 in preterm cord; p &lt; 0.0025 in NICU). Conclusion: Summing risks in term and understanding compounding risks in preterm infants can improve screening and management of ID in at-risk infants. abstract_id: PUBMED:38327465 Reticulocyte hemoglobin content associated with the risk of iron deficiency anemia. Background/objective: Reticulocyte hemoglobin content (MCHr) was recognized as a rapid and reliable marker for investigating iron deficiency (ID). We hypothesized that MCHr was associated with the risk of iron deficiency anemia in adults. Methods: This is a dual-center case-control study. A total of 806 patients and healthy individuals were recruited from Ruijin Hospital and Xinhua Hospital affiliated to Shanghai Jiaotong University School of Medicine between January 2021 and December 2021. The participants were categorized into iron deficiency anemia (IDA) group (n = 302), non-IDA group (n = 366), and healthy control group (n = 138). According to the MCHr level, the participants were divided into two groups, i.e. normal MCHr (≥25 pg) and decreased MCHr (&lt;25 pg) group.
Multivariate logistic regression analysis and adjusted subgroup analysis were conducted to estimate the relative risk between MCHr and IDA, with confounding factors including age, sex, hemoglobin (Hb), mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH), mean corpuscular hemoglobin concentration (MCHC), hematocrit (HCT), serum iron (Fe), ferritin (Ferrit), and total iron binding capacity (TIBC). Results: Compared with the non-IDA group, the MCHr level in the IDA group was significantly lower. ROC curve analysis showed that MCHr had the largest area under the curve (AUC). After comprehensive adjustment for confounding factors, individuals with a normal MCHr level exhibited a decreased risk of IDA (OR = 0.68 [0.60, 0.77], P &lt; 0.01), while the risk of IDA was up to 5 times higher for those with decreased MCHr. Conclusion: Our findings supported the hypothesis that MCHr was associated with the risk of IDA in adults and could serve as an indicator of IDA severity. MCHr holds clinical value as an auxiliary diagnostic indicator, providing valuable insights into whether invasive examinations are warranted in the assessment of IDA. abstract_id: PUBMED:15162088 Risk factors for arterial ischemic stroke in childhood. Stroke affects up to 13 of 100,000 children, is more common in boys and African Americans, and is associated with considerable cognitive and psychiatric morbidity, as well as motor disability. Around half are hemorrhagic and half are ischemic. Underlying conditions include sickle cell disease, cardiac abnormalities, chromosomal abnormalities (eg, Down syndrome), and neurocutaneous conditions (eg, neurofibromatosis), but up to half the patients with ischemic stroke have no previously diagnosed condition. Although there is almost certainly an important genetic component to stroke risk, head trauma, infections, drugs and radiation appear to play an etiological role in some patients.
The majority of the patients with infarction in an arterial distribution have associated cerebrovascular disease. Vascular pathologies include carotid or vertebrobasilar dissection, intracranial vasculopathy affecting the middle and anterior cerebral arteries, which is often transient, and moyamoya. Intermediate risk factors may include hypertension, hypoxia, and poor nutrition leading, for example, to iron deficiency and hyperhomocysteinemia. Some chronic conditions may directly influence the child's behavior and stroke recurrence risk, although large cohorts and randomized controlled trials will be needed before strategies for modification can be evidence-based. abstract_id: PUBMED:3366489 Epidemiologic evidence of an association between body iron stores and risk of cancer. Biologic evidence suggests that high body iron stores could promote development of cancer. Because a previous study had shown an association between prescribed iron medication and lung cancer risk in men, we examined recent iron use as well as 2 additional indirect measures of body iron stores, anemia and the total iron-binding capacity (TIBC) of plasma, in relation to subsequent risk of cancer in a larger cohort of 174,507 persons. Women, but not men, who reported recent iron use had a lower risk of lung cancer than those who did not [RR = 0.60, 95% confidence limits (CL) 0.37, 0.97] after adjustment for age and cigarette smoking. Women who had used iron appeared to remain relatively iron-depleted. Risk for other cancers was slightly, but not significantly, lower in women who used iron. Anemia (hemoglobin less than 12 g) was also associated with lower risk of lung cancer in women (RR = 0.61, 95% CL 0.61, 0.98), but not in men. TIBC, which is inversely related to body iron stores, was inversely related to risk of lung cancer in women in a graded fashion (RR = 0.41, 95% CL 0.23, 0.73 comparing highest with lowest quartile). 
In men, a protective effect of higher TIBC against lung cancer was suggested, but did not reach statistical significance. These indirect measures of body iron stores appeared to reflect iron stores better in women than in men, probably because variability in iron stores is greater in women and iron deficiency more prevalent. A possible alternative explanation for our findings is incomplete adjustment for the confounding effects of cigarette smoking. This could apply to iron use and hemoglobin level, which were related to smoking, but not to TIBC, which was not. These data, which indicate lower risk of cancer in iron-depleted women, lend epidemiologic support to the hypothesis that high iron stores may increase cancer risk, at least for lung cancer. abstract_id: PUBMED:32729920 Association of Family Income and Risk of Food Insecurity With Iron Status in Young Children. Importance: Iron deficiency (ID) has the greatest prevalence in early childhood and has been associated with poor developmental outcomes. Previous research examining associations of income and food insecurity (FI) with ID is inconsistent. Objective: To examine the association of family income and family risk of FI with iron status in healthy young children attending primary care. Design, Setting, And Participants: This cross-sectional study included 1245 children aged 12 to 29 months who attended scheduled primary care supervision visits from 2008 to 2018 in Toronto, Canada, and the surrounding area. Exposures: Family income and risk of FI were collected from parent-reported questionnaires. Children whose parents provided an affirmative response to the 1-item FI screen on the Nutrition Screening Tool for Every Toddler or at least 1 item on the 2-item Hunger Vital Sign FI screening tool were categorized as having family risk of FI. Main Outcomes And Measures: Iron deficiency (serum ferritin level &lt;12 ng/mL) and ID anemia (IDA; serum ferritin level &lt;12 ng/mL and hemoglobin level &lt;11.0 g/dL).
All models were adjusted for age, sex, birth weight, body mass index z score, C-reactive protein level, maternal education, breastfeeding duration, bottle use, cow's milk intake, and formula feeding in the first year. Results: Of 1245 children (595 [47.8%] girls; median [interquartile range] age, 18.1 [13.3-24.0] months), 131 (10.5%) were from households with a family income of less than CAD $40 000 ($29 534), 77 (6.2%) were from families at risk of FI, 185 (14.9%) had ID, and 58 (5.3%) had IDA. The odds of children with a family income of less than CAD $40 000 having ID and IDA were 3 times higher than those of children in the highest family income group (ID: odds ratio [OR], 3.08; 95% CI, 1.66-5.72; P &lt; .001; IDA: OR, 3.28; 95% CI, 1.22-8.87; P = .02). Being in a family at risk of FI, compared with all other children, was not associated with ID or IDA (ID: OR, 0.43; 95% CI, 0.18-1.02; P = .06; IDA: OR, 0.16; 95% CI, 0.02-1.23; P = .08). Conclusions And Relevance: In this study, low family income was associated with increased risk of ID and IDA in young children. Risk of FI was not a risk factor for ID or IDA. These findings suggest that targeting income security may be more effective than targeting access to food to reduce health inequities in the prevention of iron deficiency. abstract_id: PUBMED:24819317 Gynecological care in young women: a high-risk period of life. Adolescence has been described as a period of life when emotions are heightened and regulatory controls are reduced, and this can result in an escalation in risk-taking. Importantly for younger females, risk behaviors associated with the onset of sexual activity, and alcohol and substance abuse may coincide with pathologies such as polycystic ovary syndrome (PCOS) and abnormal uterine bleeding, an iron-deficient diet (vegetarian or vegan) and a negative body image leading to eating disorders.
Girls transitioning through adolescence face a number of specific emotional and physical issues related to the onset of menarche and regular menstrual cycles. Menstruation combined with these risk behaviors and pathologies, and the rapid growth and development that is taking place, often results in numerous unwanted effects including iron deficiency. A low iron level is the most common cause of anemia in adolescent girls and can be detrimental to mood and cognition as well as physical well-being. In this article we review the impact of menarche, poor nutrition and some of the risk behaviors and pathologies that predispose females to challenges associated with adolescence, including anemia. We also examine factors that need to be taken into consideration during the initial, and follow-up, consultations with young women. Finally, we present some of the latest advice regarding nutrition and oral iron supplementation, particularly extended-release ferrous sulfate with mucoproteose, with a view to minimizing the development and risks of anemia in this vulnerable population. abstract_id: PUBMED:10815346 Prevalence and risk factors of iron deficiency in young mothers. Quality of life and achievements are impaired by unrecognised iron deficiency. The iron requirement of women during their child-bearing age is high and increases in pregnancy. The aim of this study was to determine the prevalence and risk factors for iron deficiency in young mothers under contemporary German life conditions. Between September 1997 and August 1998 the iron status of 507 mothers of one-year-old children was assessed. The data was derived from venous blood and questionnaires. Besides conventional methods, the concentration of soluble transferrin receptor was used as the leading indicator of iron status. 9.5% had cellular iron deficiency and 2.2% of all mothers had iron deficiency anemia.
In addition to absence of school education, non-German nationality, a high number of children and a vegetarian diet are risk factors for iron deficiency. In contrast, high alcohol intake and cigarette smoking are associated with a better iron status. Children of mothers with insufficient iron supply are also at higher risk of iron deficiency. Answer: Based on the provided abstracts, there is evidence to suggest that certain populations, including immigrants, may be at higher risk for iron deficiency. Specifically, the study mentioned in abstract PUBMED:10815346 found that non-German nationality, which can be indicative of immigrant status, is a risk factor for iron deficiency among young mothers in Germany. This suggests that immigrant mothers and potentially their children may be at increased risk for iron deficiency. Additionally, abstract PUBMED:32729920 highlights that low family income is associated with increased risk of iron deficiency and iron deficiency anemia in young children. While this study does not directly mention immigrant status, it is known that immigrants often face socioeconomic challenges, which could place them in lower income brackets and thus at higher risk for iron deficiency. Furthermore, abstract PUBMED:38327465 discusses the use of reticulocyte hemoglobin content (MCHr) as a marker for investigating iron deficiency, which could be a useful tool for screening at-risk populations, including immigrants. While the other abstracts do not directly address the risk of iron deficiency in immigrants, they do discuss various risk factors and assessment tools for iron deficiency that could be relevant to immigrant populations. For instance, abstract PUBMED:33725737 emphasizes the importance of preoperative assessment and prehabilitation strategies, including correction of iron deficiency, which could be applicable to immigrant populations undergoing surgery.
In summary, while the abstracts do not provide a direct answer to the question of whether immigrants are more at risk for iron deficiency in infancy, they do suggest that factors associated with immigrant status, such as lower income and certain nationalities, are associated with an increased risk of iron deficiency. Therefore, it is reasonable to infer that immigrants may be more at risk for iron deficiency in infancy.